Explainability of AI in Operation – Legal Aspects


One of the main concerns of researchers and practitioners in the field of artificial intelligence is the explainability of AI systems. We want to keep up with AI's progress, but at the same time we want to verify whether AI's outcomes meet our expectations. We do not want to, and should not, settle for a general "now is better than before". The reasons for this are both systemic and practical. Without understanding how an AI works and without measuring its results, the AI may "get out of control", in the sense that the validity of its actions can no longer be checked against more or less expected and defined parameters; or it may be manipulated in ways hidden from the public eye. On the other hand, experts already signal that it may be impossible to "oversee" how various AI systems work and make decisions.

In this context, the proposed European Union Regulation on Artificial Intelligence takes a compromise approach, which we will briefly present. We will refer to the EU Regulation on Artificial Intelligence as AIA (Artificial Intelligence Act). The high-risk artificial intelligence system referred to in AIA we will simply call AI (artificial intelligence).

What is Explainability?

According to Webster's 1913 Dictionary, "explainable" means capable of being explained or made plain to the understanding; capable of being interpreted. Explaining, in turn, means making something understandable, giving reasons or motives for it. According to the Cambridge Dictionary, the English "explain" means to make something clear or easy to understand by describing or giving information about it. I understand "explainability" as the possibility of understanding a given cause-and-effect sequence, i.e. a multi-stage implication.

Explainability can be considered an aspect of transparency, understood as knowing "that" something takes place (e.g. that an AI is used) and "how" it takes place, i.e. how the cognitive-decisional-executive process involving AI unfolds.

Getting a little ahead of the argument, we could approach AI explainability in two ways, distinguishing between rational explainability and empirical explainability.

The rational explainability of AI should ideally amount to the ability to understand the logical links between the successive activities of the system (understanding reasons and consequences).

For the empirical explainability of AI, on the other hand, we could settle for high efficiency of the AI and the possibility of optimising that efficiency. Sceptics may counter empirical explainability with "correlation does not mean causation" and the issue of comparative scale. Nevertheless, the explainability of AI can slide towards the "and yet it moves" direction, as shown in the movie "Minority Report".

Explainability in the GDPR

While the GDPR does not directly use the term "explainability", it is in the GDPR that we should look for the origin of, and guidance on, the understanding of this concept.

Article 5(1)(a) GDPR uses the wording of processing "fairly and in a transparent manner in relation to the data subject". This is the most important principle of the GDPR – the principle of lawfulness, fairness and transparency.

Article 22 GDPR addresses explainability indirectly. This provision deals with automated decision-making that produces legal or similarly significant effects. According to Article 22 GDPR, if a decision is taken in relation to a person by automated means (e.g. by an AI), that person should at least have the right to obtain "human intervention on the part of the controller, to express his or her point of view and to contest the decision". As we wrote in the "Guide to the GDPR", part of the rights of the data subject against whom an automated decision has been applied is the right to an explanation of the reasons for that decision[1].

Finally, pursuant to Article 13(2)(f) of the GDPR and the twin Article 14(2)(g) of the GDPR, the controller should provide the person with information on the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) GDPR and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

From these provisions, we can infer the obligation to explain to the data subject the reasons for a specific automated decision (as well as inform them that the decision is being made automatically) – where the decisions were to have legal or similar effects.

Does AI Need To Be Explainable?

The concept and issue of the explainability of AI is debated by experts. So does AI need to be explainable? It appears that the AIA does not contain an explicitly expressed requirement of AI explainability. The AIA uses the term "explainable" only once, in recital 38, in the context of AI systems used for law enforcement, and in conjunction with the term "transparent":

Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented.

The concept of "transparency" is more commonly used in the AI Act. The term appears fifteen times: nine instances in the recitals, seven of which are in the context we are discussing, and six in the text of the provisions – five, if we exclude the title of Article 13 AIA, "Transparency and provision of information to users".

It is important to distinguish between explainability and transparency as understood in the AI Act, and even the concept of “transparency” has its intricacies, which will be discussed further.

Appropriate Transparency

The need for a certain level of transparency is declared in recital 47 AIA:

To address the [1] opacity that may make certain AI systems incomprehensible to or too complex for natural persons, [2] a certain degree of [3] transparency should be required for high-risk AI systems. Users should [4] be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by [5] relevant documentation and [6] instructions of use and include [7] concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.

From the wording of recital 47, we can already see that the AIA strongly relativises the notion of transparency (not even explainability) of AI. An AI's transparency need only be of "a certain degree". The immediate question, of course, is who will assess that "certain" degree of transparency (expanded in Article 13 AIA into "sufficient" transparency of "an appropriate type and degree"). SPOILER ALERT – at the stage of market admission, it will be a conformity assessment body.

Article 13 AIA – Transparency and Provision of Information to Users

Transparency of an AI is directly addressed in Article 13 AIA. According to Article 13(1) AIA,

High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.

Article 13(1) AIA imposes, first and foremost on manufacturers, a general obligation to design and develop AI systems in a way that ensures transparency of operation that is sufficient and of an appropriate type and degree. Each word in the above quote carries a significant semantic load. Note that Article 13(1) AIA expects "sufficient" transparency of an "appropriate" "type" and "degree". These are softening formulations.

Article 13 AIA explicitly mentions only one means of ensuring transparency of an AI, namely the user manual. The AI manual should contain concise, complete, correct and clear information[2], including:

  1. the identity of the provider;
  2. the characteristics, capabilities and limitations of the AI's performance, including:
     a. its intended purpose;
     b. the level of accuracy, robustness and cybersecurity […] against which the AI has been tested and validated, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
     c. any known or foreseeable circumstance, related to the use of the AI in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to health and safety or fundamental rights;
     d. its performance as regards the persons or groups of persons on which the system is intended to be used;
     e. where applicable, input specifications or any other relevant information on the training, validation and test data sets used, taking into account the intended purpose of the AI system;
  3. the changes to the AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;
  4. the human oversight measures referred to in Article 14 AIA, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users;
  5. the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.

Taking into account the world’s achievements so far in creating and applying user manuals for various types of solutions, including in the field of creating user and technical documentation for IT systems, AI manuals will certainly be interesting.

Most of the AI transparency requirements, however, derive from the reference in Article 13(1) AIA to the provisions of Chapter 3 of Title III of the AIA – HIGH-RISK AI SYSTEMS – namely Articles 16 to 29 AIA.

Articles 16 to 28 AIA describe the requirements for AI providers. From the perspective of AI transparency in the broadest sense, the relevant requirements are: producing technical documentation, keeping automatic records of events, applying a risk management system, applying a quality management system, subjecting the AI to a conformity assessment procedure, responding to non-conformities, and informing state authorities about non-conformities. These requirements do not explicitly ensure transparency or explainability of AI performance, but they should allow the correctness of the AI's performance to be verified empirically and the causes of its malfunctions to be determined. They may therefore contribute to that "certain" degree of transparency.

In passing, it is worth noting Article 16(j) AIA, which requires AI providers, upon request by national authorities, to demonstrate the compliance of a high-risk AI system with the requirements laid down in Chapter 2 of Title III of the AIA[3]. This provision appears to be the watered-down equivalent of the accountability principle in Article 5(2) GDPR.

For AI users, in addition to the requirement not to make significant changes to the AI under Article 28(1)(c) AIA (which would transform the user into a provider from the AIA's perspective), the requirements of Article 29 AIA apply. According to this provision, an AI user is obliged to: (1) use the AI in accordance with the instructions; (2) ensure the adequacy of input data; (3) monitor the use of the AI, on the basis of the user manual, with regard to risk; (4) store logs; and (5) use information from the user manual to carry out a data protection impact assessment within the meaning of Article 35 GDPR.

From the AI user's perspective, the Article 29 AIA requirements relevant to the transparency of AI use boil down to an understandable and comprehensible manual[4], use of that manual, data quality assurance (where possible), and log retention. But that is not all.

Where is Explainability?

As is, or at least should be, evident from the description of Article 13 AIA, which supposedly deals with transparency, that provision essentially reduces transparency to operating instructions and documentation. The basis for a certain degree of explainability can rather be found in Article 14 AIA – Human oversight. Article 14(1) AIA provides that

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

In accordance with Article 14(2) of the AIA, 

Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse[5], in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter

From the combination of these two provisions, two conclusions can be drawn. First, an AI user should be familiar with how the AI works – or rather, with what the results of its operation are – in order to be able to control the system. Second, the EU legislator is content with the aforementioned empirical explainability. These conclusions are confirmed by the functional description of human oversight measures contained in Article 14(4) AIA. Human oversight measures are intended to enable the person to whom oversight is assigned to:

  1. fully understand the capabilities and limitations of the high-risk AI system and to properly monitor its performance so that signs of anomalies, malfunctions and unexpected performance results can be detected and remedied as quickly as possible;
  2. remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
  3. be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;
  4. be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
  5. be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.

Since the means of oversight (either built into the AI's interfaces or described, presumably, in the user manual) should enable the human AI supervisor to fully understand the capabilities and limitations of the AI, to properly monitor its performance and to correctly interpret its output, this presumably means that some level of explainability of the AI's performance should be provided.

Who Decides?

Initially, conformity assessment bodies are to decide whether the operation of an AI is sufficiently transparent. The AIA also provides for a whole feedback mechanism in Title VII – Post-market monitoring, information exchange, market surveillance. Arguably, the efficiency of this mechanism will determine how the performance of AI is controlled within the EU, and therefore the required level of AI explainability.

Article 52 Not on Topic

The AIA also contains Article 52, with the graceful title "Transparency obligations in relation to certain artificial intelligence systems". However, this provision, contrary to its title (traditionally), is negligibly concerned with transparency and in fact not at all with explainability. It introduces a requirement to inform us, in certain situations, that we are not talking to a human being.


It seems that the current drafting of the AIA does not, in principle, require AI to be explainable, particularly in the sense of being able to follow its reasoning. Leaving aside wishful incantations along the lines of "concise and complete", the AIA rather sets out the framework for a system of so-called checks and balances, where efficiency rather than the specific ability to understand the AI's "thinking" will be measured.

And we even consider this approach reasonable.

Maciej Gawroński

This article is the result of a presentation on the explainability of AI in the use phase, given as part of a webinar of the Polish Data Protection Authority and the Polish Prime Minister's Office held in November 2022.

[1] “Guide to the GDPR” ed. M. Gawroński, p. 182, Kluwer Law International B.V., 2019, https://law-store.wolterskluwer.com/s/product/guide-to-the-gdpr/01t0f00000J4I3FAAV

[2] Concise and complete and correct and clear are obviously mutually exclusive. Referring to the popular joke about ‘cheap, fast, good’, the only thing missing is the information that Rabat is the capital of Morocco.

[3] It is a mystery why this obligation does not also extend to the requirements of Chapter 3, given that the "requirement to demonstrate compliance" sits at the end of the list of requirements in Article 16 – the first article of Chapter 3.

[4] In the event of an incident causing damage or liability, a lack of clarity in the operating instructions may be a relevant circumstance for both parties – the provider and the AI user.

[5] In other words, if a manufacturer can foresee what its AI may be used for, even if not in accordance with its offered purpose, the manufacturer should describe the consequences. In practice, this may lead manufacturers to limit the declared scope of the AI's use to what will be inapplicable in practice, in order to limit their liability.


GP Partners
Gawroński, Biernatowski Sp.K.

T: +48 22 243 49 53

E: info@gppartners.pl

ul. Emilii Plater 28

00-688 Warszawa



Illustration for Stanisław Lem's "Cyberiada" by Daniel Mróz © with the permission of Łucja Mróz-Raynoch