Legal aspects of AI and robotics


Yesterday I attended an interesting conference on the topic of AI and legal liability. The lineup of speakers was impressive:

  • Vladimír Smejkal, CIIRC (Czech Institute of Informatics, Robotics and Cybernetics, CTU in Prague)
  • Vladimír Mařík, CIIRC
  • Tomáš Sokol, AK Sokol, Novák, Trojan, Doleček and partners
  • Michal Sýkora, Ph.D., AK Gřivna & Šmerda
  • JUDr. Petr Šustek, Ph.D., Department of Health Law, Faculty of Law, Charles University
  • JUDr. Milan Hodás, Ph.D., State Secretary of the Ministry of Justice of the Slovak Republic & Faculty of Law, Comenius University in Bratislava & Institute of State and Law, Slovak Academy of Sciences
  • Jiří Mulák, Ph.D., Department of Criminal Law, Faculty of Law, Charles University
  • And others

And what did I learn? Mainly that AI does not have legal personality. It is difficult to litigate against it when we cannot identify who is behind the screen. This applies to systems that only process data (e.g. ChatGPT, DeepSeek, M365 Copilot…). Where AI can interact with the outside world (autonomous vehicles, various robots, AI image analysis, etc.), the situation is more complicated.

From an ethical point of view, AI should ensure:

  • The principle of non-harm
  • The principle of benefit
  • The principle of respect for human autonomy
  • The principle of justice
  • The principle of explainability

Since machines cannot have artificial consciousness and cannot handle social interactions, they cannot respect morality or ethics. It must therefore suffice that they function in accordance with the legal order.

The manufacturer of an AI product is liable for damage caused by a defective product even if the injured party does not prove negligence or intent. Manufacturers can thus be held liable for damage caused by defective AI technologies without a specific culprit having to be found.

By definition, a defect is a property of the purchased goods that does not serve the usual purpose or the purpose evident from the contract. A defect is understood not only as a technical or design defect, but also as an incorrect functioning of the algorithm, inadequate training data or insufficient instructions for using the AI.

The traditional construction of strict product liability assumes that a product with a certain defect can be identified at the time of its launch on the market. AI systems can significantly change the "manufacturer-user-damage" relationship. In AI, a defect can manifest itself not only as a classic defect originating in the design, programming or production of the system, but also as a defect resulting from incorrect use of the system, insufficient software updates, inappropriate interaction with the user or the environment, or the system's self-learning.

In such a case, the question arises whether it is an original defect or whether the defect occurred because of a subsequent change in the system’s behaviour after it was handed over to the user. For systems with higher-generation AI, finding a causal chain between causes and effects will be a major problem.

If I imagine it in practice, Metalco presented a new hardness tester where the hardness value is evaluated by AI. My first reaction after watching the presentation was: great, that’s perfect, finally all doubts are gone. But after yesterday’s legal interpretation, I’m not so sure about that anymore.

Fig. 1 – Training data for hardness evaluation using AI – presentation Metalco, REVOLUTIONARY IMAGE EVALUATION IN HARDNESS TESTING WITH AI TECHNOLOGY, Thomas Strohmer, Prague 2025

The evaluated hardness is part of the heat-treatment deliverable and serves as an agreed, contracted quality parameter. By measuring the hardness, the heat treater confirms whether the customer's specification was met, with or without deviation. For this measurement, however, he uses an AI-based hardness tester with image analysis. What follows from that?

In all cases, it is necessary to prove that the measurement was carried out in accordance with a standard, e.g. for the Vickers method according to ISO 6507 parts 1–3, or ASTM E384.
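For context, once the indentation diagonals have been read off the image (which is exactly the step the AI takes over), the Vickers hardness number itself is a deterministic formula from ISO 6507-1. A minimal sketch with illustrative numbers:

```python
def vickers_hardness(force_kgf: float, d1_mm: float, d2_mm: float) -> float:
    """Vickers hardness per ISO 6507-1: HV = 1.8544 * F / d^2,
    where F is the test force in kgf and d is the mean of the two
    indentation diagonals in mm."""
    d = (d1_mm + d2_mm) / 2.0
    return 1.8544 * force_kgf / (d * d)

# An HV5 test (5 kgf load) with diagonals around 0.111 mm
# gives a hardness in the 700-800 HV5 range discussed below.
print(round(vickers_hardness(5.0, 0.111, 0.111), 1))
```

The formula leaves no room for dispute; any disagreement between the AI and a human operator can only come from how the diagonals were read from the image, which is why traceability of that step matters.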

The following cases may occur.

  1. The AI is a closed system supplied by the hardness tester manufacturer, and the user cannot change it or add training data.
  2. The AI is an open system, and the user can add their own training data.

If, in case 1, the system evaluates the hardness using AI, the hardness is determined incorrectly, and the result is subsequently disputed by the heat treater's customer, then responsibility lies with the manufacturer, i.e. the hardness tester supplier and the author of the source algorithm, including the training data.

The heat treater's customer (the manufacturer of parts with incorrectly measured hardness) does not have to prove anything: he received defective goods with incorrect hardness and will demand compensation from the heat treater both for the defect in the goods and for any damage the defect may have caused. From the perspective of the defect, it is entirely irrelevant which device the heat treater used for the measurement.

However, the heat treater may try to pass the responsibility further, i.e. on to the manufacturer of the hardness tester, or at least ask the device supplier to share in the damage.

He must therefore prove a causal link between the AI-assisted measurement and the incorrectly determined result. This will be difficult. Much will also depend on how communication with the AI in the device is designed, on the training data set, and so on.

Here I would expect that if the user involves AI in the hardness assessment, the AI will offer him full traceability of its decision, for example:

  • The result of my measurement is 700-800 HV5.
  • I used model photos x, y, z for the assessment.
  • If you press OK, you confirm that you agree with my selection and accept responsibility for any measurement error.
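The hand-off described above could be modelled as a simple record that stores the AI's proposed result and the evidence it used, and flips a confirmation flag only when the operator presses OK. The field names and file names here are purely illustrative, not taken from any real device:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AIMeasurementRecord:
    """Hypothetical traceability record an AI hardness tester could emit."""
    result_hv5: Tuple[int, int]            # measured range, e.g. (700, 800) HV5
    reference_images: List[str]            # model photos used for the assessment
    operator_confirmed: bool = False       # True only after the operator presses OK

    def confirm(self) -> "AIMeasurementRecord":
        # Pressing OK records that the operator agrees with the AI's
        # selection and accepts responsibility for the result.
        self.operator_confirmed = True
        return self

record = AIMeasurementRecord((700, 800), ["photo_x.png", "photo_y.png", "photo_z.png"])
record.confirm()
print(record.operator_confirmed)  # True
```

The point of such a record is twofold: it enables the retrospective check described below, and it documents the moment responsibility passed from the device manufacturer to the user.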

If this were the case, we would be able to check retrospectively why the result measured by the AI deviates from the result measured by an experienced, trained employee, and at the same time the manufacturer of the AI hardness tester would have transferred the responsibility to the user.

If this is not the case, and the values from the AI are accepted by the device automatically, then the device supplier bears full responsibility for the measurement result, i.e. for the defect, and at the same time for the damage. Since this may involve, for example, the recall of an entire production batch for the automotive industry, the damages can reach unrealistic dimensions.

In the event of a lawsuit, however, it will be very difficult to prove a causal link between the incorrect hardness and the AI design. The device supplier can always argue that the difference is due to incorrect use of the device, incorrect preparation of the surface for measurement, or incorrect interpretation. His likely objection that the AI is only an assistant will probably succeed if this is also stated in the hardness tester manual and the AI description.

As a result, the heat treater will always bear the consequences of incorrect measurements; with AI, he will merely have the inner feeling that he has done everything to reduce the operator's influence on the measured values.

However, if the hardness tester's AI provides the data mentioned above before the result is approved, this data can later be used to improve the AI and expand the training data set, though always only through the intervention of the device manufacturer.

In case 2, i.e. with an open system, the heat treater's position will be even worse. Since he has fed his own data and knowledge into the AI system and its training data, there will be practically no possibility of blaming the hardness tester supplier for incorrect measurements.

Tomáš Sokol's lecture added the following:

Objective (strict) liability is, in law, liability for damage that has occurred regardless of the fault of the person who caused it. It makes no difference whether the damage arises from an operation or a product.

Therefore, other sections of the Civil Code will need to be considered in connection with the aforementioned hardness tester.

§ 2939

(1) Damage caused by a defect in a movable property intended to be placed on the market as a product for the purpose of sale, rental or other use shall be compensated by the person who manufactured, extracted, cultivated or otherwise acquired the product or its component, and jointly and severally with him, the person who marked the product or its component with his name, trademark or in some other way.

(2) Jointly and severally with the persons referred to in paragraph 1, the person who imported the product for the purpose of placing it on the market within the framework of his business shall also compensate for the damage.

§ 2941

A product is defective within the meaning of Section 2939 if it is not as safe as can reasonably be expected of it, taking into account all the circumstances, in particular the manner in which the product is placed on the market or offered, the intended purpose for which the product is intended to serve, as well as the time at which the product was placed on the market.

§ 2942

(1) The person who caused the damage shall be released from the obligation to compensate for the damage caused by a defect in the product only if he proves that the damage was caused by the injured party or by the person for whose act the injured party is liable.

(2) This person shall also be released from the obligation to compensate for the damage if he proves that:

a) he did not place the product on the market,

b) it can reasonably be assumed, considering all the circumstances, that the defect did not exist at the time the product was placed on the market or that it occurred later, or

d) the state of scientific and technical knowledge at the time he placed the product on the market did not allow the defect to be detected.

The new European Product Liability Directive extends the concept of "product" to software and AI.

What does this mean? Buying anything with AI will also require a degree of legal foresight. Under strict liability, we must be aware that AI installed in a product can cause defects and damage. The conditions under which the manufacturer can be released from this liability are set out in § 2942.

Under subjective liability, criminal law does not recognise the liability of either AI or robots. Without demonstrable human fault, even serious harm caused by AI cannot be punished. So if a robot throws a hammer at us and injures us, neither the AI nor the robot will be judged, but we will, who bought, installed and used the robot with these properties. And if an error is found in the source code, algorithm or training data, this liability can be transferred to the manufacturer of the AI or robot.

 

Jiří Stanislav

May 30, 2025

 

 

 

 

 

 
