On Tuesday I attended an interesting lecture in Prague on the topic of AI and criminal liability.
As AI becomes more and more prevalent around us, few people look at this issue from the perspective of liability for the damage that AI, robots, or autonomous cars can cause us. And since I am still a forensic expert, I took it as an opportunity for self-education.
But what are we actually talking about when we say AI? It is strange: the name itself hides something mysterious and unknown, yet today it is presented as our friend. The term AI comes at us from all sides, from morning to night; there is basically nowhere to escape it. A refrigerator with AI, a chainsaw with AI, autonomous driving with AI, maybe even flushing the toilet has it today. It is all the fashion. But what it really represents, few people know or stop to think about.
According to Professor Smejkal, AI is divided into two basic categories:
But sometimes AI can also influence the world as part of other systems, such as SCADA (Supervisory Control and Data Acquisition), DCS (Distributed Control System), BMS (Building Management System), or IoT (Internet of Things) control systems. And here we are somewhere else again, because we discover that if our furnaces have SCADA, they are apparently quite intelligent.
But that is not true. None of the above items actually contains AI; each is a program that someone had to write, one that ensures the binary states of our device do what we need and are handled so that, as far as possible, no conflicts occur.
As Professor Smejkal says, systems and computers still operate according to the von Neumann architecture from 1945. The principle is that the system is controlled by a program written by a programmer and, regardless of complexity, it works on the if-then-else principle. Neither alternative architectures nor quantum computers have changed this.
The existence of a program controlling the computer's work step by step has been preserved. All operating functions and states of the machine are so firmly defined that, although the program can behave very variably, it can never reach a state other than one predicted in advance.
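The deterministic principle described above can be sketched in a few lines of Python. The furnace controller below is my own illustrative example, not something from the lecture; the setpoint and tolerance values are invented:

```python
# Minimal sketch of the if-then-else principle: every branch was
# written by a programmer, so the set of reachable states is fixed.

def furnace_controller(temperature_c: float, setpoint_c: float = 850.0) -> str:
    """Hypothetical furnace control rule with a +/- 10 degree band."""
    if temperature_c < setpoint_c - 10:
        return "heat_on"
    elif temperature_c > setpoint_c + 10:
        return "heat_off"
    else:
        return "hold"

# Whatever value we feed in, the output is always one of three
# predefined states - the program can never "invent" a fourth.
print(furnace_controller(700.0))  # heat_on
print(furnace_controller(900.0))  # heat_off
```

However complex a real SCADA program is, the same point holds: every state it can reach was enumerated in advance by its author.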
AI has its own development, as does Industry 4.0, which can be seen from the following categorization describing systems without and with AI.
What we know from our surroundings today are, at most, second-generation AI systems. What does this mean? They learn nothing and are programmed to do everything they are designed to do as safely as possible.
For example, the autopilot in my car cannot learn to drive better. If I think I drive better, I turn it off because it will never do what I can do. And there is no God who can change its behaviour because it is a closed system, inaccessible to the user.
The pinnacle of AI today can only be the second category, where, for example, a washing machine considers the amount of laundry I put in it and, based on this one parameter, adjusts the program so that the result is washed laundry. Nothing more. Everything else is just our imagination, which gives the term AI the significance with which it is used today.
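Such a "second generation" system can be sketched as a fixed mapping from one measured parameter to a program. The function and the numbers below are purely illustrative assumptions, not a real washing-machine algorithm:

```python
# Sketch of a second-generation system: the machine "adapts" to the
# load, but the mapping itself was fixed by the programmer and never
# changes with experience.

def select_program(load_kg: float) -> dict:
    """Pick water volume and cycle length from the measured load."""
    if load_kg <= 2.0:
        return {"water_l": 30, "duration_min": 45}
    elif load_kg <= 5.0:
        return {"water_l": 45, "duration_min": 60}
    else:
        return {"water_l": 60, "duration_min": 75}

print(select_program(1.5))  # {'water_l': 30, 'duration_min': 45}
```

However many parameters such a system reads, every response remains one the programmer wrote down in advance.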
According to Professor Mařík, AI today is just a collection of Turing-style mathematical models and algorithms that handle input and output deterministically. If a program referred to as AI must follow the programmer's commands without the possibility of modifying itself, then we can develop it to the level of an expert system, but no further. So we are still talking about the second generation of AI.
Higher types of AI can learn, i.e. automatically modify their behaviour based on data, whether provided by a human or collected from the AI's environment. We are talking about machine learning, which is autonomous in the sense that it occurs without direct commands or rules set by a programmer.
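The contrast with the deterministic examples above can be shown with a minimal, hypothetical learner whose decision parameter is set by the data it sees rather than by the programmer:

```python
# Sketch of the machine-learning contrast: the threshold is not written
# into the code - it emerges from the data stream, so the final
# behaviour cannot be fully enumerated in advance.

class ThresholdLearner:
    """Learns a decision threshold as the running mean of observations."""

    def __init__(self) -> None:
        self.threshold = 0.0
        self.n = 0

    def observe(self, value: float) -> None:
        # Incremental running-mean update.
        self.n += 1
        self.threshold += (value - self.threshold) / self.n

    def decide(self, value: float) -> bool:
        return value > self.threshold

learner = ThresholdLearner()
for v in [1.0, 3.0, 5.0]:
    learner.observe(v)
print(learner.threshold)  # 3.0 - learned from data, not hard-coded
```

Even in this toy case, two copies of the same program fed different data will behave differently, which is exactly where the question of responsibility for unforeseen states begins.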
However, this leads to states of the control system that we cannot define in advance, and it will be difficult to deduce who is responsible for achieving them, if they have a negative impact on the environment or on the robot itself.
The moment elements that are non-deterministic, not influenced by humans, or even impossible to influence enter the system, a negative course of the creation and operation of an AI or robot system becomes a fundamental legal problem of criminal liability for actions. [1]
What follows from this? If someone decides to put a third-generation, self-learning AI on the market, they must guarantee that a robot with this AI will not turn into a murderer.
Hence this topic, criminal liability when working with AI.
I’ll end this here for today. It seems to me to be a very relevant topic for the future of heat treatment shops, because there are fewer and fewer of us, and robotization and automation are the only ways forward. But we must make sure in advance that such a robotic loader with AI does not run us over in the plant, and if it does, we have to be able to deal with the legal consequences.
But more about that next time.
[1] Smejkal, V., Sokol, T.: Trestněprávní aspekty robotiky. Právní rozhledy, XXVI, 2018, č. 15–16, s. 530–540.
(1) Note – the text in italics is a citation from a lecture by Professor Smejkal
Jiří Stanislav
February 6, 2025