Introduction
Do you think people have a right to a meaningful job, such as driving a taxi, even if driverless cars are safer than human-driven ones? What about the doctor diagnosing cancer: should they have that right as well, even if machine learning can do a better job?
Learning objectives for the chapter
By the end of the chapter, participants will be able to:
- reflect on requirements for AI systems to be ethical
- evaluate their own values
- make choices and formulate arguments about these issues.
Technical robustness is an important part of creating an AI system that is trustworthy, and it is closely linked to the principle of preventing harm. AI systems must be built with a proactive approach to risks, and in a way that ensures that they do what they are supposed to do while minimizing unintentional and unexpected harm and preventing unacceptable harm. Changes in their operating environment must also be taken into account, as well as the presence of other agents (human and artificial) that may interact with the system in an unfriendly way. In addition, the physical and mental well-being of humans should be considered.
Back in the 1940s, the science fiction author Isaac Asimov formulated his Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Resilience to attack and security is an important consideration. Like all software systems, AI systems should be protected against vulnerabilities that can be exploited by hackers. Attacks may target the data (data poisoning), the model (model leakage), or the underlying infrastructure, which includes both software and hardware.
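To make data poisoning concrete, the sketch below (a minimal illustration with an assumed toy dataset and model, not an example from the chapter) flips a fraction of the training labels and shows how the accuracy of a simple classifier degrades as a result:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset and model (illustrative assumptions, not from the chapter).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_rate in [0.0, 0.1, 0.3]:
    y_poisoned = y_train.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    # An attacker who can tamper with the training data flips some labels.
    flip_idx = np.random.default_rng(0).choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poison rate {poison_rate:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```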
If an AI system is attacked, for example through adversarial attacks, its data and behaviour can be altered. This can lead the system to make different decisions, or shut it down altogether.
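One well-known adversarial technique is the fast gradient sign method (FGSM), sketched below on an assumed toy logistic regression model: a small, targeted perturbation of the input pushes the model towards a different decision.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy model (illustrative assumption, not from the chapter).
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                 # an ordinary input
w = model.coef_[0]       # gradient of the decision function w.r.t. the input
eps = 0.5                # perturbation budget per feature

# Step against the direction that supports the current prediction (FGSM).
direction = -1 if model.predict([x])[0] == 1 else 1
x_adv = x + eps * direction * np.sign(w)

print("original prediction:   ", model.predict([x])[0])
print("adversarial prediction:", model.predict([x_adv])[0])
print("max per-feature change:", np.abs(x_adv - x).max())
```

With a large enough perturbation budget, the prediction flips even though each feature has barely changed, which is what makes such attacks hard to spot.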
Systems and data can also be damaged or corrupted by malicious actors, or by exposing the hardware to hostile conditions. Insufficient security procedures can result in erroneous decisions or even physical harm. For AI systems to be considered safe, human decision-makers need to consider possible unintended uses of the AI system (e.g. dual-use applications) and how the system could be abused by ill-intentioned individuals. Steps should be taken to prevent and mitigate these risks.
https://www.youtube.com/watch?embed=no&v=BNwWRwJ7XSA&t=15
A fallback plan should be available in case something goes wrong with an AI system. Possible actions include switching the system from a statistical procedure to a rule-based one, or asking a human operator to intervene before the system proceeds further with its work. It is crucial that the system does what it is designed and intended to do without harming living beings or the world around them. This includes ensuring that there are no unintended consequences or mistakes.
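A minimal sketch of such a fallback mechanism, assuming a model that exposes predict_proba-style confidence scores (the threshold and the fallback rule are illustrative assumptions):

```python
# Confidence threshold and fallback rule are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.90

def rule_based_decision(features: dict) -> str:
    """Deterministic, auditable fallback rule (hypothetical example)."""
    if features.get("risk_score", 1.0) < 0.2:
        return "approve"
    return "needs_human_review"   # escalate to a human operator

def decide(model, x, features: dict) -> str:
    """Use the statistical model only when it is confident enough."""
    proba = model.predict_proba([x])[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        return "approve" if proba.argmax() == 1 else "reject"
    # The model is unsure: switch to the rule-based procedure.
    return rule_based_decision(features)
```

In practice, the threshold would be chosen by validating how often the model's confident predictions are actually correct.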
In addition, processes should be created to help people understand and assess the risks of using AI systems across a wide range of applications. The level of safety measures required depends both on the level of risk an AI system poses and on how well it can do its job. When it is foreseeable that the development process or the system itself poses multiple risks, it is important to develop and test safety measures early on.
Accuracy refers to an AI system's ability to make the right decisions, for example, by correctly classifying information into the relevant groups, or by making appropriate predictions, recommendations, or assessments based on data or models. An explicit and well-formed development and evaluation process can help prevent, mitigate, and correct these risks. When a system cannot avoid making some incorrect predictions, it is important that it can show how likely these mistakes are to happen. In situations where the AI system has an impact on people's lives, a high level of accuracy is especially important.
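One way for a system to show how likely its mistakes are is to report calibrated probabilities alongside its accuracy. A minimal sketch, assuming a scikit-learn classifier on an illustrative toy dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset and model (illustrative assumptions).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = model.predict_proba(X_test)
pred = proba.argmax(axis=1)
confidence = proba.max(axis=1)

print(f"accuracy: {(pred == y_test).mean():.3f}")
# If the probabilities are well calibrated, the model's own estimate of
# its error rate should roughly match the error rate actually observed.
print(f"mean predicted error probability: {(1 - confidence).mean():.3f}")
print(f"observed error rate: {(pred != y_test).mean():.3f}")
```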
In order for AI systems to work well, they need to be able to produce the same results over and over again. A reliable AI system should be able to handle many different types of inputs and situations. Reproducibility refers to whether an AI experiment behaves the same way when it is repeated under the same conditions. Replication files can make it easier to test and reproduce behaviour when the system is in use.
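A minimal sketch of what a replication file typically pins down, assuming a Python/NumPy workflow: fixing every source of randomness and recording software versions so that the same experiment yields the same result.

```python
import random
import sys
import numpy as np

# Fix every source of randomness (seed and "experiment" are illustrative).
SEED = 42
random.seed(SEED)                   # Python's built-in RNG
rng = np.random.default_rng(SEED)   # NumPy RNG used by the experiment

data = rng.normal(size=1000)        # the "experiment": draw and summarize data
result = data.mean()

# Record what a replication file would need to reproduce this run.
print(f"python {sys.version.split()[0]}, numpy {np.__version__}")
print(f"seed={SEED}, result={result:.6f}")  # same seed -> same result
```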