Introduction
Do you think people have a right to meaningful work, such as driving a taxi, even if driverless cars are safer than human-driven ones? What about the doctor diagnosing cancer: should he or she have that right as well, even if machine learning can do a better job?
Learning objectives for this chapter
By the end of the chapter, participants will be able to:
- reflect on requirements for AI systems to be ethical
- evaluate their own values
- make choices and formulate arguments relevant to these requirements.
Privacy and data governance
Privacy is closely linked to the principle of preventing harm, and it is a fundamental right that AI systems can affect. The quality and integrity of the data, its relevance to the domain in which the AI system will be used, the access protocols, and how the data will be processed are all important to ensuring privacy.
AI systems must protect people's privacy and data throughout the system's lifecycle; this is referred to as data governance. When a person interacts with an AI system, information is created about them over time. This includes information they provide when they first start using the system, as well as information the AI system generates about them as they interact with it. Based on digital records of behaviour, AI systems may be able to infer not only what an individual likes or enjoys, but also their sexual orientation, age, gender, and religious or political views. Because people need to be able to trust the data-gathering process, they must be assured that the information collected about them will not be used to discriminate against them in any way.
https://www.youtube.com/watch?embed=no&v=1bhpWEMZ6XA
The quality of the data used in AI systems is fundamental to how well they work. When data is gathered, it may contain socially created biases, inaccuracies, errors, and mistakes made by people. These issues require attention before any data set can be used. In addition, the data must be checked to make sure it is correct. Feeding malicious data, or data that has not been cleaned up, into an AI system may make it behave in unintended ways, especially if the system is self-learning.
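The kind of check described above can be illustrated with a minimal sketch. The record fields ("age", "label") and the plausibility rule are hypothetical examples, not a prescribed standard:

```python
def clean_records(records):
    """Drop records with missing or implausible values before they are
    used to train or feed an AI system."""
    cleaned = []
    for rec in records:
        age = rec.get("age")
        label = rec.get("label")
        if age is None or label is None:
            continue  # missing value: exclude rather than guess
        if not (0 <= age <= 120):
            continue  # implausible value: likely an entry error
        cleaned.append(rec)
    return cleaned

raw = [
    {"age": 34, "label": "approved"},
    {"age": None, "label": "denied"},   # missing value
    {"age": 999, "label": "approved"},  # implausible age
    {"age": 51, "label": "denied"},
]
print(len(clean_records(raw)))  # 2 records survive the checks
```

Real pipelines use far richer validation, but the principle is the same: questionable records are flagged or excluded, and every exclusion rule is documented.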
Each step, from planning through training and testing to deployment, must be thoroughly tested and documented. This should also apply to AI systems that were not built in-house but were bought from an external party.
Whether or not someone is a user of the system, there should be rules about how people can access their own data. These protocols should clearly state who can see the data and when; only people who genuinely need to see an individual's data should be able to do so. This requirement is closely linked to the idea of transparency, which covers the data, the system, and the business models that make up an AI system.
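An access protocol of this kind can be sketched as a simple rule. The role names below are purely illustrative assumptions; real systems define roles in policy documents and enforce them in infrastructure:

```python
# Hypothetical roles: the data subject themselves, and an authorised auditor.
ALLOWED_ROLES = {"data_subject", "auditor"}

def can_access(requester_role, requester_id, record_owner_id):
    """Grant access only to the data subject (for their own records)
    or to a role that genuinely needs to see the data."""
    if requester_role == "data_subject":
        return requester_id == record_owner_id  # only their own data
    return requester_role in ALLOWED_ROLES

print(can_access("data_subject", "alice", "alice"))  # True
print(can_access("data_subject", "bob", "alice"))    # False
print(can_access("marketing", "eve", "alice"))       # False
```

The point of writing the rule down explicitly is that it can then be reviewed, audited, and challenged, which is exactly what the protocol requirement asks for.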
Transparency
Transparency, in general terms, is the quality of being easily seen through. In terms of AI, it includes making the data, the system, and the business models associated with the system clear to all.
The AI system's decisions should be documented as thoroughly as possible, including the data sets, processes, and algorithms that led to them. This makes it easier for people to see how the AI system made its decisions and reached its conclusions, thus making it transparent. When an AI decision is wrong, such documentation also helps to determine why, so the same mistake can be avoided in future. Similarly, traceability makes the system easier to audit and explain.
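Documenting decisions for traceability often takes the form of an audit log. The sketch below is a minimal illustration; the field names and the model-version label are assumptions, not a standard schema:

```python
import datetime

def log_decision(log, model_version, inputs, output):
    """Append one auditable record of an AI decision: when it was made,
    by which model version, on which inputs, with which result."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

audit_log = []
log_decision(audit_log, "v1.2", {"age": 34}, "approved")
print(audit_log[0]["output"])  # approved
```

With such a log, an auditor can trace any individual decision back to the model version and inputs that produced it, which is the core of the traceability requirement.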
Explainability is the ability to explain both the technical processes of an AI system and the human decisions that go along with them (e.g. the application areas of a system). AI systems must be able to make decisions that humans can explain and trace. There may also be trade-offs between making a system more explainable (which may reduce its accuracy) and making it more accurate (which may reduce its explainability).
When an AI system has a significant impact on people's lives, they are entitled to an adequate explanation of how the system made its decisions. Such an explanation should be timely and tailored to the individual's level of knowledge (e.g. layperson, user, or regulator).
In addition, explanations of how an AI system affects and shapes the way an organization makes decisions, how the system was built, and why it was used should be available, hence ensuring business model transparency.
https://www.youtube.com/watch?embed=no&v=3wLqsRLvV-c&t=6
AI systems should not lead users to believe they are interacting with a human. People have the right to know when they are interacting with an AI system, which means AI systems must be recognizable as such. In addition, people should be able to decline such interaction in favour of human interaction when this is necessary to protect their basic rights.
In addition, the AI system's abilities and limitations should be communicated to AI practitioners and end users in a way appropriate to the use case. This could include informing people of how accurate the AI system is, as well as what it is unable to do.