Introduction
The origins of Artificial Intelligence (AI) can be traced to, and are underpinned by, several disciplines, such as Philosophy, Mathematics, Computation, Cognitive Science and Neuroscience. There is also considerable overlap between these disciplines, as in the case of Philosophy and Logic, or Mathematics and Computation.
In turn, we can better understand the context and origins of AI by looking at the historical underpinnings of these individual disciplines and how these have played a role in shaping what we today consider as AI.
Source: https://www.youtube.com/watch?v=-qnMLjxJZm0
Philosophy
Around 400 BC, Socrates searched for what we might now call an algorithm: a procedure for distinguishing piety (what is pleasing in the eyes of the gods) from impiety.
In 350 BC, Aristotle formulated several styles of deductive reasoning, known as syllogisms, which could mechanically generate conclusions from initial premises; he believed that the study of thought itself was the foundation of all knowledge. Although it took another two thousand years for the formal axiomatisation of reasoning to fully blossom in the works of Gottlob Frege, Bertrand Russell, Kurt Gödel, Alan Turing, Alfred Tarski and others, its roots can be traced back to Aristotle.
René Descartes (1596–1650) is closely associated with the concept of mind-body dualism, the view that mental events are non-physical and that the mind and body are distinct and separable.
Gottfried Wilhelm Leibniz (1646–1716) was one of the first to take the materialist position, arguing that the mind is governed by ordinary physical processes, which implies that machines could, in principle, carry out mental processes.
Logic and Mathematics
- In 1777, Earl Stanhope built his 'logic demonstrator', a device that could solve syllogisms (arguments in which a conclusion follows logically from two premises) as well as basic probability questions.
- George Boole (1815 – 1864) published his formal language for making logical inferences, called Boolean Algebra.
- Gottlob Frege (1848 – 1925) developed a logic which is essentially the first-order logic that is still used to represent knowledge today.
- In 1931, Kurt Gödel demonstrated that logic has limitations. His Incompleteness Theorem showed that, in any formal logic powerful enough to describe the properties of the natural numbers, there are true statements whose truth cannot be established by any algorithm.
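Boole's algebra makes the validity of a syllogism something a machine can check mechanically: an argument is valid if its conclusion holds under every truth assignment that satisfies its premises. The following Python sketch (an illustration added here, not part of the original text) enumerates all assignments to verify one valid and one invalid inference:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def valid(argument):
    """An argument over two variables is valid if it holds for every truth assignment."""
    return all(argument(*vals) for vals in product([True, False], repeat=2))

# "All men are mortal (m -> t); Socrates is a man (m); therefore Socrates is mortal (t)."
def barbara(m, t):
    premises = implies(m, t) and m
    return implies(premises, t)

# The fallacy of affirming the consequent: from m -> t and t, infer m.
def fallacy(m, t):
    premises = implies(m, t) and t
    return implies(premises, m)

print(valid(barbara))  # True: the inference is valid
print(valid(fallacy))  # False: fails when m is false and t is true
```

Exhaustive enumeration is exactly what Boolean Algebra licenses: logical truth reduced to calculation over a finite set of cases.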
Computation
In 1869, the Logic Machine constructed by William Stanley Jevons could handle both Boolean Algebra and Venn Diagrams, and could solve logical problems faster than a human.
John von Neumann (1903–1957) and Alan Turing (1912–1954) were pioneers in the field of AI. Their work defined the architecture of the modern computer and established it as a general-purpose tool for carrying out the instructions given to it. The von Neumann architecture allows computation to be described independently of the computer's specific implementation. Turing first raised the question of machine intelligence when he proposed a 'game of imitation' (now widely known as the Turing Test), in which a person converses via text and must determine whether the other party is a human or a machine. If the person cannot reliably distinguish the machine from the human, the machine is considered to have passed the test.
Cognitive Science
Cognitive Science is the study of cognition, learning, and mental structures, integrating elements of Psychology, Linguistics, Philosophy and Computation. The ability to learn is an essential aspect of human intelligence. Allen Newell and Herbert A. Simon presented the classic symbolic approach to AI in 1976, defining intelligent behaviour as the product of manipulating symbolic representations. On this view, a mind develops internal symbolic representations of its 'reality' as it learns to comprehend it, and Symbolic AI imitates this approach by expressing human knowledge and rules of behaviour explicitly in computer code.
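As a rough illustration of the symbolic approach (the facts and rules below are hypothetical examples, not from the original text), knowledge can be encoded as explicit symbols and if-then rules, with new conclusions derived by repeatedly applying the rules (forward chaining):

```python
# Knowledge as explicit symbols and if-then rules: each rule maps a set of
# premise symbols to a conclusion symbol. These facts are illustrative only.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new symbolic facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "can_fly"}, rules)
print(sorted(derived))  # includes the derived symbols 'is_bird' and 'can_migrate'
```

Every intermediate conclusion here is a human-readable symbol, which is the hallmark of the symbolic approach and its main contrast with the sub-symbolic methods discussed next.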
Neuroscience
Our brains consist of tens of billions of neurons, each connected to hundreds or thousands of others. An individual neuron is a simple information-processing device, yet large networks of neurons are extremely powerful computational systems that can learn from experience. Connectionism, the approach behind neural networks, aims to create artificial systems based on networks of simplified artificial neurons, with the goal of building powerful AI systems as well as models of human abilities. Much of conscious human reasoning appears to operate at a symbolic level, whereas neural networks operate at a sub-symbolic level. Artificial neural networks are therefore good models of many human abilities and perform well at many simple tasks. However, there are a number of tasks where they fall short, and other approaches appear more promising in those areas.
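To make the sub-symbolic idea concrete, here is a minimal sketch of a single artificial neuron trained with the classic perceptron learning rule to compute logical AND; the weights, learning rate and epoch count are illustrative choices, not taken from the text:

```python
# A single artificial neuron: weighted sum of inputs passed through a
# step activation. The perceptron rule nudges weights toward each target.
def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias on (inputs, target) pairs with the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a single neuron can learn it.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
for (x1, x2), target in and_gate:
    print((x1, x2), step(w[0] * x1 + w[1] * x2 + b))
```

Notice that the learned knowledge lives entirely in numeric weights rather than readable symbols, which is precisely the sub-symbolic character the section describes.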