Introduction
Artificial Intelligence (AI) is way older than you might think. In fact, the term itself was coined in 1956!
Early History (1952–1956)
AI has a long history, dating back to myths, stories and rumours about master craftsmen endowing artificial beings with intelligence or consciousness. Classical philosophers attempted to describe human thought as the mechanical manipulation of symbols, which sowed the seeds of modern AI.
Charles Babbage (1791–1871), an English mathematician and computer pioneer, devised the Analytical Engine, a mechanical general-purpose computer. The Analytical Engine had an arithmetic logic unit, conditional branching and integrated memory, making it the first Turing-complete design for a general-purpose computer. This line of thinking culminated in the 1940s with the invention of the programmable digital computer, a machine based on the abstract essence of mathematical reasoning.
In the 1940s and 1950s, a group of scientists from various fields (mathematics, psychology, engineering, economics, and political science) began to consider the possibility of developing an artificial brain. In 1956, AI research was established as a separate academic discipline.
Early neural networks and cybernetics
Popular ideas of the late 1930s, 1940s and early 1950s inspired the first research into thinking machines. Neurology research of the period had shown that the brain is an electrical network of neurons that fire in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Finally, Alan Turing's theory of computation demonstrated that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to build an electronic brain. In 1943, Walter Pitts and Warren McCulloch analysed networks of idealised artificial neurons and showed how they could perform simple logical functions; they were the first to describe a neural network. Marvin Minsky, then a 24-year-old graduate student, was among those inspired by Pitts and McCulloch. Together with Dean Edmonds, he built the first neural network machine, the SNARC, in 1951. Minsky would go on to be one of the most influential leaders and innovators in AI for the next five decades.
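To make the idea concrete, the sketch below shows how a single threshold unit over binary inputs can compute simple logical functions such as AND, OR and NOT. It is a minimal illustration in Python, not Pitts and McCulloch's original formalism; the weights and thresholds are chosen purely for the example.

```python
# A minimal sketch of a McCulloch-Pitts style neuron: inputs are 0/1, and the
# unit 'fires' (outputs 1) only when the weighted sum of its inputs reaches a
# fixed threshold. Weights and thresholds below are illustrative choices.

def threshold_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # Inhibition modelled as a negative weight with a zero threshold.
    return threshold_unit([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Networks built from such units can, in principle, compute any Boolean function, which is what made the analogy with an 'electronic brain' so compelling at the time.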
Artificial Intelligence in Games
In 1951, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote a chess program for the University of Manchester's Ferranti Mark 1 machine. Arthur Samuel's checkers program, developed in the 1950s and early 1960s, eventually advanced to the point where it could challenge a competent amateur. Game AI has been used as a barometer of AI progress throughout the field's history.
Dartmouth Workshop 1956: the birth of AI
John McCarthy, Marvin Minsky, Claude Shannon and Nathan Rochester of IBM organised the Dartmouth Workshop in 1956. According to the workshop proposal, ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’. The 1956 Dartmouth Workshop is widely regarded as the birth of AI, as it established the field's name, mission, first successes and major players. McCarthy chose the term ‘Artificial Intelligence’ to avoid connotations with cybernetics and ties to the influential cyberneticist Norbert Wiener.
Following the Dartmouth Workshop (1956–1974)
Most researchers found the programs developed in the years following the Dartmouth Workshop ‘astonishing’: computers successfully solved algebra problems, proved geometric theorems and learned English. Few people at the time would have believed that machines could be capable of such ‘intelligent behaviour’. Researchers expressed extreme optimism, in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.
In the late 1950s and 1960s, there were numerous successful programs and new research directions. Many early AI programs used the same basic algorithm: they proceeded step by step towards a goal (such as winning a game or proving a theorem), as if navigating a maze, backtracking whenever they hit a dead end. The term ‘reasoning as search’ was coined to describe this approach. Its main problem was that, for many problems, the number of possible paths through the ‘maze’ was simply astronomical. Researchers reduced the search space by using heuristics, or ‘rules of thumb’, to eliminate paths that were unlikely to lead to a solution, as the sketch below illustrates.
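As a toy illustration of ‘reasoning as search’, the Python sketch below walks a small grid ‘maze’ by depth-first search, backtracking at dead ends, and uses a simple distance heuristic to decide which move to try first. The maze, start and goal are invented for the example and do not come from any historical program.

```python
# A minimal sketch of 'reasoning as search': depth-first search through a
# small grid maze, backtracking at dead ends. The heuristic (Manhattan
# distance to the goal) only orders which moves are tried first.

MAZE = [            # 0 = open cell, 1 = wall
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
START, GOAL = (0, 0), (3, 3)

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield (nr, nc)

def heuristic(cell):
    # 'Rule of thumb': how far the cell is from the goal.
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

def search(cell, visited):
    if cell == GOAL:
        return [cell]
    visited.add(cell)
    # Try the most promising moves first; skip cells already explored.
    for nxt in sorted(neighbours(cell), key=heuristic):
        if nxt not in visited:
            path = search(nxt, visited)
            if path:               # success: build the path back up
                return [cell] + path
    return None                    # dead end: backtrack

print(search(START, set()))
```

Without the heuristic ordering, the search still finds the goal eventually, but on realistic problems the number of paths explodes; the ‘rules of thumb’ are what kept early programs tractable.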
In 1972, the world's first full-scale intelligent humanoid robot, the WABOT-1, was unveiled. Its limb control system allowed it to walk on its lower limbs and to grip and carry objects with its hands, using touch sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears.
The First AI Winter (1974–1980)
In the 1970s, AI was heavily criticised and suffered financial setbacks. AI researchers had underestimated the difficulty of the problems they faced. Their constant optimism had raised expectations impossibly high, and when the promised results failed to materialise, funding for AI was slashed. At the same time, Marvin Minsky's withering critique of perceptrons (the precursors of the units that make up modern neural networks) effectively put an end to connectionism research for the next ten years. Despite the public's poor opinion of AI, new ideas in logic programming, commonsense reasoning and other domains were explored in the late 1970s.
The Boom (1980–1987)
Knowledge became the focus of mainstream AI research in the 1980s, when a type of AI program known as the expert system was adopted by organisations all over the world. An expert system is an AI-powered system that mimics the decision-making of a human expert: it uses reasoning and rules to draw on its knowledge base in response to user queries. Expert systems are designed to solve complicated problems by reasoning through bodies of knowledge, expressed mainly as ‘if-then’ rules rather than conventional procedural code, as sketched below. Expert systems are valued for being highly responsive, reliable, understandable and capable of strong performance.
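To show how ‘if-then’ reasoning works in practice, the short Python sketch below implements a toy forward-chaining engine: rules are applied to a set of facts until no new conclusions can be drawn. The facts and rules are invented for illustration and are far simpler than a real expert system's knowledge base.

```python
# A minimal sketch of the expert-system idea: a knowledge base of facts plus
# 'if-then' rules, and a forward-chaining loop that keeps applying rules
# until no new facts can be derived. Facts and rules are purely illustrative.

facts = {"fever", "cough"}

# Each rule: if all conditions are present, add the conclusion.
rules = [
    ({"fever", "cough"}, "suspected_flu"),
    ({"suspected_flu"}, "recommend_rest"),
    ({"rash"}, "suspected_allergy"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                 # repeat until the knowledge base stabilises
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'fever', 'cough', 'suspected_flu', 'recommend_rest'}
```

Real expert systems of the 1980s held thousands of such rules, elicited from human specialists, and added explanation facilities so users could ask why a conclusion had been reached.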
During the same time period, the Japanese Government invested heavily in AI. The return of connectionism in the work of John Hopfield and David Rumelhart in the early 1980s was another positive development.
The Second AI Winter (1987–1993)
The business community's interest in AI faded in the late 1980s, following the classic pattern of an economic bubble. The collapse was triggered by commercial vendors' failure to deliver a wide range of workable solutions, which led to the perception that the technology was simply not practical. As with earlier AI programs, expectations had run substantially ahead of what was actually possible. By the end of 1993, over 300 AI companies had shut down, gone bankrupt or been acquired, bringing the first commercial wave of AI to an end. Despite the criticism, however, the field continued to progress. Several researchers, notably the robotics pioneers Rodney Brooks and Hans Moravec, argued for an entirely different approach to AI.
The start of the 21st Century
The AI field, by then more than half a century old, finally achieved some of its most treasured goals between 1993 and 2011. AI began to be used successfully throughout the IT industry, albeit somewhat in the background. This was achieved mainly by focusing on specific, isolated problems and pursuing them to high standards of scientific accountability. Even so, AI's reputation, at least in the corporate sector, was less than rosy. Within the field, there was no agreement on the reasons for AI's failure to realise the dream of human-level intelligence that had captured the world's imagination in the 1960s. Together, these factors contributed to the fragmentation of AI into subfields focused on particular problems or methodologies, sometimes under new titles that obscured the tainted ancestry of ‘artificial intelligence’.
Achievements
At the start of the noughties, a number of achievements were registered. They rested on the laborious application of engineering talent and the massive growth in computer speed and capacity by the 1990s, rather than on any revolutionary new paradigm.
Chess-Playing System
In 1997, Deep Blue became the first computer chess-playing system to defeat a reigning world chess champion, Garry Kasparov. The supercomputer was a customised version of a framework produced by IBM and could process twice as many moves per second as it had during the first match (which Deep Blue had lost). The event was broadcast live over the internet and watched by approximately 74 million people.
Autonomous Vehicles
In 2005, a Stanford University robot won the DARPA Grand Challenge (a prize competition for autonomous vehicles) by driving autonomously for 131 miles along a desert trail it had not travelled before. Two years later, a Carnegie Mellon University team won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.
Winning Quiz Shows
In 2011, IBM's question-answering system Watson, named after the company's founder Thomas J. Watson, defeated two champions, Brad Rutter and Ken Jennings, during an exhibition match on the famous quiz show Jeopardy!
From 2011 Onwards
In the first decades of the 21st century, access to vast volumes of data (known as ‘big data’), cheaper and faster computers, and improved machine learning techniques were successfully applied to problems throughout the economy. The McKinsey Global Institute report ‘Big data: The next frontier for innovation, competition, and productivity’ estimated that ‘by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data’. According to the New York Times, by 2016 the market for AI-related devices, hardware and software had grown to more than $8 billion, and interest in AI had reached a ‘frenzy’. Applications of big data began to spread to other fields as well, such as training models in ecology and various applications in economics. Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress in image and video processing, text analysis and speech recognition.