Artificial Intelligence
This page contains general/introductory information about AI.
See also:
- Intelligent Agents
- Basic Problem Solving
- Knowledge Representation
- Agent Learning
- Constraint Satisfaction
The term "Artificial Intelligence" (AI) was coined in 1956 to describe "machines capable of performing tasks which have characteristics resembling human intelligence".
As of 2021, AI describes the theory and development of systems that "can perform tasks that normally require human intelligence".
A few examples are:
- Visual Perception
- Speech Recognition
- Decision Making
- Language Translation
Terminology
- Agent - An entity that can act. Generally does so to achieve the "best possible" outcome.
- Intelligent Agent - An agent that takes input from its environment and uses that input to decide how to act.
- Expert Systems - Systems that generally require some kind of "domain knowledge". They must be explicitly told what to do before they can function.
- Machine Learning - Systems that learn to solve problems by themselves, over time. They generally require a period of training before they can start.
- Neural Networks - Systems that attempt to mimic the human brain by having "nodes". These nodes are mathematically weighted, and the weights control how strongly each node triggers on a given input (a minimal sketch of a single node follows this list).
- Deep Learning - A subtype of neural networks that specifically have many layers of nodes.
- Weak AI - AI that does only the specific, repetitive tasks it was programmed to do.
- Strong AI - AI that has developed the ability to learn tasks outside of what it was programmed to do.
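To make the "weighted nodes" idea above concrete, here is a minimal sketch of a single node in Python. The function name, the sigmoid activation, and the example numbers are illustrative assumptions, not taken from any particular library.

```python
import math

def neuron_output(inputs, weights, bias):
    """A single 'node': weight each input, sum, then squash with a sigmoid.

    The weights control how strongly each input contributes; the sigmoid
    maps the weighted sum to a value between 0 and 1 (how strongly the
    node 'triggers').
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Example: the same inputs produce different outputs under different weights.
inputs = [0.5, 0.8]
print(neuron_output(inputs, weights=[0.1, 0.2], bias=0.0))   # weakly triggered
print(neuron_output(inputs, weights=[2.0, 3.0], bias=-1.0))  # strongly triggered
```

A deep learning system, as described above, is essentially many layers of such nodes feeding into one another.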
Related Fields
By now, almost every field you can think of has some application of AI either in use or in active development.
But some of the fields that originally contributed to the foundations of AI are:
- Philosophy - Examination of logic, reasoning, and the mind as a physical system.
- Mathematics - Introduction of algorithms, computation, and formal proofs.
- Psychology - Study of adaptation, perception, motor control.
- Economics - Theory of actors/individuals making rational decisions, on both micro and macro scales.
- Linguistics - Work on how complex knowledge can be represented.
AI Timeline
Condensed History of Major Events in AI
- 1943 - McCulloch and Pitts - Introduced the concept of Neural Networks.
- 1949-1950 - Hebb and Minsky - Improved methods for building and adjusting Neural Network weights.
- 1956 - Dartmouth Conference - Coined the term "Artificial Intelligence". Its participants created the major programs that would form the foundation of the first decades of AI research.
- 1952-1969 - Early stages of AI growth.
- 1969-1986 - Expert systems emerged, built on domain-specific knowledge, such as early medical systems designed to help diagnose patients.
- 1986 - Probabilistic reasoning and machine learning were introduced. Prior to this, it was difficult to represent uncertainty and probabilities.
- 2001-Present - Internet of Things (IoT), along with the "big data" boom, driven in large part by easy access to "the cloud".
- 2011-Present - Deep learning boom. Processing complex subjects became more computationally feasible, and the "big data" storage was already in place to supplement it.
Present State of AI
The present state of AI (as of 2021) can be described in three major "waves" of development:
- AI that Describes - Expert systems and other handcrafted knowledge systems.
- AI that Categorizes - Machine learning and deep learning.
- AI that Explains - Contextual adaptation. AI is just starting to do this, but it's still in experimental stages.
Tentative Future of AI
The expected future of AI (as of 2021) can be described by the following stages:
- Artificial Narrow Intelligence (ANI) - AI that performs specific, narrow tasks. We are here.
- Artificial General Intelligence (AGI) - AI that performs broad tasks. More importantly, can self-improve.
- Generally has abilities roughly comparable to general human capabilities.
- This may lead to AI that starts competing for human jobs.
- Artificial Super Intelligence (ASI) - AI that demonstrates high levels of intelligence and growth.
- Generally has abilities that far exceed human capabilities.
- May end up being either very beneficial, or very detrimental to human society.
Turing Tests
The "turing test" is one of the first tests for classifying something as an AI.
In short, a system passes this test if it can behave in such a manner that its behavior is indistinguishable from a human's behavior.
The original test goes as follows:
- The test has a single human "interrogator" in one room. This interrogator has access to a terminal.
- In a different room, one or more other humans participate and have access to a separate terminal.
- At least one AI participant is also present, and communicates through the terminals as well.
- The goal is for the interrogator to determine which participants are AI and which are humans (a rough sketch of this setup follows the list).
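As a rough illustration of that setup (a toy sketch only, not an implementation of the actual test; the participant replies below are hard-coded stand-ins), the flow might look like:

```python
def run_round(question, participants):
    """One round: the interrogator sends the same question to every hidden
    participant over their 'terminal' and sees only an anonymous label and
    the text that comes back."""
    return {label: respond(question) for label, respond in participants.items()}

# Hard-coded stand-ins for the hidden participants (one human, one AI).
participants = {
    "participant_a": lambda q: "Honestly, my week has been a bit hectic.",
    "participant_b": lambda q: "Input received. Week status: nominal.",
}

answers = run_round("How has your week been?", participants)
for label, answer in answers.items():
    print(label, "->", answer)

# The interrogator's whole job is to guess, from the answers alone,
# which label belongs to the AI.
interrogator_guess = "participant_b"
print("Interrogator guesses the AI is:", interrogator_guess)
```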
The point of all this is to answer questions such as:
- "Can machines think?"
- "Can computers imitate humans?"
- "Are computers able to fool humans?"
While this is a neat idea, the base concept isn't very compatible with mathematical analysis.
In other words, it is very hard to objectively analyze or quantify in any meaningful way.
General Types of AI
Generally speaking, AI can focus on either thinking or acting, and it can aim to do so either "rationally" or "like humans".
Note that it's often far less useful to make an AI that "thinks" in the desired way but takes no corresponding action.
Thus discussion tends towards AI that takes action in some manner.
Note that, as described below, both AI that "behaves humanly" and AI that "behaves rationally" have a place.
Which one is "better" depends on what the desired application is. But a majority of modern AI usage tends towards rational ones.
Humanistic AI Systems
Entire schools of thought have stemmed from researching how to make AI "like humans".
For example, Cognitive Science and Cognitive Neuroscience both emerged to study how humans think and behave.
In particular, these types of AI have been used to try to emulate emotions, create art (such as generated music), and otherwise respond dynamically to humans.
However, note that humans have positive and negative emotions, as well as irrational emotions and behaviors.
AI typically does not benefit from these, and emulating them can be a very complex matter to explore.
Rational AI Systems
"Rational behavior" is generally associated with logical "correct/false" thinking. IE: "This task was done correctly."
The problem with attributing everything to rationality is that not everything can be associated with "right and wrong".
Some ideas and concepts carry a degree of uncertainty, while others are simply not easily expressed in objective "this is correct" terms.
Note that a "rational system" can still be rational, even when it makes "incorrect" decisions.
A system is still considered rational as long as it makes "the most rational choice possible" at every given moment, using the information it has available.
The problem comes in when an AI system does not have full knowledge at a given moment, due to the environment or other factors (a small sketch of this idea follows). See Intelligent Agents for further details.
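As a small, hedged sketch of "the most rational choice possible with the information available": the agent below simply picks the action with the highest expected utility under its current (possibly incomplete) beliefs. The action names, probabilities, and utility values are made up for illustration.

```python
def most_rational_choice(actions):
    """Pick the action with the highest expected utility.

    `actions` maps an action name to a list of (probability, utility) pairs
    describing what the agent currently believes could happen. The choice can
    still turn out to be 'incorrect' if those beliefs were wrong or incomplete;
    it is rational given the information the agent had available.
    """
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    return max(actions, key=lambda a: expected_utility(actions[a]))

# Made-up example: carrying an umbrella vs. not, with uncertain weather.
beliefs = {
    "take_umbrella":  [(0.3, 5), (0.7, 8)],     # rain / no rain
    "leave_umbrella": [(0.3, -10), (0.7, 10)],
}
print(most_rational_choice(beliefs))  # -> "take_umbrella"
```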