Artificial Intelligence
The term "Artificial Intelligence" (AI) was coined in 1956 to describe "machines capable of performing tasks which have characteristics resembling human intelligence".
As of 2021, AI describes the theory and development of systems that "can perform tasks that normally require human intelligence".
A few examples are:
- Visual Perception
- Speech Recognition
- Decision Making
- Language Translation
Terminology
- Agent - An entity that can act. Generally does so to achieve the "best possible" outcome.
- Intelligent Agent - An agent that perceives input from its environment in some manner and uses that input to choose its actions.
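As a rough illustrative sketch (every name here is hypothetical, not from any particular library), an intelligent agent can be modeled as a percept-to-action loop: it reads input from the environment and picks the action it expects to produce the "best possible" outcome.

```python
# Minimal sketch of an intelligent agent. The percept is a temperature
# reading; the action is a heating command. All names are illustrative.

class ThermostatAgent:
    """A trivial intelligent agent: perceives temperature, acts on the heater."""

    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        # Choose the action that moves the environment toward the goal state.
        return "heat_on" if percept < self.target else "heat_off"

agent = ThermostatAgent(target=20.0)
print(agent.act(18.5))  # -> heat_on
print(agent.act(21.0))  # -> heat_off
```

Even this toy example shows the agent/environment split: the environment supplies percepts, and the agent's only influence on it is through the actions it returns.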
Related Fields
By now, almost every field you can think of has some application of AI either being used, or actively developed.
But some of the fields that originally contributed to the foundations of AI are:
- Philosophy - Examination of logic, reasoning, and the mind as a physical system.
- Mathematics - Introduction of algorithms, computers, formal proofs.
- Psychology - Study of adaptation, perception, motor control.
- Economics - Theory of actors/individuals making rational decisions, on both micro and macro scales.
- Linguistics - Determining complicated knowledge representations.
Condensed History of Major Events in AI
- 1943 - McCulloch and Pitts - Introduced the concept of Neural Networks.
- 1949-1950 - Hebb and Minsky - Improved methods of building Neural Network weights.
- 1956 - Dartmouth Conference - Coined the term "Artificial Intelligence". Created major programs that would build the foundation for the first decades of AI research.
- 1952-1969 - Early stages of AI growth.
- 1969-1986 - Expert systems emerged from domain-specific knowledge, such as early medical systems designed to help diagnose patients.
- 1986 - Probabilistic reasoning and machine learning were introduced. Prior to this, it was difficult to represent uncertainties and probabilities.
- 2001-Present - Internet of Things (IoT). Also, the "big data" boom, in large part due to easy accessibility to "the cloud".
- 2011-Present - Deep learning boom. Processing of complex subjects became computationally feasible, and we already had the "big data" storage to supplement it.
Turing Tests
The "Turing test" is one of the first tests for classifying something as an AI.
In short, a system passes this test if it can behave in such a manner that its behavior is indistinguishable from a human's behavior.
The original test goes as follows:
- The test has a single human "interrogator" in one room. This interrogator has access to a terminal.
- Then in a different room, one or more other humans participate and have access to a separate terminal.
- At least one AI participant is also present, and communicates through the terminals as well.
- The goal is for the interrogator to determine which other participants are AI, and which ones are humans.
The point of all this is to answer questions such as:
- "Can machines think?"
- "Can computers imitate humans?"
- "Are computers able to fool humans?"
While this is a neat idea, the base concept isn't very compatible with mathematical analysis.
In other words, it is very hard to objectively analyze or quantify in any meaningful way.
General Types of AI
Generally speaking, AI can focus on either thinking or acting, and it can try to do so either "rationally" or "like humans".
Note that it's often far less useful to make an AI that "thinks" in the desired way but takes no corresponding action.
Thus discussion tends towards AI that takes action in some manner.
Humanistic AI Systems
Entire schools of thought have stemmed from researching how to make AI "like humans".
For example, Cognitive Science and Cognitive Neuroscience both grew out of efforts to understand how humans think and behave.
In particular, these types of AI have been used to try to emulate emotions, create art (such as generated music), and otherwise respond dynamically to humans.
Rational AI Systems
"Rational behavior" is generally associated with logical "correct/false" thinking, e.g., "This task was done correctly."
The problem with attributing everything to rationality is that not everything can be associated with "right and wrong".
Some ideas and concepts carry a degree of uncertainty, while others are simply not easily expressible in objective "this is correct" analysis.
Note that a "rational system" can still be rational, even when it makes "incorrect" decisions.
A system is still considered rational as long as it makes "the most rational choice possible" at every given moment, using the information it had available.
The problem arises when an AI system does not have full knowledge at a given moment, due to the environment or other factors. See the section below for further details.
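The idea that a rational choice can still lead to an "incorrect" outcome can be sketched in code. This is a minimal illustration, not a standard implementation: the agent picks the action with the highest expected utility under its current (possibly imperfect) beliefs, and all probabilities and utilities below are made-up numbers.

```python
# Sketch of rational choice under uncertainty: pick the action whose
# expected utility is highest given the information currently available.
# If the world turns out differently, the outcome may be "wrong", but
# the choice was still rational. All values here are illustrative.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# The agent's beliefs about each action's possible outcomes (rain vs. no rain):
actions = {
    "take_umbrella":  [(0.3, 50), (0.7, -5)],    # EU = 11.5
    "leave_umbrella": [(0.3, -100), (0.7, 10)],  # EU = -23.0
}
print(rational_choice(actions))  # -> take_umbrella
```

Note that even if it does not rain, carrying the umbrella was still the rational choice: given the agent's information at decision time, it maximized expected utility.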