Artificial Intelligence/Agent Learning


An agent's ability to learn is essential for unknown environments. Learning exposes the agent to reality directly, rather than relying on a human to define and write that reality down.

Learning is meant to modify an agent's behavior so that its performance improves. How learning is implemented depends on:

  • What is already known by the agent/KB.
  • How performance is evaluated.
  • What function is to be adjusted by learning.
  • What kind of feedback is available to learn from.


Terminology

  • Inductive Learning - Learns a general function or rule that maps inputs to outputs, based on example input-output pairs.
    • This tends to be one of the simplest learning implementations.
  • Supervised Learning - Trains on a dataset labeled with the "correct" outputs, learning to predict the correct output for new inputs.
  • Unsupervised Learning - Learns patterns or structure in the data over time without being given any correct answers.
  • Reinforcement Learning - Learns from a series of rewards and punishments.
  • Classification - Given a set of inputs, the AI chooses from a finite set of possible outputs, aka "prediction labels" (see the sketch after this list).
  • Binary Classification - Labels are one of two classes.
    • Ex: "Email is either spam or not spam."
  • Single-Label Multiclass Classification - The AI chooses exactly one label from a set of more than two possible labels. The most common example is labeling an image.
    • Ex: "Image is either a cat, a dog, a car, or a boat."
  • Multi-Label Multiclass - The most complicated classification type. Similar to Single-Label Multiclass above, except the AI can assign multiple labels to a single set of input values.
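
As a rough illustration of how these classification types differ, the sketch below shows one common way to encode each kind of target label. It is not from the original article; the label names, values, and the use of numpy arrays are assumptions made purely for the example.

  # Illustrative sketch of the three classification target formats.
  # Label names and values are invented for the example.
  import numpy as np

  # Binary classification: each sample gets one of two classes.
  y_binary = np.array([0, 1, 1, 0])        # e.g. 0 = "not spam", 1 = "spam"

  # Single-label multiclass: each sample gets exactly one of several classes.
  labels = ["cat", "dog", "car", "boat"]
  y_multiclass = np.array([0, 2, 1, 3])    # indices into `labels`

  # Multi-label multiclass: each sample may carry several labels at once,
  # commonly encoded as a binary indicator matrix (rows = samples, columns = labels).
  y_multilabel = np.array([
      [1, 0, 0, 0],   # just "cat"
      [1, 1, 0, 0],   # "cat" and "dog"
      [0, 0, 1, 1],   # "car" and "boat"
  ])

  print(y_binary.shape, y_multiclass.shape, y_multilabel.shape)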


Learning Approaches

Below are a handful of common approaches to AI learning.


No single method is strictly better than the others; each comes with its own benefits, trade-offs, and level of implementation difficulty.

Which approach to use really depends on the context of the AI, and what the ultimate end-goal is.

Probabilistic Models

A learning approach that trains models using statistics and probability.

This is one of the earliest forms of machine learning, and is still widely used today.

Many principles used for this are fairly old, and actually predate the age of digital computers. Ex: "Logistic Regression" and "Naive Bayes".


Tends to assume all input features are independent of each other. That is, one feature has no impact on or correlation with another (this is the "naive" assumption behind Naive Bayes).
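
As a minimal sketch of a probabilistic model, the snippet below fits a Gaussian Naive Bayes classifier with scikit-learn. The library choice, toy data, and feature values are all assumptions for illustration; the article does not prescribe any of them. GaussianNB models each feature independently per class, which is exactly the independence assumption described above.

  # Minimal probabilistic-model sketch: Gaussian Naive Bayes via scikit-learn.
  # Toy data and library choice are illustrative assumptions.
  import numpy as np
  from sklearn.naive_bayes import GaussianNB

  # Toy training data: two features per sample, binary class labels.
  X_train = np.array([[1.0, 2.1], [1.2, 1.9], [3.0, 3.5], [3.2, 3.6]])
  y_train = np.array([0, 0, 1, 1])

  model = GaussianNB()
  model.fit(X_train, y_train)

  # Predict the class (and class probabilities) for a new point.
  X_new = np.array([[1.1, 2.0]])
  print(model.predict(X_new))        # -> [0]
  print(model.predict_proba(X_new))  # per-class probabilities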

Kernel Methods

Solves problems by mapping data points into a feature space (often a higher-dimensional one), then attempting to find a hyperplane boundary that neatly separates the data into two distinct sets.

When new data points are added, their class is determined by which side of this boundary they fall on.
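
A minimal sketch of this idea, using a support vector machine from scikit-learn. Both the library and the RBF kernel are assumptions made for the example; the article does not name either.

  # Minimal kernel-method sketch: an SVM with an RBF kernel via scikit-learn.
  # Toy data, library, and kernel choice are illustrative assumptions.
  import numpy as np
  from sklearn.svm import SVC

  # Toy data: two small clusters of 2-D points with binary labels.
  X_train = np.array([[0.0, 0.1], [0.2, 0.0], [2.0, 2.1], [2.2, 1.9]])
  y_train = np.array([0, 0, 1, 1])

  model = SVC(kernel="rbf")   # the kernel implicitly maps points into a richer feature space
  model.fit(X_train, y_train)

  # New points are labelled by which side of the learned boundary they fall on.
  print(model.predict(np.array([[0.1, 0.2], [2.1, 2.0]])))  # -> [0 1]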

Decision Trees

Flowchart-like structures, which organize data by effectively applying a series of nested if-else checks to the input features.

They tend to be very easy to visualize and interpret.
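
To make the "nested if-else" picture concrete, here is a hand-written example of the kind of structure a decision tree encodes. The features, thresholds, and labels are made up for illustration; in practice a learning algorithm derives them automatically from training data.

  # Hand-written sketch of the flowchart structure a decision tree encodes.
  # Feature names, thresholds, and labels are invented for illustration;
  # a real learner would derive them from training data.
  def classify_fruit(weight_grams: float, is_smooth: bool) -> str:
      if weight_grams > 150:
          if is_smooth:
              return "apple"
          return "orange"       # heavy but bumpy-skinned
      if is_smooth:
          return "cherry"
      return "raspberry"

  print(classify_fruit(170, True))    # -> apple
  print(classify_fruit(90, False))    # -> raspberry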

Neural Networks

By far one of the most complex forms of learning, but can also be one of the most robust and accurate, particularly for large, complex datasets and problems.

See Neural Networks.