TDT4136 - Introduction to Artificial intelligence

[TOC]

Intro

These are my notes for TDT4136. They reflect my own understanding of the subject and may not be completely correct or accurate. They are also written in slightly below average Norwegian-English.

Why do I post them here?

There are a couple of reasons. The first is convenience. I like to write Markdown as it makes it easy to structure text without taking focus away from the writing, and it has great readability both uncompiled and compiled. It is also very practical to have the notes available on the web, as I like to read them on my iPad and they are faster to access here. The last reason is that someone else may find them useful.

Search

Are all of the following traditionally classified as informed search methods: A*, iterative deepening and hill climbing? No: iterative deepening is an uninformed method.

State space

In searching, we can look at the state space as a graph where each node is a state in the world and every edge is an action.

The components of a state space are (see the sketch after this list):

  • Initial state
  • Possible actions
  • Transition model
  • Goal state
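
To make this concrete, here is a minimal sketch of a problem definition in Python. The `Problem` class and the toy graph are just my own example, not from the course:

```python
# Minimal search-problem skeleton mirroring the four components above.
# The class name and the toy graph are hypothetical.
class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial  # initial state
        self.goal = goal        # goal state
        self.graph = graph      # adjacency dict: state -> {neighbour: cost}

    def actions(self, state):
        """Possible actions: here, which neighbour to move to."""
        return list(self.graph[state])

    def result(self, state, action):
        """Transition model: taking the action lands us in that neighbour."""
        return action

    def is_goal(self, state):
        return state == self.goal

# Toy instance: move from A to C along weighted edges.
toy = Problem("A", "C", {"A": {"B": 1}, "B": {"C": 2}, "C": {}})
```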

Uninformed search

The methods have access only to the problem definition.

  • BFS
  • Uniform-cost search
  • DFS
  • Iterative deepening search
  • Bidirectional search

Informed search

Methods that may have access to a heuristic function h(n) that estimates the cost of a solution from node n.

  • Generic best-first search
  • Greedy best-first search
  • A* search
  • Recursive best-first search
  • Simplified memory-bounded A* (SMA*, memory efficient)

Depth first search

Greedy best-first search using h(n) = -depth(n).

Admissible heuristic

A heuristic is admissible if it is equal to or lower than the actual cost to the goal, i.e. $h(n) \leq h^*(n)$. It must never be an overestimate.

Consistent heuristic

A heuristic is consistent if the cost of moving from one node to another is at least as large as the resulting reduction in the heuristic: $h(A) \leq cost(A,B) + h(B)$. For example: node A with h(A) = 18 can go to node B with h(B) = 3 for a cost of 16, and 18 ≤ 16 + 3 holds.

Genetic algorithm

The basic Genetic algorithm is stochastic beam search supplemented with a crossover operator.
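
A rough sketch of this in Python; the bitstring fitness problem and all parameter values are made up for the example:

```python
import random

def genetic_algorithm(pop, fitness, mutate_rate=0.05, generations=100):
    """Minimal GA: fitness-weighted selection, one-point crossover, mutation."""
    for _ in range(generations):
        # +1 keeps the selection weights positive even if all fitness is 0.
        weights = [fitness(ind) + 1 for ind in pop]
        new_pop = []
        for _ in range(len(pop)):
            mum, dad = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, len(mum))            # one-point crossover
            child = mum[:cut] + dad[cut:]
            # Flip each bit with a small probability (mutation).
            child = [1 - g if random.random() < mutate_rate else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy problem: maximise the number of 1-bits in a string of length 10.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
print(genetic_algorithm(population, fitness=sum))
```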

A*

A* is a best-first search algorithm that in every iteration expands the node with the lowest $f(n) = g(n) + h(n)$, where $g(n)$ is the cost of the path so far and $h(n)$ is an admissible heuristic estimate of the remaining cost.
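
A minimal A* sketch in Python, assuming the caller supplies a `neighbours` function and an admissible `h`:

```python
import heapq
import itertools

def a_star(start, goal, neighbours, h):
    """Expand the node with the lowest f(n) = g(n) + h(n).

    neighbours(state) yields (next_state, step_cost) pairs; h must be admissible.
    """
    counter = itertools.count()   # tie-breaker so the heap never compares states
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbours(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy usage; h = 0 (trivially admissible) makes this behave like uniform-cost search.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(a_star("A", "C", lambda s: graph[s], h=lambda s: 0))   # (['A', 'B', 'C'], 2)
```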

Environments

Fully observable

The sensors give the agent access to the complete state of the environment at each point in time.

The agent does not need to maintain any internal state to keep track of the world.

Partially observable

The sensors are inaccurate or noisy, so the state of the world is less clear.

Agent must make informed guesses about the world.

Deterministic

The next state is only dependent on the current state and the agents action.

Any action will have a single guaranteed effect. There is no uncertainty.

Strategic

A strategic environment is deterministic except for the actions of other agents; they are the only source of uncertainty.

Stochastic

There is some uncertainty about the outcome of an agent action.

It is harder to create an agent for non-deterministic environments.

Episodic

Every action of an agent is an atomic episode. Each episode consists of the agent's calculation of the next action and the action itself.

The action in one episode does not depend on any episode other than the current one.

Sequential

In a sequential environment, every current decision can affect future decisions. This is much harder than episodic, because the agent needs to think ahead.

Discrete

There is a finite number of states and actions. For example, chess.

Continuous

Everything happens in real time and states and actions take continuous values, like for a taxi driver.

Static

You can assume that the environment does not change between agent actions. Time is not something the agent needs to care about.

Dynamic

Dynamic environments can change while the agent is thinking, which requires quick decision making from the agent. Changes can happen that the agent cannot control.

Known

The outcomes for all actions are given. The agent knows all the rules, like gravity, but the environment can still be partially observable if the sensors are not working properly.

Unknown

The agent knows nothing; it will have to learn all the rules by itself.

Agents

  • Single: Sudoku, Crossword
  • Multi: Chess, Go, Ludo

All the agents can be turned into learning agents.

Simple Reflex agents

This agent is the most basic of them all: it only checks the current percept and ignores everything else.

The whole agent function is based on condition-action rules: if condition, then action.

This agent can only succeed when the environment is fully observable. In a partially observable environment it will fall into infinite loops most of the time.
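
A tiny sketch of the idea in Python. The vacuum-world percepts and actions are the usual textbook example, not something specific from these notes:

```python
# Condition-action rules for the classic two-square vacuum world.
RULES = [
    (lambda p: p["dirty"],           "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"location": "A", "dirty": True}))   # Suck
```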

Model/state-based reflex agents

This agent can handle a partially observable environment. The current state of the world is stored, so the agent maintains some kind of description of the parts of the world it cannot see. This can be seen as a model of "how the world works". This information is maintained between actions and updated.

The agent function chooses an action based on the current percept and some past information.
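
A sketch of the same made-up vacuum world with internal state, to contrast with the simple reflex agent above:

```python
class ModelBasedVacuumAgent:
    """Vacuum world with an internal model of the squares the agent
    cannot currently see (a hypothetical illustration)."""

    def __init__(self):
        self.model = {"A": "unknown", "B": "unknown"}   # believed world state

    def act(self, percept):
        loc = percept["location"]
        # Update the model with what the sensors report right now.
        self.model[loc] = "dirty" if percept["dirty"] else "clean"
        if percept["dirty"]:
            return "Suck"
        other = "B" if loc == "A" else "A"
        # Use remembered state: only travel if the other square may be dirty.
        if self.model[other] != "clean":
            return "Right" if loc == "A" else "Left"
        return "NoOp"

agent = ModelBasedVacuumAgent()
print(agent.act({"location": "A", "dirty": False}))   # Right (B still unknown)
```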

Goal-based agents

The goal-based agents further improve the model-based agents by adding goal information, which describes situations that are desirable. This allows the agent to choose a goal state instead of a non-goal state when there are multiple possibilities.

Utility-based agents

This agent further extends the goal-based agent by adding a utility function which can evaluate how good a state is. This makes it possible to always choose the best state we possibly can. It is often referred to as evaluating how happy we will be in the next state.

Learning agents

All the agents above can be implemented as learning agents. A learning agent can use information it gathers in an environment, which allows it to initially operate in an unknown environment.

A learning agent has four elements: the critic, the learning element, the problem generator, and the performance element. The learning element is responsible for making improvements and the performance element is responsible for choosing the actions. The learning element uses feedback from the critic to determine how the performance element should be modified to do better in the future. The performance element is what we, in the previous agents, regarded as the whole agent function. The last element, the problem generator, is responsible for suggesting actions that will lead to new and informative experiences.

Logic

Is the following valid? (A → (B → C)) → (A → C) No: with A true and B, C false, the antecedent (A → (B → C)) is true while (A → C) is false.

The general rule is: ∀ goes with ⇒, ∃ goes with ∧.

In general, ∀x P(x) and ¬∃x ¬P(x) are equivalent, but one is shorter to write.

In general, ∃x P(x) and ¬∀x ¬P(x) are equivalent, but one is shorter to write.
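
Both dualities can be sanity-checked by brute force over a small finite domain; the domain and predicate below are arbitrary:

```python
# Brute-force check of the two quantifier dualities above.
domain = range(10)
P = lambda x: x % 2 == 0        # an arbitrary predicate

assert all(P(x) for x in domain) == (not any(not P(x) for x in domain))
# ∀x P(x)  ≡  ¬∃x ¬P(x)

assert any(P(x) for x in domain) == (not all(not P(x) for x in domain))
# ∃x P(x)  ≡  ¬∀x ¬P(x)
```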

Valid: true in all models (a tautology).

Satisfiable: true in at least one model.

Entailment: $a \models b$ means that b is true in every model where a is true. $a \models b$ holds if and only if $a \Rightarrow b$ is valid.

Soundness: An argument is sound if and only if the argument is valid and all its premises are true.

Valid example:

p1: All men are mortal.
p2: Socrates is a man.
a:  Socrates is mortal.

Invalid example:

p1: P => Q
p2: Q
a:  P

Since the conclusion can be false even when both premises are true, the argument is not valid, and therefore not sound.

Knowledge base

Truth table

| $P$ | $Q$ | $\lnot P$ | $P \land Q$ | $P \lor Q$ | $P \Rightarrow Q$ | $P \models Q$ | $P \Leftrightarrow Q$ |
|-----|-----|-----------|-------------|------------|-------------------|---------------|-----------------------|
| 0   | 0   | 1         | 0           | 0          | 1                 | 1             | 1                     |
| 0   | 1   | 1         | 0           | 1          | 1                 | 1             | 0                     |
| 1   | 0   | 0         | 0           | 1          | 0                 | 0             | 0                     |
| 1   | 1   | 0         | 1           | 1          | 1                 | 1             | 1                     |
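
The table can be regenerated by enumerating all assignments; note that the $P \models Q$ column coincides with $P \Rightarrow Q$ read row by row, so it is left out of this sketch:

```python
from itertools import product

# Re-derive the rows of the table above.
print("P Q ¬P ∧ ∨ ⇒ ⇔")
for p, q in product([0, 1], repeat=2):
    print(p, q, int(not p), p and q, p or q, int(not p or q), int(p == q))
```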

Logical equivalence

| Statement 1 | Statement 2 | Name |
|---|---|---|
| $(a \land b)$ | $(b \land a)$ | commutativity |
| $(a \lor b)$ | $(b \lor a)$ | commutativity |
| $((a \land b) \land c)$ | $(a \land (b \land c))$ | associativity |
| $((a \lor b) \lor c)$ | $(a \lor (b \lor c))$ | associativity |
| $\lnot(\lnot a)$ | $a$ | double-negation |
| $(a \Rightarrow b)$ | $(\lnot b \Rightarrow \lnot a)$ | contraposition |
| $(a \Rightarrow b)$ | $(\lnot a \lor b)$ | implication |
| $(a \Leftrightarrow b)$ | $((a \Rightarrow b) \land (b \Rightarrow a))$ | biconditional |
| $\lnot (a \land b)$ | $(\lnot a \lor \lnot b)$ | De Morgan |
| $\lnot (a \lor b)$ | $(\lnot a \land \lnot b)$ | De Morgan |
| $(a \land (b \lor c))$ | $((a \land b) \lor (a \land c))$ | distributivity of ∧ over ∨ |
| $(a \lor (b \land c))$ | $((a \lor b) \land (a \lor c))$ | distributivity of ∨ over ∧ |
| $\lnot (\forall x\, P)$ | $\exists x\, \lnot P$ | De Morgan (quantifiers) |
| $\lnot (\exists x\, P)$ | $\forall x\, \lnot P$ | De Morgan (quantifiers) |

Inference

Inference is the process of arriving at a logical conclusion based on given premises that you know or assume to be true.

Example:

  1. All men are mortal
  2. Socrates is a man
  3. Therefore, Socrates is mortal

Example of wrong inference:

  1. All apples are fruit.
  2. Bananas are fruit.
  3. Therefore, bananas are apples

Inference Engine

An inference engine is a tool that can be used to expand the knowledge base of an expert system. It does this by applying forward and backward chaining to the given facts. The new information that becomes available is added to the knowledge base and used in later queries against the inference engine.

Chaining

Forward chaining

Forward chaining is an inference method that uses the data we already know to derive new facts by repeatedly applying inference rules.

The inference example above derives, with forward chaining, that Socrates is mortal from the rule defined in line 1: if x is a man, then x is mortal.

We can say that forward chaining looks at the if clause of an if/then inference rule.

Works best in rule-based systems with complex consequent structures.
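
A minimal propositional forward chainer in Python; the rule format (a set of premises plus one conclusion) is my own choice for the example:

```python
def forward_chain(rules, facts):
    """Fire every rule whose premises are all known until nothing new appears.

    rules: list of (premises, conclusion) pairs, e.g. ({"man"}, "mortal").
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"croaks", "eats flies"}, "frog"), ({"frog"}, "green")]
print(forward_chain(rules, {"croaks", "eats flies"}))
# {'croaks', 'eats flies', 'frog', 'green'}
```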

Backward chaining

Backward chaining, on the other hand, is the process of working your way from the goal back to the information connected to it. We can say that backward chaining looks at the then clause of an if/then inference rule.

For example, with the rule "If X croaks and eats flies, then X is a frog", we start from the goal "X is a frog" and work backwards to check whether "X croaks and eats flies" holds.
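
And the corresponding backward chainer, which tries to prove a goal by recursively proving the premises of a rule that concludes it:

```python
def backward_chain(rules, facts, goal, seen=frozenset()):
    """Prove `goal` by finding a rule concluding it and proving its premises."""
    if goal in facts:
        return True
    if goal in seen:                       # guard against cyclic rules
        return False
    return any(
        all(backward_chain(rules, facts, p, seen | {goal}) for p in premises)
        for premises, conclusion in rules
        if conclusion == goal
    )

rules = [({"croaks", "eats flies"}, "frog")]
print(backward_chain(rules, {"croaks", "eats flies"}, "frog"))   # True
```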

In the wrong-inference example above, premises one and two are clearly right, while the third line, the conclusion, is wrong.

In AI, inference is often used to extend a knowledge base.

Random

Frame-based representation is mainly a declarative representation.

Extrinsic: olive oil has weight as a property (it depends on the amount of oil).

Intrinsic: olive oil has density as a property (it belongs to the substance itself).

PEAS (Performance, Environment, Actuators, Sensors)

Turing test

Must haves:

  • Knowledge representation
  • Natural language processing
  • Automated reasoning
  • Machine learning