Presentation Transcript


Expert System

SESSION 1

Introduction to Knowledge-based Intelligent Systems

By: H. Nematzadeh

What is intelligence?

Intelligence is the ability to think and understand instead of doing things by instinct or automatically. (Essential English Dictionary, Collins, London, 1990)

So intelligence may refer to someone or something.

What does 'think' mean?

Thinking is the activity of using your brain to consider a problem or to create an idea.

What can machines do?

Any intelligent system should have a brain (or an organ like a brain!) to think!

Intelligence = the ability to learn and understand, to solve problems and to make decisions. [from the AI point of view]

The goal of artificial intelligence (AI) as a science is to make machines do things that would require intelligence if done by humans.

What can machines do?

Can machines think? YES and NO!

Some people are smarter in some ways than others.

As humans, we all have the ability to learn and understand, to solve problems and to make decisions; however, our abilities are not equal and lie in different areas.

What can machines do?

If machines can think, some of them might be smarter than others in some ways!

Alan Turing also addressed the question “Can machines think?” by proposing the Turing Imitation Game in 1950.

Turing Imitation Game

Turing said we should ask, ‘Can machines pass a behaviour test for intelligence?’ He predicted that by the year 2000, a computer could be programmed to have a conversation with a human interrogator for five minutes and would have a 30 per cent chance of deceiving the interrogator that it was a human. (But his prediction failed!)

Turing Imitation Game

The test consists of two phases:

Phase 1: Interrogator - Man – Woman

Phase 2: Interrogator - Machine – Woman

Phase 1: the man should attempt to deceive the interrogator that he is the woman, while the woman has to convince the interrogator that she is the woman.

Turing Imitation Game

Turing Imitation Game

In the second phase of the game, the man is replaced by a computer programmed to deceive the interrogator as the man did. It would even be programmed to make mistakes and provide fuzzy answers in the way a human would. If the computer can fool the interrogator as often as the man did, we may say this computer has passed the intelligent behaviour test.

Turing Imitation Game

Turing Imitation Game

The interrogator is allowed to ask any questions, even provocative ones, in order to identify the machine. The interrogator may, for example, ask both the human and the machine to perform complex mathematical calculations, expecting that the computer will provide a correct solution and will do it faster than the human.

Turing Imitation Game

The interrogator may also attempt to discover the emotional nature of the human, and thus might ask both subjects to examine a short novel, a poem or even a painting. Obviously, the computer will be required here to simulate a human’s emotional understanding of the work.

Turing Imitation Game

Our brain stores the equivalent of over 10^18 bits and can process information at the equivalent of about 10^15 bits per second. By 2020, the brain will probably be modelled by a chip the size of a sugar cube – and perhaps by then there will be a computer that can play – even win – the Turing imitation game.

History of AI - Beginning

1. Dark Ages, or the birth of artificial intelligence (1943–56)

ANN (artificial neural network) by McCulloch and Pitts in 1943!

McCulloch = the second founding father of AI, after Alan Turing

Their first attempt almost failed because they modelled neurons as simple linear threshold units (on and off).

ANNs declined in the 1970s and revived in the 1980s!

History of AI

The term "artificial intelligence" was first used at the 1956 summer workshop at Dartmouth College, which was sponsored by IBM and had only ten researchers!

2. The rise of artificial intelligence (1956–late 1960s)

Great ideas and very limited success!

FORTRAN and LISP were among the deliverables.

Weak methods (general-purpose search methods) were used to solve problems: BFS, DFS.

History of AI

3. Unfulfilled promises, or the impact of reality (late 1960s–early 1970s)

A) P problems: a solution can be found in a polynomial number of steps, which is efficient.

NP and NP-complete problems are hard to solve.

B) AI attempted to solve problems that were too difficult.

E.g.: machine translation.

History of AI - ES

C) In 1971, the British government also suspended support for AI research!

4. The technology of expert systems, or the key to success (early 1970s–mid-1980s)

Researchers finally realised that the only way to deliver practical results was to solve typical cases in narrow areas of expertise by making large reasoning steps.

History of AI - DENDRAL

Previously, researchers believed that general-purpose search methods could solve problems the way humans do!!

DENDRAL was an expert system for determining the molecular structure of Martian soil (Stanford University, 1969).

The previous strategy was based on the generate-and-test method (remember the 8-queens problem):

Millions of possible structures could be generated!!! That's too bad.

Flashback! The 8-queens problem

There are even more answers than this one! How many?

92 distinct solutions; applying symmetry operations such as rotation gives 12 unique fundamental solutions.

Flashback! The 8-queens problem

1- Put 8 queens on the board.

2- Test whether the goal state has been reached: YES → answer; NO → go to step 1.

The search algorithm can be systematic = trying every possible candidate for a solution.

How big is the search space?

1- The systematic algorithm gives a search space of approximately 1.8×10^14: the maximum number of states to be checked systematically is 64×63×…×57 ≈ 1.8×10^14.

Think about the n-queens problem!
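As a concrete illustration of this loop, here is a minimal Python sketch of random generate-and-test for 8-queens (to keep the run short, the generator places one queen per column rather than anywhere on the board; the helper names are illustrative, not from the slides):

```python
import random

def random_candidate():
    # Generate: one queen per column, a random row for each column
    # (a simplification of "put 8 queens anywhere on the board").
    return [random.randrange(8) for _ in range(8)]

def test(rows):
    # Test: goal reached when no two queens share a row or a diagonal.
    for i in range(8):
        for j in range(i + 1, 8):
            if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i:
                return False
    return True

# Generate-and-test loop: keep generating candidates until the test succeeds.
tries = 0
while True:
    tries += 1
    rows = random_candidate()
    if test(rows):
        break
print(f"solution {rows} found after {tries:,} candidates")
```

On average this needs roughly 180,000 candidates (16,777,216 placements divided by 92 solutions), which is exactly why blind generate-and-test does not scale.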

8 queens

The previous method is called the incremental formulation; too bad! Search space = 1.8×10^14.

By applying a simple rule that constrains each queen to a single column (or row), the search space shrinks to 8^8 = 16,777,216.

An even better way is to generate only the legal states (placements in which no queen attacks another): the search space is then just 2,057. We do not create all states, just the legal ones! This method is a little stronger.

For the 100-queens problem, the naive incremental formulation has a search space of about 10^400, while the legal-states formulation has about 10^52.
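To make these counts concrete, a small Python check (assuming Python 3.8 or newer for math.perm) of the figures above, together with the 92 distinct solutions mentioned earlier:

```python
from itertools import permutations
from math import perm

# Naive incremental formulation: each of the 8 queens on any free square.
naive = perm(64, 8)                                     # 64 * 63 * ... * 57
print(f"naive incremental formulation: {naive:.2e}")    # ~1.8e14

# One queen per column, any row.
print(f"one queen per column: {8 ** 8:,}")              # 16,777,216

# Count the complete solutions: with one queen per row AND per column the
# rows form a permutation, so only the diagonal constraint remains.
def is_solution(rows):
    return all(abs(rows[i] - rows[j]) != j - i
               for i in range(8) for j in range(i + 1, 8))

solutions = sum(1 for rows in permutations(range(8)) if is_solution(rows))
print("distinct solutions:", solutions)                 # 92
```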

Does it ring a bell?!

In the figure below, B is a black knight ("horse") and W is a white knight. We want to exchange the W knights with the B knights using search algorithms. Which of the following search algorithms is the best?

1) A*   2) BFS   3) DFS   4) hill climbing

[Figure: a small board with two white knights (W) and two black knights (B) that must swap places.]

Don’t get shocked, try it!

At first glance, you may think A* or hill climbing are better than BFS and DFS because they are heuristic (strong methods). BUT they are not the answer!

Q: Sometimes weak methods are better. When?

A: When the search space is small.

Between DFS and BFS, it seems that BFS gives a better result, because it is complete and optimal! So, the answer is 2.
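For reference, a minimal BFS skeleton in Python; the board encoding and the knight-move neighbors function for this particular puzzle are left out and would have to be supplied:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search over an abstract state graph.

    Explores states level by level, so the first time the goal is reached
    the reconstructed path is a shortest one (complete, and optimal when
    every move costs the same).
    """
    frontier = deque([start])
    parent = {start: None}                 # also serves as the visited set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:       # walk back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None                            # goal unreachable
```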

Weak methods are not practical!

What are completeness, optimality and complexity in search algorithms?

Even though weak methods are sometimes(?) better than strong methods, in real-world (practical) problems the search space is usually big, as in n-queens, so weak methods are not always useful.

Weak methods usually don't have high performance (bad time and memory complexity).

We should look for domain-specific, non-general methods, like the concept behind expert systems.

Generate and Test Procedure

Is generate-and-test systematic or heuristic? It depends on the generate phase: it could be either systematic or heuristic.

8-queens related questions

Can I calculate all possible answers to the 8-queens problem? How?

YES: we can propose a systematic or random method to calculate all the answers, but the complexity is exponential; however, there are algorithms that can find a solution for n-queens in polynomial time.

Between BFS and DFS, which one is better to use for the n-queens problem?

Because we know the answer is at the bottom of the search tree, DFS finds the answer faster (next slide).

Uninformed strategies

Uniform-cost search: not worth mentioning here, since there is no cost on the edges.

Bidirectional search: formulating the goal is difficult.

Depth-limited search: ideal, even better than DFS, if the limit "l" is chosen cleverly.

DFS is good, but it is not complete!

BFS: finds the answer, but it is a bad idea; too much useless search has to be performed.

Systematic search

Systematic search is worth using only if:

A: the search space is small, or

B: a better (heuristic) algorithm cannot be found (NP-complete problems).

Generally, systematic searches should be avoided, since they have bad performance (exponential time, roughly 2^size).

Uninformed Search Complexities
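A standard summary of these complexities, following Russell and Norvig (b = branching factor, d = depth of the shallowest goal, m = maximum depth of the tree, l = depth limit):

Breadth-first search: time O(b^d), space O(b^d); complete, optimal for unit step costs.
Uniform-cost search: complete and optimal for positive step costs; time and space grow with the cost of the optimal solution.
Depth-first search: time O(b^m), space O(b·m); neither complete nor optimal in general.
Depth-limited search: time O(b^l), space O(b·l); complete only if l ≥ d.
Iterative deepening search: time O(b^d), space O(b·d); complete, optimal for unit step costs.
Bidirectional search (BFS in both directions): time and space O(b^(d/2)).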

Still weak methods!

Some scientists introduced fundamental new ideas in such areas as knowledge representation, learning algorithms, neural computing and computing with words. These ideas could not be implemented then because of the limited capabilities of computers, but two decades later they have led to the development of real-life practical applications.

N-queens and today!

Many algorithms can now solve n-queens: backtracking, heuristic algorithms, genetic algorithms… even for n > 100.

How does backtracking work for 4-queens?

The code is provided at the end of the slides.

How does a heuristic algorithm work for 4-queens?

Rok Sosic and Jun Gu even presented a heuristic algorithm that solves the 3,000,000-queens problem in 55 seconds!!
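For orientation, a minimal backtracking sketch for n-queens in Python, shown here for n = 4 (an illustrative sketch, not the code referenced at the end of the original slides):

```python
def solve_n_queens(n):
    """Place one queen per column, left to right, backtracking as soon as a
    partial placement contains an attack."""
    solutions = []

    def safe(rows, row):
        col = len(rows)
        return all(r != row and abs(r - row) != col - c
                   for c, r in enumerate(rows))

    def place(rows):
        if len(rows) == n:
            solutions.append(rows[:])
            return
        for row in range(n):
            if safe(rows, row):
                rows.append(row)
                place(rows)
                rows.pop()          # backtrack and try the next row

    place([])
    return solutions

print(solve_n_queens(4))            # [[1, 3, 0, 2], [2, 0, 3, 1]]
```

The point of backtracking is that a partial placement containing an attack is abandoned immediately, so most of the n^n candidates of plain generate-and-test are never produced at all.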

Generate and Test Varieties

From weak to strong:

Systematic (uninformed) generate and test: DFS, BFS, …

Heuristic (informed) generate and test: greedy, A*, simulated annealing, …

Plan (informed) generate and test: expert systems, e.g. DENDRAL

Generate and Test with Planning

First, the planning process uses constraint-satisfaction techniques and creates lists of recommended and contraindicated substructures. Then the generate-and-test procedure uses these lists so that it needs to explore only a limited set of structures. Constrained in this way, generate-and-test proved highly effective. A major weakness of planning is that it often produces inaccurate solutions, as there is no feedback from the world. But if it is used to produce only pieces of solutions, then the lack of detailed accuracy becomes unimportant.

So what is a weak method?

Weak methods, or general-purpose search methods, are uninformed, non-heuristic searches (DFS, BFS, …) that try to find the answer in big search spaces. They don't have high performance.

Expert systems use heuristics, with the help of rules, in a limited domain and a limited search space. They generate candidate solutions under constraint-satisfaction techniques.

History of AI - DENDRAL specifications

1. DENDRAL was a shift from general-purpose, weak methods to domain-specific, knowledge-intensive techniques.

2. It used heuristics (rules of thumb) in the form of rules elicited from human experts.

3. DENDRAL captured, analysed and expressed as rules an expert's "know-how" = knowledge engineering.

What does "know-how" mean?!

History of AI - MYCIN

MYCIN was a rule-based expert system for the diagnosis of infectious blood diseases. It also provided a doctor with therapeutic advice in a convenient, user-friendly manner. (Stanford, 1972)

History of AI – MYCIN specifications

1. MYCIN could perform at a level equivalent to human experts in the field and considerably better than junior doctors!

2. MYCIN's knowledge consisted of about 450 independent rules of IF-THEN form, derived from human knowledge in a narrow domain through extensive interviewing of experts.

History of AI – MYCIN specifications

3. The knowledge incorporated in the form of rules was clearly separated from the reasoning mechanism. The system developer could easily manipulate knowledge in the system by inserting or deleting some rules. (will be covered later in session 2)

4. Rules incorporated in MYCIN reflected the uncertainty associated with knowledge. (Self-study)
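To illustrate this separation of the rules from the reasoning mechanism, a toy forward-chaining sketch in Python (the rules and facts are made-up placeholders, not MYCIN's actual knowledge base, which also attaches certainty factors to its rules):

```python
# IF-THEN rules as plain data: (set of conditions, conclusion).
# Hypothetical toy rules, purely for illustration.
RULES = [
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
    ({"likely_enterobacteriaceae", "hospital_acquired"}, "consider_therapy_X"),
]

def forward_chain(facts, rules):
    """Keep firing rules whose IF-part holds until nothing new is concluded.
    The engine knows nothing about the domain; all knowledge is in `rules`,
    so rules can be added or removed without touching this function."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"gram_negative", "rod_shaped", "hospital_acquired"}, RULES))
```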

History of AI – PROSPECTOR

PROSPECTOR was an expert system for mineral exploration developed by the Stanford Research Institute (Duda et al., 1979). The project ran from 1974 to 1983.

Most expert systems were developed based on AI languages such as LISP, PROLOG, OPS…

DENDRAL, MYCIN and PROSPECTOR were classic expert systems. Over 2,500 medical expert systems alone had been developed by 1994!!!

History of AI – Difficulties of expert systems

1. Expert systems are restricted to a very narrow domain of expertise. If a patient has more than one disease, we cannot rely on MYCIN. In fact, the therapy prescribed for the blood disease might even be harmful because of the other disease.

2. Because of the narrow domain, expert systems are not as robust and flexible as a user might want. When given a task different from the typical problems, an expert system might attempt to solve it and fail.

History of AI – Difficulties of expert systems

4. Expert systems have limited explanation capabilities. They can only show the sequence of the rules applied for the solution.

5. Expert systems are also difficult to verify and validate. The task of identifying incomplete and inconsistent knowledge is very difficult.

Good news: neural networks and fuzzy logic can complement expert systems in some ways.

History of AI – Difficulties of expert systems

6. Expert systems are built individually and cannot be developed fast. It might take from five to ten person-years to build an expert system to solve a moderately difficult problem (Waterman, 1986). Complex systems such as DENDRAL, MYCIN or PROSPECTOR can take over 30 person-years to build!

Expert systems, especially the first generation, have little or no ability to learn from their experience.

History of AI - Expert Systems

Architecture of a simple expert system

Artificial Neural Network

Natural neuron

Artificial neural network

History of AI - NN

5. How to make a machine learn, or the rebirth of neural networks (mid-1980s onwards)

A new look at neural networks; the stimuli:

1- Powerful PCs and workstations suitable for ANNs emerged; previously they didn't exist!

2- Psychological and financial reasons in the 1970s.

3- There was a need for brain-like information processing.

History of AI - NN

Grossberg's adaptive resonance theory provided the basis for a new class of neural networks.

Hopfield networks attracted much attention in the 1980s (Hopfield, 1982).

Kohonen published a paper on self-organised maps (Kohonen, 1982).

Barto, Sutton and Anderson published their work on reinforcement learning and its application in control (Barto et al., 1983).

History of AI - NN

But the real breakthrough came in 1986 when the back-propagation learning algorithm, first introduced by Bryson and Ho in 1969 (Bryson and Ho, 1969), was reinvented by Rumelhart and McClelland.

In 1988, Broomhead and Lowe found a procedure to design layered feedforward networks using radial basis functions, an alternative to multilayer perceptrons (Broomhead and Lowe, 1988).

History of AI - Genetics

A general view of how a genetic algorithm works

History of AI - Genetics

6. Evolutionary computation, or learning by doing (early 1970s onwards)

By simulating biological evolution, we might expect to discover how living systems are propelled towards high-level intelligence.

Evolutionary computation works by simulating a population of individuals, evaluating their performance, generating a new population, and repeating this process a number of times.

History of AI - Genetics

1- Genetic Algorithm

The concept of genetic algorithms was introduced by John Holland in the early 1970s (Holland, 1975). He developed an algorithm for manipulating artificial ‘chromosomes’ (strings of binary digits), using such genetic operations as selection, crossover and mutation.
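A minimal genetic algorithm sketch in Python in this spirit: binary chromosomes evolved by selection, crossover and mutation (the fitness function, which simply counts 1-bits, and all parameter values are illustrative assumptions):

```python
import random

LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.02

def fitness(chrom):
    # Toy fitness: number of 1-bits in the chromosome ("OneMax").
    return sum(chrom)

def crossover(a, b):
    point = random.randrange(1, LENGTH)            # one-point crossover
    return a[:point] + b[point:]

def mutate(chrom):
    return [bit ^ 1 if random.random() < MUTATION else bit for bit in chrom]

def pick(population):
    # Selection: binary tournament, the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(pick(population), pick(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(best, fitness(best))
```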

History of AI - Genetics

2- Evolutionary Strategies

In the early 1960s, independently of Holland's genetic algorithms, Ingo Rechenberg and Hans-Paul Schwefel, students of the Technical University of Berlin, proposed a new optimisation method called evolutionary strategies (Rechenberg, 1965). Evolutionary strategies were designed specifically for solving parameter optimisation problems in engineering.

History of AI - Genetics

3- Genetic Programming

Genetic programming represents an application of the genetic model of learning to programming. Its goal is to evolve not a coded representation of some problem, but rather a computer program that solves the problem. That is, genetic programming generates computer programs as the solution. (Koza, 1994)

History of AI - Fuzzy

Prof. Lotfi A. Zadeh: Father of Fuzzy Logic

History of AI - Fuzzy

7. The new era of knowledge engineering, or computing with words (late 1980s onwards)

Fuzzy logic, or fuzzy set theory, was introduced by Professor Lotfi Zadeh, Berkeley's electrical engineering department chairman, in 1965 (Zadeh, 1965). It provided a means of computing with words. It uses IF-THEN rules:

IF speed is high THEN stopping_distance is long
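As a minimal sketch of how such a rule could be evaluated (the membership function and its 60 to 120 km/h ramp are illustrative assumptions, not taken from any particular fuzzy controller):

```python
def high_speed(speed_kmh):
    """Degree to which a speed counts as 'high': 0 below 60 km/h,
    1 above 120 km/h, a linear ramp in between."""
    return max(0.0, min(1.0, (speed_kmh - 60) / 60))

def stopping_distance_is_long(speed_kmh):
    # The rule "IF speed is high THEN stopping_distance is long" fires
    # to the same degree as its condition is true.
    return high_speed(speed_kmh)

for speed in (50, 80, 110):
    print(f"speed {speed} km/h -> 'stopping distance is long' "
          f"to degree {stopping_distance_is_long(speed):.2f}")
```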

History of AI - Fuzzy

Fuzzy theory, ignored in the West, was taken seriously in the East – by the Japanese. It has been used successfully since 1987 in Japanese-designed dishwashers, washing machines, air conditioners, television sets, copiers and even cars.

History of AI - Fuzzy

Benefits of fuzzy expert systems compared to conventional expert systems:

A) Improved computational power: fuzzy rule-based systems perform faster than conventional expert systems and require fewer rules.

B) Improved cognitive modelling: fuzzy systems allow the encoding of knowledge in a form that reflects the way experts think about a complex problem (high, low, fast, …).

History of AI - Fuzzy

C) The ability to represent multiple experts: fuzzy expert systems can help to represent the expertise of multiple experts when they have opposing views. A common strategy in conventional expert systems is to find one expert!

Expert, neural and fuzzy systems no longer compete; rather, they complement each other.

From now onwards:

1- Introduction to knowledge-based intelligent systems (mid-term)

2- Expert systems (exam)

3- Fuzzy logic

4- Local search algorithms: HC, SA, GA … (final exam)

5- Artificial neural networks

References

1- http://intelligence.worldofcomputing.net

2- Artificial Intelligence: A Modern Approach - Stuart Russell and Peter Norvig

3- Artificial Intelligence - Michael Negnevitsky

4- http://mhesham.wordpress.com/category/artificial-intelligence/

5- http://faculty.simpson.edu/lydia.sinapova/www/cmsc250/LN250_Weiss/L24-BreadthDepth.htm