AI Definitions: The study of how to make programs/computers do things that people do better

Presentation Transcript

1. AI Definitions
- The study of how to make programs/computers do things that people do better
- The study of how to make computers solve problems which require knowledge and intelligence
- The exciting new effort to make computers think ... machines with minds
- The automation of activities that we associate with human thinking (e.g., decision-making, learning ...)
- The art of creating machines that perform functions that require intelligence when performed by people
- The study of mental faculties through the use of computational models
- A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes
- The branch of computer science that is concerned with the automation of intelligent behavior
- Thinking machines or machine intelligence
- Studying cognitive faculties
- Problem solving and CS

2. What is Intelligence?
Is there a "holistic" definition for intelligence? Here are some definitions:
- the ability to comprehend; to understand and profit from experience
- a general mental capability that involves the ability to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn
- effectively perceiving, interpreting and responding to the environment
None of these tells us what intelligence is, but we can enumerate a list of abilities that an intelligence must have:
- perceive, reason, solve problems, learn/adapt, apply common sense, use analogy, recall, show intuition and emotional states, achieve self-awareness
Which of these are necessary for intelligence? Which are sufficient?
Artificial intelligence: should we define this in terms of human intelligence?

3. Physical Symbol System Hypothesis
A physical symbol system (PSS):
- consists of symbols (patterns) which are combinable into expressions
- has processes which operate on these symbols to create new symbols and expressions
English is a physical symbol system: the symbols are the alphabet, the expressions are words and sentences.
The PSS Hypothesis states that a PSS has the necessary and sufficient means for intelligent action:
- a computer is a PSS
- if the PSS Hypothesis is true, then it should be possible to program a computer to produce intelligent actions; this is the (or a) goal of AI
- if the PSS Hypothesis is not true, what are the consequences for AI?

4. The Turing Test
In 1950, Alan Turing devised a test for intelligence called the Imitation Game:
- ask questions of two entities and receive answers from both
- if you cannot tell which of the entities is human and which is a computer program, then you have been fooled, and we should therefore consider the computer to be intelligent
(Figure: questions go to both entities and answers come back from both. Which is the person? Which is the computer?)

5. Solving the Turing Test: Eliza
Created by Joseph Weizenbaum, this system would act like a Rogerian psychoanalyst by holding a dialog with a person:
- the dialog consists of the doctor (Eliza) asking questions, the human responding, and the doctor using the response to ask another question
- Weizenbaum wrote the program so that it would generate an English response/question based on a group of patterns
- if the user's sentence matched a pattern, that pattern would be used to generate the next sentence/question
A more recent, and much superior, version is called A.L.I.C.E.

6. Example Eliza Dialog
H: Men are all alike.
E: In what way?
H: They're always bugging us about something or another.
E: Can you be more specific?
H: Well, my boyfriend made me come here.
E: Your boyfriend made you come here?
H: He says I'm depressed much of the time.
E: I am sorry to hear you are depressed.
H: It's true. I am unhappy.
E: Do you think coming here will help you not to be unhappy?
H: I need some help, that much seems certain.
E: What would it mean to you if you got some help?
H: Perhaps I could learn to get along with my mother.
E: Tell me more about your family.

7. Eliza Rules
(?* ?x) hello (?* ?y) =>
   (How do you do. Please state your problem.)
(?* ?x) I want (?* ?y) =>
   (What would it mean if you got ?y) (Why do you want ?y) (Suppose you got ?y soon)
(?* ?x) if (?* ?y) =>
   (Do you really think its likely that ?y) (Do you wish that ?y) (What do you think about ?y) (Really-- if ?y)
(?* ?x) no (?* ?y) =>
   (Why not?) (You are being a bit negative) (Are you saying "NO" just to be negative?)
(?* ?x) I was (?* ?y) =>
   (Were you really?) (Perhaps I already knew you were ?y) (Why do you tell me you were ?y now?)
(?* ?x) I feel (?* ?y) =>
   (Do you often feel ?y ?)
(?* ?x) I felt (?* ?y) =>
   (What other feelings do you have?)
Eliza also has a rule to swap 'I' and 'my' to 'you' and 'your'.

8. Eliza Pattern Syntax
pat =>
   var            match any one expression to a variable
   constant       match against a constant (see below)
   segment-pat    match against a sequence
   single-pat     match against one expression
   (pat . pat)    match the first and the rest of a list
single-pat =>
   (?is var predicate)    test predicate on one expression
   (?or pat1 pat2 ...)    match on any of the patterns
   (?and pat1 pat2 ...)   match on every one of the patterns
   (?not pat)             match if expression does not match
segment-pat =>
   ((?* var) ...)     match on zero or more expressions
   ((?+ var) ...)     match on one or more expressions
   ((?? var) ...)     match zero or one expression
   ((?if expr) ...)   test if expression is true
var => ?chars          variables of the form ?name
constant => atom       symbols, numbers, chars
A sketch of this style of matching in code follows.
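To make the matching process concrete, here is a minimal Python sketch of Eliza-style rule matching. The rule set, the word-swap table, and the response templates are simplified stand-ins for the Lisp-style rules shown above, not the original program:

```python
import random
import re

# Each rule: (regex with an optional capture group, list of response templates).
# The %s in a template is filled with the captured segment, after swapping
# first- and second-person words (the 'I'/'my' rule mentioned above).
RULES = [
    (r".*\bhello\b.*",   ["How do you do. Please state your problem."]),
    (r".*\bI want (.*)", ["What would it mean if you got %s?",
                          "Why do you want %s?",
                          "Suppose you got %s soon."]),
    (r".*\bI feel (.*)", ["Do you often feel %s?"]),
    (r".*\bno\b.*",      ["Why not?", "You are being a bit negative."]),
]

SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def switch_viewpoint(text: str) -> str:
    """Swap first-person words for second-person ones, as Eliza does."""
    return " ".join(SWAPS.get(w.lower(), w) for w in text.split())

def respond(sentence: str) -> str:
    for pattern, templates in RULES:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            template = random.choice(templates)
            if "%s" in template:
                return template % switch_viewpoint(m.group(1))
            return template
    return "Please go on."   # default when no rule matches

print(respond("Well, I want to be happy"))
# e.g. -> "Why do you want to be happy?"
```

Note how shallow this is: the program never represents what "happy" means; it only reshuffles the matched text.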

9. ALICE Demo
Jerry: my name is jerry.
ALICE: Hey jerry.
Jerry: do you know Turing?
ALICE: A brilliant and tragic figure in the history of computer science.
Jerry: who else do you know?
ALICE: Hugh Loebner, Robby Garner, and Steven Spielberg.
Jerry: which film do you like?
ALICE: My favorite movie is Teknolust. Have you seen it?
Jerry: no.
ALICE: I would recommend you check it out.
Jerry: what is it about?
ALICE: Science fiction.
Jerry: do you remember my name?
ALICE: Your name is jerry, seeker.
Notice that unlike Eliza, ALICE has a memory and can handle references ("Have you seen it?" "no." "I would recommend you check it out.").

10. ALICE vs. Eliza
ALICE talking to ELIZA:
ELIZA: I am ELIZA.
ALICE: I am very pleased to meet you ELIZA.
ELIZA: How can I help you?
ALICE: Just keep talking to me as long as you can.
ELIZA: Do you wish to as long as me?
ALICE: OK, I will as long as you.
ELIZA: Do you wish to will as long as me?
ALICE: OK, I will will as long as you.
ELIZA: Do you wish to will will as long as me?
ALICE: OK, I will will will as long as you.
ELIZA: Do you wish to will will will as long as me?
ALICE: OK, I will will will will as long as you.
ELIZA: Do you wish to will will will will as long as me?
ALICE: OK, I will will will will will as long as you.
ELIZA: Do you wish to will will will will will as long as me?
...
Eliza gets stuck on the phrase "I will" and then ALICE gets stuck on the same phrase.

11. How Useful is the Turing Test?
With Eliza- or ALICE-like rules, we could eventually pass the Turing Test; it just takes writing enough rules.
Does the system understand what it is responding to?
- No. Neither Eliza nor ALICE understands the text; ALICE simply has better, more in-depth and wider-ranging rules.
We could instead build a representation that models some real-world domain and knowledge base:
- the system can fill in information from the conversation
- questions can be responded to by looking up the stored data
In this way, the system is responding not merely on "canned" knowledge, but on knowledge that it has "learned".
Does this imply that the system knows what it is discussing? What does it mean to know something?

12. Table-Lookup vs. Reasoning
Consider two approaches to programming a Tic-Tac-Toe player:
- Solution 1: a pre-enumerated list of best moves, one for each board configuration
- Solution 2: rules that evaluate the board configuration and generate the best move
Solution 1 is similar to how Eliza works; this is not practical for most types of problems.
Solution 2 reasons out the best move; such a player might even be able to "explain" why it chose the move it did.
We could (potentially) build a program that passes the Turing Test using table lookup, even though it would be a large undertaking. A sketch of the contrast appears below.
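A minimal sketch of the contrast in Python (the lookup table and the move-preference heuristic are illustrative, not a complete player):

```python
# Board: tuple of 9 cells, each 'X', 'O', or ' '; we play 'X'.

# Solution 1: table lookup - every board we might see must be pre-enumerated.
MOVE_TABLE = {
    (' ',) * 9: 4,                                      # empty board -> center
    ('O', ' ', ' ', ' ', 'X', ' ', ' ', ' ', ' '): 8,   # ... thousands more entries
}

def lookup_move(board):
    return MOVE_TABLE[board]    # fails on any board we didn't anticipate

# Solution 2: reasoning - rules evaluate the configuration.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def reasoned_move(board):
    # Rule 1: take a winning move if one exists.
    # Rule 2: otherwise block the opponent's winning move.
    for player in ('X', 'O'):
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count(' ') == 1:
                return (a, b, c)[cells.index(' ')]
    # Rule 3: otherwise prefer center, then corners, then edges.
    for cell in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[cell] == ' ':
            return cell

board = ('O', 'O', ' ', ' ', 'X', ' ', ' ', ' ', 'X')
print(reasoned_move(board))   # -> 2, blocking O's top-row win
```

The rules generalize to boards never seen before, and each choice can be traced back to the rule that produced it.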

13. The Chinese Room Problem
From John Searle, philosopher, in an attempt to demonstrate that computers cannot be intelligent.
The room consists of you, a book, a storage area (optional), and a mechanism for moving information into and out of the room:
- a Chinese-speaking individual provides a question for you in writing
- you are able to find a matching set of symbols in the book (and storage) and write a response, also in Chinese
(Figure: Question (Chinese) comes in to You; using the Book of Chinese Symbols and Storage, the Answer (Chinese) goes out.)

14. Chinese Room: An Analogy for a Computer
(Figure: User provides Input over the I/O pathway (bus) to the CPU (SAM), which produces Output; Memory holds the Program/Data (Script).)

15. Searle's Question
You were able to solve the problem of communicating with the person/user, and thus you (the room) pass the Turing Test.
But did you understand the Chinese messages being communicated?
- since you do not speak Chinese, you did not understand the symbols in the question, the answer, or the storage
- can we say that you actually used any intelligence?
By analogy, since you did not understand the symbols that you interacted with, neither does the computer understand the symbols that it interacts with (input, output, program code, data).
Searle defines two categories of AI:
- strong AI: the pursuit of machine intelligence
- weak AI: the pursuit of machines solving problems in an intelligent way

16. Where is the Intelligence Coming From?
The Systems Response:
- the hardware by itself is not intelligent, but the combination of the hardware, software, and storage is intelligent
- in a similar vein, we might say that a human brain that has had no opportunity to learn anything cannot be intelligent; it is just the hardware
The Robot Response:
- a computer is devoid of senses and therefore symbols are meaningless to it, but a robot with sensors can tie its symbols to its senses and thus understand symbols
The Brain Simulator Response:
- if we program a computer to mimic the brain (e.g., with a neural network), then the computer will have the same ability to understand as a human brain

17. Two AI Assumptions
We can understand and model cognition without understanding the underlying mechanism:
- it is the model of cognition that is important, not the physical mechanism that implements it
- if true, we should be able to create cognition (mind) out of a computer, a brain, or other devices, such as mechanical ones
- this is the assumption made by symbolic AI researchers
Cognition will emerge from the proper mechanism:
- the right device, fed with the right inputs, can learn and perform the problem solving that we, as observers, call intelligence
- cognition will arise as the result (or side effect) of the hardware
- this is the assumption made by connectionist AI researchers
Notice that while the two assumptions differ, they are not mutually exclusive, and both support the idea that cognition is computational.

18. A Brief History of AI: 1950s
- Computers were thought of as electronic brains
- The term "Artificial Intelligence" was coined by John McCarthy, who also created Lisp in the late 1950s
- Alan Turing defined intelligence as passing the Imitation Game (Turing Test)
- AI research largely revolved around toy domains; computers of the era didn't have enough power or memory to solve useful problems
- Problems being researched included: games (e.g., checkers), primitive machine translation, blocks world (planning and natural language understanding within the toy domain), early neural networks (the perceptron), and automated theorem proving and mathematics problem solving

19. The 1960s
- AI attempts to move beyond toy domains
- Syntactic knowledge alone does not work; domain knowledge is required
- Early machine translation could translate English to Russian ("the spirit is willing but the flesh is weak" came back as "the vodka is good but the meat is spoiled")
- Earliest expert system created: Dendral
- Perceptron research comes to a grinding halt when it is proved that a perceptron cannot learn the XOR operator
- US-sponsored research into AI targets specific areas, not including machine translation
- Weizenbaum creates Eliza to demonstrate the futility of AI

20. The 1970s
AI researchers address real-world problems and solutions through expert (knowledge-based) systems:
- medical diagnosis, speech recognition, planning, design
Uncertainty handling implemented:
- fuzzy logic, certainty factors, Bayesian probabilities
AI begins to get noticed due to these successes:
- AI research increases, AI labs sprout up everywhere
- AI shells (tools) are created, and AI machines become available for Lisp programming
Criticism: AI systems are too brittle, take too much time and effort to create, and do not learn.

21. 1980s: AI Winter
Funding dries up, leading to the AI Winter:
- too many expectations were not met
- expert systems took too long to develop and too much money to invest, and the results did not pay off
Neural networks to the rescue!
- expert systems took programming and dozens of man-years of effort to develop, but what if we could get the computer to learn how to solve the problem?
- multi-layered back-propagation networks got around the problems of perceptrons
- neural network research was heavily funded because it promised to solve the problems that symbolic AI could not
By 1990, funding for neural network research was slowly disappearing as well:
- neural networks had their own problems and largely could not solve a majority of the AI problems being investigated
- panic! How can AI continue without funding?

22. 1990s: ALife
The dumbest smart thing you can do is stay alive.
We start over: let's not create intelligence, let's just create "life" and slowly build towards intelligence. ALife is the lower bound of AI.
ALife includes:
- evolutionary learning techniques (genetic algorithms)
- artificial neural networks for additional forms of learning
- perception, motor control, and adaptive systems
- modeling the environment
Problems: genetic algorithms are useful in solving some optimization problems and some search-based problems, but not very useful for expert problems. Perceptual problems are among the most difficult being solved; progress is very slow.

23. Today: The New (Old) AI
AI researchers today are not doing "AI"; they are doing:
- intelligent agents, multi-agent systems/collaboration, ontologies
- machine learning and data mining
- adaptive and perceptual systems
- robotics, path planning
- search engines, filtering, recommendation systems
Areas of current research interest:
- NLU/information retrieval, speech recognition
- planning/design, diagnosis/interpretation
- sensor interpretation, perception, visual understanding, robotics
Approaches:
- knowledge-based, ontologies
- probabilistic (HMMs, Bayesian nets)
- neural networks, fuzzy logic, genetic algorithms

24. So What Does AI Do?
Most AI research has fallen into one of two categories.
Select a specific problem to solve:
- study the problem (perhaps how humans solve it)
- come up with the proper representation for any knowledge needed to solve the problem
- acquire and codify that knowledge
- build a problem-solving system
Select a category of problem or cognitive activity (e.g., learning, natural language understanding):
- theorize a way to solve the given problem
- build systems based on the model behind your theory as experiments
- modify as needed
Both approaches require:
- one or more representational forms for the knowledge
- some way to select the proper knowledge, that is, search

25. Knowledge Representations
One large distinction between an AI system and a normal piece of software is that an AI system must reason using worldly knowledge.
What types of knowledge?
- facts, axioms
- statements (which may or may not be true)
- rules, cases, experiences
- associations (which may not be truth-preserving)
- descriptions
- probabilities and statistics

26. Types of Representations
Early systems used either semantic networks or predicate calculus to represent knowledge, or used simple search spaces if the domain/problem had very limited amounts of knowledge (e.g., simple planning as in blocks world).
With the early expert systems in the 70s, a significant shift took place to production systems, which combined representation and process (chaining) and even uncertainty handling (certainty factors). Later, frames (an early version of OOP) were introduced.
Problem-specific approaches were introduced, such as scripts and CDs for language representation.
In the 1980s, there was a shift from rules to model-based approaches.
Since the 1990s, Bayesian networks and hidden Markov models have become popular.
First, we will take a brief look at some of the representations.

27. Search Spaces
Given a problem expressed as a state space (whether explicitly or implicitly), we formally define a search space as [N, A, S, GD]:
- N = the set of nodes or states of a graph
- A = the set of arcs (edges) between nodes that correspond to the steps in the problem (the legal actions or operators)
- S = a nonempty subset of N that represents start states
- GD = a nonempty subset of N that represents goal states
Our problem becomes one of traversing the graph from a node in S to a node in GD.
Example: 3 missionaries and 3 cannibals are on one side of a river with a boat that can take at most 2 people across:
- how can we move the 3 missionaries and 3 cannibals across the river such that the cannibals never outnumber the missionaries on either side of the river (lest the cannibals start eating the missionaries!)?

28. M/C Solution
We can represent a state as a 6-item tuple (a, b, c, d, e, f):
- a/b = number of missionaries/cannibals on the left shore
- c/d = number of missionaries/cannibals in the boat
- e/f = number of missionaries/cannibals on the right shore
where a + b + c + d + e + f = 6, a >= b (unless a = 0), c >= d (unless c = 0), and e >= f (unless e = 0).
Legal operations (moves) are:
- 0, 1, or 2 missionaries get into the boat
- 0, 1, or 2 missionaries get out of the boat
- 0, 1, or 2 cannibals get into the boat
- 0, 1, or 2 cannibals get out of the boat
- the boat sails from the left shore to the right shore
- the boat sails from the right shore to the left shore
A breadth-first search over this state space is sketched below.
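A minimal breadth-first search sketch in Python. It uses the common simplified state (missionaries on left, cannibals on left, boat on left?) rather than the 6-tuple above, and it folds "get in, sail, get out" into a single move:

```python
from collections import deque

def safe(m, c):
    """A shore is safe if missionaries are absent or not outnumbered."""
    return m == 0 or m >= c

def solve():
    # State: (missionaries on left, cannibals on left, boat on left?)
    start, goal = (3, 3, True), (0, 0, False)
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # (m, c) in the boat
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        sign = -1 if boat else 1    # boat carries people away from its shore
        for dm, dc in moves:
            nm, nc = m + sign * dm, c + sign * dc
            state = (nm, nc, not boat)
            if (0 <= nm <= 3 and 0 <= nc <= 3 and state not in seen
                    and safe(nm, nc) and safe(3 - nm, 3 - nc)):
                seen.add(state)
                frontier.append((state, path + [state]))

for step in solve():
    print(step)    # 12 states, i.e. 11 crossings, from (3,3,True) to (0,0,False)
```

Because BFS explores level by level, the first path found uses the fewest crossings.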

29. Search Spaces and Types of Search
The search space consists of all possible states of the problem as it is being solved.
A search space is often viewed as a tree and can very well contain an exponential number of nodes, making the search process intractable.
Search spaces might be pre-enumerated or generated during the search process.
Some search algorithms may search the entire space until a solution is found; others will only search parts of the space, possibly selecting where to search through a heuristic.
Search spaces include:
- game trees like the tic-tac-toe game
- decision trees (see the next slides)
- combinations of rules to select in a production system
- networks of various forms (see the next slides)
- other types of spaces

30. (figure-only slide)

31. (figure-only slide)

32. Search Algorithms and Representations
Search algorithms:
- breadth-first
- depth-first
- best-first (heuristic search)
- A*
- hill climbing
- limiting the number of plies
- minimax, alpha-beta pruning
- adding constraints
- genetic algorithms
- forward vs. backward chaining
Representations:
- production systems (if-then rules, predicate calculus rules, operators)
- semantic networks, frames, scripts
- knowledge groups
- models, cases
- agents
- ontologies

33. Relationships
We often know things about objects (whether physical or abstract). These objects have attributes (components, values) and/or relationships with other things.
One way to represent knowledge is to enumerate the objects and describe them through their attributes and relationships. Common forms of such relationship representations are:
- semantic networks: a network consists of nodes, which are objects and values, and edges (links/arcs), which are annotated to indicate how the nodes are related
- predicate calculus: predicates are often relationships, and the arguments of the predicates are objects
- frames: in essence, objects (as in object-oriented programming) where the attributes are the data members and the values are the specific values stored in those members; in some cases they are pointers to other objects

34. Representations With Relationships
(Figure: the same information represented using two different representational techniques, a semantic network (above) and predicates (to the left).)

35. Another Example: Blocks World
(Figure: a real-world situation of three blocks and a predicate calculus representation expressing this knowledge.)
We equip our system with rules, such as the one below, to reason over how to draw conclusions and manipulate this blocks world.
The rule says "if there does not exist a Y that is on X, then X is clear"; in predicate calculus form:
   for all X: (not exists Y: on(Y, X)) -> clear(X)
A sketch of this check in code follows.
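A minimal sketch of this rule over a three-block world (the block names and the on/2 facts are illustrative):

```python
# Facts: on(a, b) means block a is on top of block b.
blocks = {"a", "b", "c"}
on = {("a", "b"), ("b", "c")}    # a sits on b, b sits on c

def clear(x):
    """clear(X) holds if there does not exist a Y with on(Y, X)."""
    return not any(below == x for (_, below) in on)

for block in sorted(blocks):
    print(block, clear(block))   # only 'a' is clear: nothing rests on it
```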

36. Semantic Networks
Collins and Quillian were the first to use semantic networks in AI, storing in the network the objects and their relationships:
- their intention was to represent English sentences
- edges would typically be annotated with descriptors or relations such as:
  - isa: class/subclass
  - instance: the first object is an instance of the class
  - has: contains or has this as a physical property
  - can: has the ability to
  - made of, color, texture, etc.
(Figure: a semantic network representing the sentences "a canary can sing/fly", "a canary is a bird/animal", "a canary is a canary", and "a canary has skin". A sketch of inheritance over such a network follows.)
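A minimal sketch of property inheritance over such a network. The nodes and links follow the canary example; the traversal climbs isa links so that properties stored on a class are inherited by its subclasses:

```python
# Each node maps link types to values; 'isa' points to the superclass.
network = {
    "canary": {"isa": "bird", "can": ["sing"]},
    "bird":   {"isa": "animal", "can": ["fly"], "has": ["wings", "feathers"]},
    "animal": {"has": ["skin"], "can": ["move"]},
}

def holds(node, link, value):
    """Does 'node --link--> value' hold, directly or via isa inheritance?"""
    while node is not None:
        props = network.get(node, {})
        if value in props.get(link, []):
            return True
        node = props.get("isa")    # climb to the superclass
    return False

print(holds("canary", "can", "sing"))   # True: stored on canary itself
print(holds("canary", "can", "fly"))    # True: inherited from bird
print(holds("canary", "has", "skin"))   # True: inherited from animal
```

Storing "has skin" once on animal rather than on every species is exactly the economy Collins and Quillian were after.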

37. Frames
The semantic network requires a graph representation, which may not be a very efficient use of memory. Another representation is the frame:
- the idea behind a frame was originally that it would represent a "frame of memory", for instance by capturing the objects and their attributes for a given situation or moment in time
- a frame would contain slots, where a slot could contain:
  - identification information (including whether this frame is a subclass of another frame)
  - relationships to other frames
  - descriptors of this frame
  - procedural information on how to use this frame (code to be executed)
  - defaults for slots
  - instance information (or an identification of whether the frame represents a class or an instance)

38. Frame Example
(Figure: a partial frame representing a hotel room. The room contains a chair, bed, and phone, where the bed contains a mattress and a bed frame (not shown).)
A sketch of such frames in code follows.
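A minimal sketch of frames with slots, defaults, and isa inheritance. The hotel-room slot names are illustrative, following the figure:

```python
class Frame:
    def __init__(self, name, isa=None, **slots):
        self.name, self.isa, self.slots = name, isa, slots

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default."""
        if slot in self.slots:
            return self.slots[slot]
        if self.isa is not None:
            return self.isa.get(slot)
        return None

room       = Frame("room", doors=1, use="sleeping")
hotel_room = Frame("hotel_room", isa=room,
                   contains=["hotel_chair", "hotel_bed", "hotel_phone"])
room_114   = Frame("room_114", isa=hotel_room)   # an instance frame

print(room_114.get("contains"))   # from the hotel_room class frame
print(room_114.get("doors"))      # default inherited from room -> 1
```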

39. Production Systems
A production system comprises:
- a set of rules (if-then or condition-action statements)
- working memory: the current state of the problem solving, which includes new pieces of information created by previously applied rules
- an inference engine (the author calls this a "recognize-act" cycle): forward chaining, backward chaining, a combination, or some other form of reasoning such as a sponsor-selector or an agenda-driven scheduler
- a conflict resolution strategy: when it comes to selecting a rule, there may be several applicable rules; which one should we select? The choice may be based on a strategy such as "first rule", "most specific rule", "most salient rule", "rule with most actions", "random", etc.

40. Chaining
The idea behind a production system's reasoning is that rules describe steps in the problem-solving space, where a rule might:
- be an operation in a game, like a chess move
- translate a piece of input data into an intermediate conclusion
- piece together several intermediate conclusions into a specific conclusion
- translate a goal into substeps
So a solution using a production system is a collection of rules that are chained together:
- forward chaining: reasoning from data to conclusions, where working memory is searched for conditions that match the left-hand sides of the given rules
- backward chaining: reasoning from goals to operations, where an initial goal is unfolded into the steps needed to solve that goal; that is, the process is one of subgoaling
A forward-chaining sketch appears below.
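A minimal forward-chaining sketch. The rules and facts are illustrative; a rule fires when all of its conditions are in working memory, adding its conclusion, and the cycle repeats until nothing new can be derived:

```python
# Each rule: (frozenset of conditions, conclusion).
rules = [
    (frozenset({"has feathers"}),          "is a bird"),
    (frozenset({"is a bird", "can swim"}), "may be a penguin"),
    (frozenset({"is a bird"}),             "lays eggs"),
]

working_memory = {"has feathers", "can swim"}   # the initial data

changed = True
while changed:                      # the recognize-act cycle
    changed = False
    for conditions, conclusion in rules:
        # Fire any rule whose left-hand side matches working memory.
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)
            changed = True

print(working_memory)
# -> also contains 'is a bird', 'lays eggs', 'may be a penguin'
```

Backward chaining runs the same rules in reverse: start from a goal such as "may be a penguin" and recursively seek rules whose conclusion matches it, turning their conditions into subgoals.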

41. Two Example Production Systems (figure-only slide)

42. Example System: Water Jugs
Problem: given a 4-gallon jug (X) and a 3-gallon jug (Y), fill X with exactly 2 gallons of water. Assume an infinite amount of water is available.
Rules/operators:
1. If X = 0 then X = 4 (fill X)
2. If Y = 0 then Y = 3 (fill Y)
3. If X > 0 then X = 0 (empty X)
4. If Y > 0 then Y = 0 (empty Y)
5. If X + Y >= 3 and X > 0 then X = X - (3 - Y) and Y = 3 (fill Y from X)
6. If X + Y >= 4 and Y > 0 then X = 4 and Y = Y - (4 - X) (fill X from Y)
7. If X + Y <= 3 and X > 0 then Y = X + Y and X = 0 (empty X into Y)
8. If X + Y <= 4 and Y > 0 then X = X + Y and Y = 0 (empty Y into X)
In rules 5 and 6 the right-hand sides use the values of X and Y from before the move (the updates are simultaneous). Rule numbers are used on the next slide. A search sketch over these operators follows.
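A minimal search sketch over these operators. States are (X, Y) pairs; each move implements one numbered rule above, with simultaneous-update semantics for rules 5 and 6:

```python
from collections import deque

def moves(x, y):
    """Yield (rule number, successor state) for each applicable rule."""
    if x == 0:               yield 1, (4, y)              # fill X
    if y == 0:               yield 2, (x, 3)              # fill Y
    if x > 0:                yield 3, (0, y)              # empty X
    if y > 0:                yield 4, (x, 0)              # empty Y
    if x + y >= 3 and x > 0: yield 5, (x - (3 - y), 3)    # fill Y from X
    if x + y >= 4 and y > 0: yield 6, (4, y - (4 - x))    # fill X from Y
    if x + y <= 3 and x > 0: yield 7, (0, x + y)          # empty X into Y
    if x + y <= 4 and y > 0: yield 8, (x + y, 0)          # empty Y into X

def solve(start=(0, 0), goal_x=2):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal_x:
            return path
        for rule, state in moves(x, y):
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [(rule, state)]))

print(solve())
# -> [(1, (4, 0)), (5, (1, 3)), (4, (1, 0)), (7, (0, 1)), (1, (4, 1)), (5, (2, 3))]
```

Read the result as: fill X, pour X into Y, empty Y, pour X into Y, fill X, pour X into Y, leaving exactly 2 gallons in X.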

43. Conflict Resolution Strategies
In a production system, what happens when more than one rule matches? A conflict resolution strategy dictates how to select from among the multiple matching rules.
Simple conflict resolution strategies include:
- random
- first match
- most/least recently matched rule
- the rule which has matched for the longest/shortest number of cycles (refractoriness)
- most salient rule (each rule is given a salience before you run the production system)
More complex resolution strategies might select the rule with the most/least conditions (specificity/generality) or the most/least actions (the biggest/smallest change to the state). A sketch of a few of these appears below.
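A minimal sketch of several strategies applied to one conflict set (the rule structure and names are illustrative):

```python
import random

# A matched rule: (name, salience, conditions, actions).
conflict_set = [
    ("r1", 10, ["bird"],                   ["add: lays eggs"]),
    ("r2", 50, ["bird", "cannot fly"],     ["add: flightless"]),
    ("r3", 20, ["bird", "swims", "cold"],  ["add: penguin", "add: flightless"]),
]

first_match   = conflict_set[0]
most_salient  = max(conflict_set, key=lambda r: r[1])       # highest salience
most_specific = max(conflict_set, key=lambda r: len(r[2]))  # most conditions
chosen_random = random.choice(conflict_set)

print(first_match[0], most_salient[0], most_specific[0])
# -> r1 r2 r3
```

The strategy matters: the same rule base can behave quite differently under "first match" versus "most specific".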

44. MYCIN
By the early 1970s, the production system approach was found to be more than adequate for constructing large-scale expert systems:
- in 1971, researchers at Stanford began constructing MYCIN, a medical diagnostic system
- it contained a very large rule base and used backward chaining
- to deal with the uncertainty of medical knowledge, it introduced certainty factors (sort of like probabilities)
- in 1975, it was tested against medical experts and performed as well as or better than the doctors it was compared to
Example rule:
   (defrule 52
     if (site culture is blood)
        (gram organism is neg)
        (morphology organism is rod)
        (burn patient is serious)
     then .4
        (identity organism is pseudomonas))
That is: if the culture was taken from the patient's blood, and the gram stain of the organism is negative, and the morphology of the organism is rods, and the patient is a seriously burned patient, then conclude that the identity of the organism is pseudomonas (with .4 certainty).

45. MYCIN in Operation
MYCIN's process starts with the goal "diagnose-and-treat":
repeat
- identify all rules that can provide the conclusion currently sought, i.e., match right-hand sides (search for rules whose right-hand sides match anything in working memory)
- use conflict resolution to identify a single rule
- fire that rule, which may:
  - find and remove a piece of knowledge which is no longer needed
  - find and modify a piece of knowledge now that more specific information is known
  - add a new subgoal (left-hand side conditions that need to be proved)
until the action "done" is added to working memory
MYCIN would first identify the illness, possibly ordering more tests to be performed, and then, given the illness, generate a treatment. MYCIN consisted of about 600 rules. A sketch of how its certainty factors combine follows.
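Certainty factors combine by simple formulas rather than strict probability. A minimal sketch, assuming the standard MYCIN combination rules (take the minimum CF across a rule's premises, scale by the rule's own CF, and merge positive parallel evidence with cf1 + cf2 * (1 - cf1)); the second rule here is hypothetical, added only to show the merge:

```python
def rule_cf(premise_cfs, rule_strength):
    """CF contributed by one rule: weakest premise scaled by the rule's CF."""
    return min(premise_cfs) * rule_strength

def combine(cf1, cf2):
    """Merge two positive CFs for the same conclusion from parallel rules."""
    return cf1 + cf2 * (1 - cf1)

# Rule 52 above: four premises believed with these certainties, rule CF .4
cf_a = rule_cf([1.0, 0.8, 0.9, 1.0], 0.4)   # -> 0.32
# A second (hypothetical) rule also concluding pseudomonas, with CF .3
cf_b = rule_cf([0.7, 1.0], 0.3)             # -> 0.21

print(round(combine(cf_a, cf_b), 3))        # -> 0.463
```

Note how the merge only ever pushes the belief toward 1; independent pieces of weak evidence accumulate.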

46. R1/XCON
Another success story is DEC's R1, later renamed XCON. This system would take customer orders and configure specific VAX computers for those orders, including completing the order if it was incomplete:
- how the various components (drive and tape units, motherboard(s), etc.) would be placed inside the mainframe cabinet
- how the wiring would take place among the various components
R1 performed forward chaining over about 10,000 rules:
- over a 6-year period, it configured some 80,000 orders with a 95-98% accuracy rating
- ironically, whereas planning/design is usually viewed as a backward-chaining task, R1 used forward chaining because, in this particular case, the problem is data driven, starting with the user's input of the computer system's specifications
- R1's solutions were similar in quality to human solutions

47. R1 Sample Rules
Constraint rules:
- if the device requires a battery, then select a battery for the device
- if selecting a battery for the device, then pick a battery with voltage(battery) = voltage(device)
Configuration rules:
- if we are in the floor-plan stage, and there is space for a power supply, and there is no power supply available, then add a power supply to the order
- if the step is "configuring, propose alternatives", and there is an unconfigured device, and no container was chosen, and no other device that can hold it was chosen, and selecting a container wasn't proposed yet, and no problems for selecting containers were identified, then propose selecting a container
- if the step is distributing a massbus device, and there is a single-port disk drive that has not been assigned to a massbus, and there are no unassigned dual-port disk drives, and the number of devices that each massbus should support is known, and there is a massbus that has been assigned at least one disk drive and that should support additional disk drives, and the type of cable needed to connect the disk drive is known, then assign the disk drive to this massbus

48. Strong Slot-and-Filler Structures
To avoid the difficulties with frames and nets, Schank and Rieger offered two network-like representations that would have implied uses and built-in semantics: conceptual dependencies and scripts.
The conceptual dependency (CD) was derived as a form of semantic network with specific types of links for representing specific pieces of information in English sentences:
- the action of the sentence
- the objects affected by the action or that brought about the action
- modifiers of both actions and objects
They defined 11 primitive actions, called ACTs:
- every possible action can be categorized as one of these 11
- an ACT forms the center of the CD, with links attaching the objects and modifiers

49. Example CD
The sentence is "John ate the egg".
- the INGEST act means to ingest an object (eat, drink, swallow)
- the P above the double arrow indicates past tense
- the INGEST action must have an object (the O indicates the object, Egg) and a direction (the object went from John's mouth to John's insides)
- we might infer that it was "an egg" instead of "the egg", as there is nothing specific to indicate which egg was eaten
- we might also infer that John swallowed the egg whole, as there is nothing to indicate that John chewed the egg!

50. The CD Theory ACTs
(Figure: the list of the 11 primitive ACTs.)
Is this list complete? What actions are missing?
Could we reduce this list to make it more concise? Other researchers have developed other lists of primitive actions, including one of just 3: physical actions, mental actions, and abstract actions.

51. Example CD Links (figure-only slide)

52. Example CDs (figure-only slide)

53. More Examples (figure-only slide)

54. Complex Example
The sentence is "John prevented Mary from giving a book to Bill".
- this sentence has two ACTs, DO and ATRANS
- DO was not in the list of 11 but can be thought of as "caused to happen"
- the c/ means a negative conditional; in this case it means that John caused this not to happen
- the ATRANS is a giving relationship, with the object being a Book and the transfer going from Mary to Bill: "Mary gave a book to Bill"
- as with the previous example, there is no way of telling whether it is "a book" or "the book"

55. Scripts
The other structured representation developed by Schank (along with Abelson) is the script:
- a description of the typical actions that are involved in a typical situation
- they defined a script for going to a restaurant
- scripts provide an ability for default reasoning when no information is available that directly states that an action occurred
- so we may assume, unless otherwise stated, that a diner at a restaurant was served food, that the diner paid for the food, and that the diner was served by a waiter/waitress
A script would contain:
- entry condition(s) and results (exit conditions)
- actors (the people involved)
- props (physical items at the location used by the actors)
- scenes (individual events that take place)
The script would use the 11 ACTs from CD theory. A sketch of such a structure in code follows.
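A minimal sketch of a script as a data structure. The restaurant fields are illustrative, following the slide; "default reasoning" here simply means assuming every unstated script event occurred unless contradicted:

```python
from dataclasses import dataclass

@dataclass
class Script:
    name: str
    entry_conditions: list
    results: list
    actors: list
    props: list
    scenes: list    # each scene: (scene name, list of events)

restaurant = Script(
    name="restaurant",
    entry_conditions=["customer is hungry", "customer has money"],
    results=["customer is not hungry", "customer has less money"],
    actors=["customer", "waiter", "cook", "cashier"],
    props=["table", "menu", "food", "check", "money"],
    scenes=[
        ("entering", ["customer enters", "customer sits at table"]),
        ("ordering", ["waiter brings menu", "customer orders food"]),
        ("eating",   ["cook prepares food", "waiter serves food",
                      "customer eats food"]),
        ("exiting",  ["waiter brings check", "customer pays cashier",
                      "customer leaves"]),
    ],
)

def assumed_events(script, stated, contradicted=()):
    """Default reasoning: assume unstated script events happened."""
    return [e for _, events in script.scenes for e in events
            if e not in stated and e not in contradicted]

# A story says only that the customer ordered and left; the script lets
# us assume the customer was served food and paid, among other events.
print(assumed_events(restaurant, stated={"customer orders food",
                                         "customer leaves"}))
```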

56. Restaurant Script
(Figure: the restaurant script.)
The script does not contain atypical actions, although there are options, such as whether the customer was pleased or not. There are multiple paths through the scenes to make for a robust script.
What would a "going to the movies" script look like? Would it have similar props, actors, and scenes? How about "going to class"?

57. Knowledge Groups
One of the drawbacks of the knowledge representations demonstrated thus far is that all knowledge is grouped into a single, large collection of representations:
- the rules taken as a whole, for instance, do not denote which rules should be used in which circumstances
Another approach is to divide the representations into logical groupings:
- this permits easier design, implementation, testing, and debugging, because you know what each particular group is supposed to do and what knowledge should go into it
- it should be noted that by distributing the knowledge, we might use different problem-solving agents for each set of knowledge, so that the knowledge is stored using different representations

58. Knowledge Sources and Agents
This leads us to the idea of having multiple problem-solving agents:
- each agent is responsible for solving some specialized type of problem(s) and knows where to obtain its own input
- each agent has its own knowledge sources, some internal, some external
- since external agents may have their own forms of representation, the agent must know:
  - how to find the proper agents
  - how to properly communicate with these other agents
  - how to interpret the information that it receives from these agents
  - how to recover from a situation where the expected agent(s) is/are not available

59. What is an Agent?
Agents are interactive problem solvers that have these properties:
- situated: the agent is part of the problem-solving environment; it can obtain its own input from its environment and it can affect its environment through its output
- autonomous: the agent operates independently of other agents and can control its own actions and internal states
- flexible: the agent is both responsive and proactive; it can go out and find what it needs to solve its problem(s)
- social: the agent can interact with other agents, including humans
Some researchers also insist that agents have:
- mobility: the ability to move from their current environment to a new environment (e.g., migrate to another processor)
- delegation: the ability to hand off portions of the problem to other agents
- cooperation: if multiple agents are tasked with the same problem, can their solutions be combined?