
1. The Neuroidal Model by Leslie Valiant
Nishanth Dikkala
March 13, 2020

2. Goal
A feasible computational model of the brain which also performs learning
A theory to explain some fundamental capabilities of the brain
Memorization
A central challenge: a concrete formulation
Three aspects to formalize:
Functions to be computed
Model of computation
Algorithms
The proposed model should satisfy biological constraints

3. Some Biological Constraints
Pyramidal cells: ~no new neurons after birth
It is believed that no new connections are made among neurons during a lifetime
The primary mechanism for learning is weight change
Certain areas are designated as sensory areas, which process information from the senses
Highly uniform physiology of the cortex
Some other quantitative constraints...

4. The Model of Computation

5. The Neuroidal Model – Neuroidal Tabula Rasa (NTR)
G – a directed graph G = (V, E), |V| = n
Each vertex represents a neuroid
W = set of edge weights
X = set of modes for a neuroid; a mode consists of a state and a threshold T
δ = mode update function
λ = weight update function
G can have bidirectional edges.

6. Working of a Neuroid
Timing: discrete time steps 0, 1, 2, ...
All neuroids have synchronized identical clocks (not too unrealistic biologically)
Activation: δ (mode update), λ (weight update)
Firing: neuroid i fires at step t+1 if w_i ≥ T_i, where w_i is the total weight on edges from neighbors firing at step t
Firing is instantaneous and stops by the end of the next time step
States are denoted by strings in algorithms, e.g. AR = Available Relay
On firing, 'F' is appended at the end, i.e. AR → ARF
Firing transitions (e.g. AR → ARF) always happen if the threshold condition is met.
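The synchronous firing rule above can be sketched in a few lines of Python. This is an illustrative toy, not code from the presentation: the `Neuroid` class and `step` function are assumed names, and only the threshold/firing mechanics are modeled.

```python
# Minimal sketch of one synchronous time step in the neuroidal model.
# A neuroid fires at step t+1 iff the total incoming weight from
# neighbors firing at step t meets its threshold; on firing, 'F' is
# appended to its state string (e.g. AR -> ARF).
from dataclasses import dataclass, field

@dataclass
class Neuroid:
    state: str                                   # e.g. "AM", "AR", ...
    threshold: float
    weights: dict = field(default_factory=dict)  # incoming weights by source id
    firing: bool = False

def step(neuroids):
    """One synchronized step over a dict id -> Neuroid."""
    firing_now = {i for i, n in neuroids.items() if n.firing}
    next_firing = {}
    for i, n in neuroids.items():
        w_in = sum(w for j, w in n.weights.items() if j in firing_now)
        next_firing[i] = w_in >= n.threshold
    for i, n in neuroids.items():
        n.firing = next_firing[i]                # firing lasts one step
        if n.firing and not n.state.endswith("F"):
            n.state += "F"                       # e.g. AR -> ARF
```

For example, a neuroid with threshold 2 receiving weight-1 edges from two currently firing neuroids fires on the next step, while its idle neighbors do not.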

7. The Neuroidal Model
Given the above, two things remain:
Initial Conditions (IC): the starting weights, states and thresholds of all neuroids
Input Sequence (IS): peripherals which connect to external sensors; these connect to certain base neuroids
The IS specifies the timing of firing of the base neuroids controlled by the peripherals

8. The Functions to be Computed

9. Formalizing Cognitive Function
Many different schools of thought
Difficult to describe total behavior in full complexity; difficult to decompose it into simpler constituents
A mathematical description of mental faculties is a centuries-old challenge
Boolean algebra is a useful starting point:
Discrete representations (the actual implementation is "fuzzy/soft")
Conjunctions, disjunctions, DNFs

10. Formalizing Cognitive Function: Knowledge Representation
Positive Knowledge Representations:
Each neuroid corresponds to a semantic item; several neuroids represent a single item
Only those new items which are experienced are added to memory
Hierarchical – high-level items are represented as Boolean functions of lower-level items, e.g. yellow car
Graded – only approximate Boolean functions are represented
Semantic item ↔ set of neuroids (r per item)

11. Desired Properties from G
What do we need from G to enable a viable system for cognitive functions under this knowledge representation model?
All algorithms in the model are vicinal: communication between two items not directly connected happens via common neighbors
Frontier: F(A, B) = the set of nodes connected to at least one neuroid in A and at least one neuroid in B – will be used to represent A ∧ B
Three properties are required:
Non-empty frontiers
Hashing property: the frontier is not already allocated to another item
A sufficiently large set of relay nodes: for any two items we want some nodes which connect them in both directions

12. Desired Properties from G
All three can be achieved approximately using random graphs when the edge probability p is chosen suitably (e.g. p ≈ 1/√(rn)).
A directed random graph with the three required properties:
Sufficiently large frontier: E[|F(A, B)|] ≈ r
Hashing: w.h.p., nodes in F(A, B) aren't already assigned to some other item
Sufficiently large set of relay nodes: given A and B, enough nodes connecting them in both directions
Pristine conditions assumptions:
The number of neuroids representing an item is exactly r
Edges towards neuroids not yet allocated are present with equal probability, independently.

13. Desired Properties from G
Sufficiently large frontier property:
Given any node i not in A or B, Pr[i ∈ F(A, B)] = (1 − (1 − p)^r)^2, so E[|F(A, B)|] = (n − 2r)(1 − (1 − p)^r)^2 ≈ n(rp)^2 for small rp.
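The frontier-size calculation can be checked numerically. This sketch assumes the pristine-conditions model from the previous slide (edges present i.i.d. with probability p); the function names and the choice p = (rn)^(-1/2) are illustrative.

```python
# Expected frontier size under pristine conditions, plus a Monte Carlo check.
# A candidate node (outside A and B) is in F(A, B) iff it has at least one
# incoming edge from each of the two r-node item sets.
import random

def expected_frontier(n, r, p):
    q = 1.0 - (1.0 - p) ** r          # P(>= 1 edge from a set of r nodes)
    return (n - 2 * r) * q * q

def sample_frontier(n, r, p, rng):
    """Draw one random graph's worth of edges and count frontier nodes."""
    count = 0
    for _ in range(n - 2 * r):
        from_a = any(rng.random() < p for _ in range(r))
        from_b = any(rng.random() < p for _ in range(r))
        count += from_a and from_b
    return count
```

With p = 1/√(rn) the expected frontier size works out to roughly r, matching the "sufficiently large frontier" requirement (e.g. n = 10000, r = 30 gives an expectation of about 28).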

14. Formalizing Cognitive Functions: Classifying Learning Tasks
Two axes: supervised vs. unsupervised, and memorization/deductive learning vs. inductive learning:
Memorization and Deductive Learning: Supervised Memorization, Unsupervised Memorization
Inductive Learning: Supervised Inductive Learning, Unsupervised Inductive Learning

15. Algorithms

16. Unsupervised Memorization
Informal task: following the presentation of an input, remember it so that the neuroidal system recognizes future presentations of the same input.
Main challenge – storage allocation
Formal task: an input with semantic attributes represented in the NTR by neuroid sets A and B. Allocate a new set of neuroids C that corresponds to A ∧ B and that in future fires iff all of A and B fire.
Start simple: allocate the frontier F(A, B).

17. Unsupervised Memorization
First attempt:
All nodes in the frontier start in state Available Memory (AM); incoming weights are all 1, the threshold is infinite.
STEP 0: Prompt A and B together: each AM node with total input ≥ 2 changes state and is allocated to C.

18. Unsupervised Memorization
STEP 0: Prompt A and B together: each AM node with total input ≥ 2 changes state.
Issue: a node connected to 2 neuroids of A and 0 of B will also change state. ⇒ redundancy

19. Unsupervised Memorization
Algorithm without redundancy:
STEP 0: Prompt A: each AM node receiving at least one input from A moves to an intermediate state AM1.
STEP 1: Prompt B: each AM1 node receiving at least one input from B is allocated to C, setting its weights and threshold so that in future it fires iff all of A and B fire.
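The two-step allocation can be illustrated with a toy simulation. This is a hedged sketch, not Valiant's exact pseudocode: the state names (AM, AM1, OP) follow the slides loosely, and the function and argument names are assumptions.

```python
# Toy sketch of redundancy-free allocation: prompt A, then B.
# Only candidate nodes that see input from *both* prompts are allocated,
# which rules out a node with 2 edges from A and 0 from B.
def join_allocate(candidates, edges_from_a, edges_from_b):
    """candidates: dict node -> state (initially "AM").
    edges_from_a/b: dict node -> number of incoming edges from item A / B."""
    # STEP 0: prompt A; AM nodes with >= 1 input from A move to AM1.
    for v, s in candidates.items():
        if s == "AM" and edges_from_a.get(v, 0) >= 1:
            candidates[v] = "AM1"
    # STEP 1: prompt B; AM1 nodes with >= 1 input from B are allocated (OP).
    for v, s in candidates.items():
        if s == "AM1" and edges_from_b.get(v, 0) >= 1:
            candidates[v] = "OP"
    return [v for v, s in candidates.items() if s == "OP"]
```

A node with two edges from A but none from B ends in AM1, not OP, so the single-step redundancy from the previous slide does not arise.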

20. Supervised Memorization
Task: given the presentation of an input and a label for the input, associate the neuroids representing the input with the neuroids representing the label.
Formal task: given an input with semantic attributes represented in the NTR by a set A, and a label represented by a set L, associate A with L. In future, whenever all of A fire, L must fire.

21. Supervised Memorization
First approach:
Assume relay nodes have weight 1 and threshold 1, so relays activate instantaneously.
STEP 0: Prompt A: the relay nodes between A and L fire.
STEP 1: Prompt L: the neuroids of L update their weights on edges from the relays that just fired.
Issue: other nodes connected to the relay nodes can trigger them in future. (Small probability of occurring.)

22. Supervised Memorization
Robust approach – use bidirectional edges.
Each relay node starts in state Available Relay (AR) with initial threshold T = ∞; incoming weights are all 1, outgoing weights all 0.
STEP 0: Prompt …
STEP 1: Prompt …

23. Supervised Memorization
STEP 2: Prompt …
STEP 3: Prompt … (required as a separate step due to the threshold update)
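The relay mechanism behind supervised memorization can be sketched as a toy simulation. This is an assumption-laden illustration, not Valiant's four-step algorithm: it only shows the end effect that dedicated relays between A and L forward A's firing so that L fires; all names and weight settings here are hypothetical.

```python
# Toy sketch of relay-based association: relays from A to L are dedicated
# (outgoing weight raised from 0 to 1); thereafter firing A drives the
# relays, whose combined input makes an L-neuroid fire.
def dedicate_relays(relays, a_nodes, l_nodes):
    """Keep only relay edges whose source is in A and target is in L,
    giving each outgoing weight 1 (all others stay at 0)."""
    dedicated = []
    for src, dst in relays:
        if src in a_nodes and dst in l_nodes:
            dedicated.append({"src": src, "dst": dst, "out_weight": 1})
    return dedicated

def l_node_fires(dedicated, firing, dst, threshold=1):
    """An L-neuroid fires if relay input from firing A-nodes meets its threshold."""
    total = sum(r["out_weight"] for r in dedicated
                if r["dst"] == dst and r["src"] in firing)
    return total >= threshold
```

After dedication, prompting all of A makes the label neuroid fire, while nothing fires when A is silent.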

24. Supervised Inductive Learning
Through examples, extrapolate to unseen inputs.
The PAC-learning model:
The answer is an approximation to the correct answer most of the time
The probability of correctness and the accuracy improve with more samples
A simple example: learning conjunctions
Given Boolean variables x_1, ..., x_n, the true function f is a conjunction of a subset of them (e.g. x_1 ∧ x_3).
Input: labelled examples (x, f(x)) (no-noise setting)
Output: a learnt hypothesis h such that, w.p. > 9/10, h errs on only a small fraction of inputs.

25. Supervised Inductive Learning
Simple elimination algorithm:
On the first positive example, perform the analog of supervised memorization to remember all items involved in the example. The nodes are in state SC (Supervised Conjunction) after this step.
On every further positive example, perform the following:
STEP: Prompt the example: each SC node zeroes the weights on edges from items that did not fire, eliminating them from the conjunction.
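Abstracting away the neuroids, the elimination algorithm itself is short. The sketch below shows the standard version over bit-vectors (the function name and representation by variable indices are illustrative): start from the conjunction of all variables and drop any variable that is 0 in some positive example.

```python
# Elimination algorithm for learning a monotone conjunction from
# positive examples: any variable that is 0 in a positive example
# cannot belong to the true conjunction, so it is eliminated.
def learn_conjunction(positive_examples, n_vars):
    """positive_examples: iterable of 0/1 tuples of length n_vars.
    Returns the set of variable indices in the learnt conjunction."""
    hypothesis = set(range(n_vars))
    for x in positive_examples:
        hypothesis -= {i for i in hypothesis if x[i] == 0}
    return hypothesis
```

For instance, with true function x_0 ∧ x_2 over four variables, the positive examples (1,1,1,0) and (1,0,1,1) eliminate x_3 and x_1, leaving exactly {0, 2}.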

26. Supervised Inductive Learning
Learning linear threshold functions

27. Supervised Inductive Learning
Perceptron algorithm:
Start with w_i = 0 for all i. Threshold 0.
Given an example x and label y ∈ {−1, +1}: if sign(w · x) agrees with y then do nothing, else w ← w + y·x.
Easy to implement using neuroids.
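The perceptron update rule above can be sketched directly (the function name and the fixed number of passes are illustrative choices, not from the slides):

```python
# Standard perceptron with zero initial weights and threshold 0:
# predict sign(w . x); on a mistake on (x, y), update w <- w + y*x.
def perceptron(examples, n, epochs=10):
    """examples: list of (x, y) with x a length-n tuple and y in {-1, +1}."""
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
            if pred != y:                       # mistake-driven update
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w
```

On linearly separable data the mistake-driven updates converge to a separating weight vector, which is what makes the rule easy to realize with neuroidal weight updates.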