

Presentation Transcript

1. – 5/4/09 – Maximilian Riesenhuber (http://maxlab.neuro.georgetown.edu). CT2WS at Georgetown: Letting your brain be all that it can be

2. The underlying computational model of object recognition in cortex: Feedforward and fast. Riesenhuber and Poggio, Nature Neuroscience, 1999 & 2000; Freedman et al., Science, 2001; Riesenhuber & Poggio, CONB, 2002; Freedman et al., J Neurosci, 2003; Jiang et al., Neuron, 2006; Jiang et al., Neuron, 2007; Glezer et al., Neuron, 2009. Prediction: Feed-forward processing allows object recognition even at ultra-short presentation times.

3. Experimental Paradigm: Ultrashort image presentation to probe temporal limits of object detection. Trial sequence: Fixation, Stimulus (Target/Distractor), Response. Behavioral performance at ceiling. [Figure: trial timeline; durations shown: 500, 17/33, 500, max 4000; axis: Time (ms).]

4. So, how does the brain do it? And how can we visualize the results?

5. Mean target signals

6. Distractor signals

7. Mean differential signals (targets − distractors)

8. Plot signals as color image
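
Slides 5–8 describe a simple analysis pipeline: average the EEG epochs for targets and for distractors, subtract the two means, and render the result as a channels × time color image. A minimal sketch of that pipeline in Python (array shapes, variable names, and the random placeholder data are assumptions, not the original analysis code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder epoch arrays: (n_trials, n_channels, n_timepoints).
rng = np.random.default_rng(0)
target_epochs = rng.standard_normal((100, 64, 300))
distractor_epochs = rng.standard_normal((100, 64, 300))

mean_target = target_epochs.mean(axis=0)          # slide 5: mean target signals
mean_distractor = distractor_epochs.mean(axis=0)  # slide 6: distractor signals
differential = mean_target - mean_distractor      # slide 7: targets - distractors

# Slide 8: render the differential signal as a channels x time color image.
plt.imshow(differential, aspect="auto", cmap="RdBu_r")
plt.xlabel("Time (samples)")
plt.ylabel("Channel")
plt.colorbar(label="Mean differential amplitude")
plt.show()
```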

9. EEG signals correspond well to the model

10. EEG signals correspond well to the model

11. EEG signals correspond well to the model

12. Putative feedback signals from task circuits provide additional information to read out decision-related signals. [Figure label: Feedback.]

13. Goal 1: Use hybrid brain-machine system to better exploit the brain’s temporal processing bandwidth and avoid response bottlenecks. Identify robust target-related signals in high-rate RSVP streams.

14. From single images to RSVP. Paradigm: Rapid Serial Visual Presentation (RSVP) at 12 Hz (83 msec per image). Two targets are presented, the second at a specific lag after the first (here: Lag 2), embedded within a stream of distractors. Report the number of targets at the end of the stream. [Figure: example stream over time, D D T1 D T2 D; D – distractors, T1 – 1st target, T2 – 2nd target.]
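
To make the paradigm concrete, here is a minimal sketch of how such an RSVP stream could be scheduled (timings from the slide; the function name, item count, and target position are illustrative assumptions):

```python
def build_rsvp_stream(n_items=20, t1_pos=8, lag=2, soa_ms=83):
    """Build one RSVP trial: distractors (D) plus two targets, with T2
    `lag` positions after T1 (Lag 2 on the slide). 83 ms SOA = 12 Hz."""
    stream = ["D"] * n_items
    stream[t1_pos] = "T1"
    stream[t1_pos + lag] = "T2"
    # Pair each item with its onset time in the continuous stream.
    return [(i * soa_ms, item) for i, item in enumerate(stream)]

for onset_ms, item in build_rsvp_stream():
    print(f"{onset_ms:5d} ms  {item}")
# Subjects report the number of targets only at the end of the stream.
```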

15. RSVP Example Movie

16. RSVP Example Movie

17. Predictions for behavior in RSVP streams based on single-image EEG signals. For CT2WS, we are interested in the RSVP scenario, i.e., continuous image streams. Predictions: (1) Targets in rapid succession (consecutive) will be hard to identify because target-related neural responses will overlap. (2) Successive targets at intermediate lags (T1 followed by T2) will be hard to identify because T1-related feedback signals will corrupt T2-related feedforward activation. Two different mechanisms that impair target detection in RSVP.

18. Hypotheses borne out in behavior: The Attentional Blink. General finding across hundreds of studies: Subjects are impaired at detecting targets in close succession.

19. EEG signals confirm hypotheses. Compare EEG signals for T1-only with Lag 1 (T1+T2): Little signal difference in visual areas. [Figure labels: 62% correct, 85% correct.]

20. EEG signals confirm hypotheses. Compare EEG signals for T1-only with Lag 3 (T1+D+D+T2): In “blinkers”, T1-related feedback signals interfere with T2-related feedforward signals. [Figure labels: 44% correct, 86% correct.]

21. However, we find that the Attentional Blink can be reduced by increasing subject vigilance! Subjects were told about the Attentional Blink and instructed to “pay attention after you see the first target, because the second target can come immediately after and look like it is part of the first.”

22. Strong T2 signal in vigilant subject. [Figure label: 83% correct.]

23. Comparing signals before and after vigilance: Attentional boost! Vigilance appears to increase performance by reducing interference at the neural level, through decreasing T1-related feedback and increasing T2-related feedforward signals!

24. GU goal 2: Exploit the brain’s parallel processing capabilities. Parallel processing along the visual cortical hierarchy: potential to present and process multiple images simultaneously!

25. Proof of principle: Quad stream presentation (48 images/s)

26. Proof of principle: Quad stream presentation (48 images/s)

27. EEG signal shows robust detection signal. Washout in visual areas because of varying target position.

28. Quad stream performance follows parallel-process prediction (N=4).

29. From “bench to bread truck”: There are two main Phase I results from the GU project that can be translated to the NPU in Phase II: (1) high RSVP rates (≥12 Hz); (2) multiple concurrent image streams.

30. Robust EEG-based classification performance at 12 Hz
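
The slides do not spell out the classifier behind this result. As one plausible illustration of single-trial target/distractor classification on RSVP EEG, here is a minimal sketch using a shrinkage LDA on flattened epochs (the classifier choice, array shapes, and random placeholder data are all assumptions, not the GU method):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Random placeholder epochs: (n_trials, n_channels, n_timepoints).
# Real data would be EEG epochs locked to each image onset at 12 Hz.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 16, 50))
labels = rng.integers(0, 2, size=200)   # 1 = target, 0 = distractor

X = epochs.reshape(len(epochs), -1)     # flatten channels x time into features
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")  # ~0.5 on random data
```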

31. Boost classification performance through bagging. Already-high classification performance can be further increased by combining signals from repeated presentations (bagging). Here: two presentations. (N=3; target vs. distractor at 12 Hz, single stream.)
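
The “bagging” on this slide combines classifier evidence from repeated presentations of the same image. A minimal sketch of that score-averaging step (two presentations, as on the slide; the scores, threshold, and function name are illustrative assumptions):

```python
import numpy as np

def bag_scores(scores_per_presentation):
    """Average per-image classifier scores across repeated presentations
    of the same images; input shape (n_presentations, n_images)."""
    return np.mean(scores_per_presentation, axis=0)

# Two presentations of the same five images ("Here: two presentations").
scores = np.array([[0.9, 0.2, 0.6, 0.1, 0.8],
                   [0.7, 0.3, 0.8, 0.2, 0.9]])
combined = bag_scores(scores)
print(combined > 0.5)   # 0.5 decision threshold is an illustrative choice
```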

32. From single to multiple streams: Quad stream performance. Classification performance again follows behavior.

33. Bagging also works for the quad-stream case.

34. GU results suggest potential for a flexible speed/accuracy trade-off. Can adjust the number of streams/repeats per image depending on operational demands (e.g., quick overview of a new scenario vs. point control). Can also flexibly adjust the number of streams/presentation frame rate depending on user state.

35. Implications of GU findings for throughput. In the Yuma test, had about 768 ROI (24 × 32) for 32 targets.
System tested at Yuma: TIME = 24 × 32 × 3.29 × 5 / (10 × 1) = 1263 secs = 21 min.
Double-bagging, 12 Hz (optimized for accuracy): TIME = 24 × 32 × 2 / (12 × 1) = 128 secs = 2.1 min.
12 Hz, four parallel streams (optimized for speed): TIME = 24 × 32 × 1 / (12 × 4) = 16 secs = 0.3 min.
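
The slide's arithmetic follows one throughput formula: TIME = (number of ROI × presentations per image) / (presentation rate × number of parallel streams). A quick worked check of the two 12 Hz configurations (the factoring is an inference from the slide's numbers; the Yuma baseline involves system parameters not fully recoverable here):

```python
def review_time_s(n_roi, presentations_per_image, rate_hz, n_streams):
    """Seconds to review all ROI chips under the slide's throughput model."""
    return n_roi * presentations_per_image / (rate_hz * n_streams)

n_roi = 24 * 32  # ~768 ROI in the Yuma test
print(review_time_s(n_roi, 2, 12, 1))  # double-bagging, 12 Hz -> 128.0 s (2.1 min)
print(review_time_s(n_roi, 1, 12, 4))  # 12 Hz, four streams   -> 16.0 s (0.3 min)
```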

36. GU Phase II plans.
“Boost mode”: Selectively boost image-related signals through attentional cueing.
Explore limits of temporal and parallel processing: 12 Hz and quad streams were a convenient choice, but they’re not the limit!
Monitor ongoing activity (e.g., in active exploration/binocular mode): Make sure nothing is missed.
“Smart Cueing”: Decrease neural signal interference (temporal as well as spatial) in high-throughput scenarios by sorting ROI to decrease interference along the cortical processing hierarchy.
Optimize classification by bagging specialized classifiers (e.g., for incorrect trials).