Slide 1
Speech and Language Technology
for Dialog-Based CALL
Gary Geunbae Lee, POSTECH

Slide 2
Outline
1. Introduction
2. Spoken Dialog Systems
3. DBCALL: Educational Error Handling
4. PESAA: POSTECH English Speaking Assessment and Assistant
5. Field Study

Slide 3
Chapter 1. Introduction

Slide 4
English Tutoring Methods
Traditional approaches: <Classroom>, <Textbook>
CALL approaches: <CMC>, <ICALL>, <Multimedia>

Slide 5
Socio-Economic Effects
Changing the current foreign-language education system in public schools: from vocabulary-and-grammar methodology to speaking ability.
Significant effect of decreasing private English education spending, which reaches up to 16 trillion won annually in Korea.
Expected overseas export effect: Japan, China, etc.

Slide 6
Interdisciplinary Research
NLP
• Dialog Management
• Error Detection
• Corrective Feedback
SLA
• Comprehensible Input and Output
• Corrective Feedback
• Attitude & Motivation
Evaluation
• Cognitive Effect
• Affective Effect

Slide 7
Second Language Acquisition Theory
Input enhancement
  Comprehensible input
  Provision of inputs with high frequency
Immersion
  Authentic environment
  Direct form-meaning mapping
Noticing & attention
  Output hypothesis test
  Corrective feedback
Affective factors
  Motivation: goal achievement & rewards
  Interest: importance of L2

Slide 8
Dialog-Based CALL (DB-CALL)
A spoken dialog system embedded in a DB-CALL system, deployed in applications such as <Educational Robot> and <3D Educational Game>.

Slide 9
Existing DB-CALL Systems
Alelo
  Tactical language & culture training system
  Learn Iraqi Arabic by playing a fun video game
  Dedicated to serving the language and culture learning needs of the military
SPELL
  Learning English in functional situations such as going to a restaurant, expressing (dis)likes, etc.
  The speech recogniser is programmed to recognise grammatical and some ungrammatical utterances
DEAL
  Learning Dutch in a flea-market situation
  The model can also convey extra-linguistic signs such as lip-synching, frowning, nodding, and eyebrow movements

Slide 10
Video Demo

Slide 11
Chapter 2. Spoken Dialog Systems

Slide 12

Spoken Dialog System (SDS)

Slide 13
SDS Applications
Tele-service
Car navigation
Home networking
Robot interface

Slide 14
Automatic Speech Recognition (ASR)
Decoding pipeline: speech signals → feature extraction → decoding with the acoustic model, pronunciation model, and language model → word sequence, e.g., "버스 정류장이 어디에 있나요?" ("Where is the bus stop?")
Offline training: a speech DB feeds HMM estimation for the acoustic model; text corpora feed G2P for the pronunciation model and LM estimation for the language model; the models are combined by network construction for decoding.

Slide 15
Spoken Language Understanding (SLU)
Overall architecture of the semantic analyzer: feature extraction/selection feeds dialog act identification, frame-slot extraction, and relation extraction, whose outputs are unified (drawing on external information sources).

Semantic frame extraction (~ the information extraction approach):
  Dialog act / main action identification ~ classification
  Frame-slot object extraction ~ named entity recognition
  Object-attribute attachment ~ relation extraction

Examples of semantic frame structure:
  "I like DisneyWorld." → Domain: Chat, Dialog Act: Statement, Main Action: Like, Object.Location = DisneyWorld
  "How to get to DisneyWorld?" → Domain: Navigation, Dialog Act: WH-question, Main Action: Search, Object.Location.Destination = DisneyWorld
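For illustration only (not from the slides), the semantic frames above map naturally onto a small data structure; the field names follow the slide's examples:

from dataclasses import dataclass, field

# Sketch of the semantic frame the SLU module produces; slot paths
# like "Object.Location.Destination" follow the slide's notation.
@dataclass
class SemanticFrame:
    domain: str
    dialog_act: str
    main_action: str
    slots: dict = field(default_factory=dict)  # frame-slot objects with attached attributes

frame = SemanticFrame(
    domain="Navigation",
    dialog_act="WH-question",
    main_action="Search",
    slots={"Object.Location.Destination": "DisneyWorld"},
)
print(frame)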
Slide 16

Joint Approach: Named Entity ↔ Dialog Act
[Jeong and Lee, SLT 2006] [Jeong and Lee, IEEE TASLP 2008]

Slide 17
HDP-HMM for Unsupervised Dialog Acts

Generative story:
  β ~ GEM(α), ω ~ Dir(ω0)
  for each hidden state k ∈ [1, 2, …]:
      πk ~ DP(α′, β), φk ~ Dir(φ0), θk ~ Dir(θ0)
  for each dialog d:
      λd ~ Beta(λ0)
      for each time stamp t:
          zt ~ Multi(π_{z_{t-1}})
          for each entity e: ei ~ Multi(θ_{z_t})
          for each word w:
              xi ~ Bern(λd)   [select word type]
              if xi = 0: wi ~ Multi(φ_{z_t})
              else: wi ~ Multi(ω)   [background LM]
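To make the generative story concrete, here is a minimal forward-sampling sketch in Python/NumPy. The fixed truncation level K and the toy vocabulary sizes are assumptions for illustration; the actual model is nonparametric and is fit with Gibbs sampling rather than sampled forward like this.

import numpy as np

rng = np.random.default_rng(0)

# Truncated approximation: the real HDP-HMM has unboundedly many states.
K, V_WORD, V_ENT = 10, 50, 20   # hypothetical sizes
ALPHA, ALPHA2 = 1.0, 1.0

def stick_breaking(alpha, k):
    """Draw a k-truncated sample beta ~ GEM(alpha)."""
    b = rng.beta(1.0, alpha, size=k)
    remain = np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))
    return b * remain

beta = stick_breaking(ALPHA, K)                          # global state weights
omega = rng.dirichlet(np.ones(V_WORD))                   # background LM
pi = np.array([rng.dirichlet(ALPHA2 * K * beta + 1e-6)   # finite stand-in for DP(alpha', beta)
               for _ in range(K)])
phi = rng.dirichlet(np.ones(V_WORD), size=K)             # per-state word dists
theta = rng.dirichlet(np.ones(V_ENT), size=K)            # per-state entity dists

def generate_dialog(turns=4, n_words=6, n_ents=2):
    lam = rng.beta(1.0, 1.0)            # lambda_d ~ Beta(lambda_0)
    z = 0
    for _ in range(turns):
        z = rng.choice(K, p=pi[z])      # z_t ~ Multi(pi_{z_{t-1}})
        ents = rng.choice(V_ENT, size=n_ents, p=theta[z])  # e_i ~ Multi(theta_{z_t})
        words = []
        for _ in range(n_words):
            if rng.random() < lam:      # x_i ~ Bern(lambda_d): background word?
                words.append(rng.choice(V_WORD, p=omega))
            else:
                words.append(rng.choice(V_WORD, p=phi[z]))
        yield z, ents, words

for z, ents, words in generate_dialog():
    print("dialog act", z, "entities", ents.tolist(), "words", words)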
Slide 18

CRF with Posterior Regularization for Unsupervised NER
Constraints for NER are learned from unlabeled data.

Unlabeled corpus (excerpt):
  Welcome to the New York City Bus Tour Center. I want to buy tickets for me and my child. What kind of tour would you like to take? We would like to go on a tour during the day. We have two daytime tours: the Downtown Tour and the All Around Town Tour. Which tour goes to the Statue of Liberty? …

Dictionary/DB/Web entries:
  BOARD_TYPE: Hop-on, Hop-off
  PLACE: Times Square, Empire State Building, Chinatown, Site of the World Trade Center, Statue of Liberty, Rockefeller Center, Central Park, …

Heuristic matching yields hypotheses, e.g.:
  0:1.000: We would like to go on a tour during the day .
  0:1.000: Which tour goes to the <PLACE>Statue of Liberty</PLACE> ?
  0:1.000: You can visit the <PLACE>Statue of Liberty</PLACE> on either tour .

Labeled features (per-token label posteriors), e.g.:
  Welcome O:1.000
  W8=the O:0.924 PLACE-b:0.005 PLACE-i:0.006 TOURS-b:0.001 TOURS-i:0.064

Extracted features train a CRF model with PR.
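For concreteness, a tiny sketch of the heuristic dictionary-matching step that seeds the unsupervised training. The gazetteer entries come from the slide; the regex-based matching function is an assumed stand-in for the system's actual procedure.

import re

# Wrap every gazetteer match in <TYPE>...</TYPE> pseudo-labels, longest
# names first so multi-word entities win over substrings.
GAZETTEER = {
    "PLACE": ["Times Square", "Empire State Building", "Chinatown",
              "Statue of Liberty", "Rockefeller Center", "Central Park"],
    "BOARD_TYPE": ["Hop-on", "Hop-off"],
}

def tag_sentence(sentence):
    for label, names in GAZETTEER.items():
        for name in sorted(names, key=len, reverse=True):
            sentence = re.sub(re.escape(name),
                              f"<{label}>{name}</{label}>", sentence)
    return sentence

print(tag_sentence("Which tour goes to the Statue of Liberty ?"))
# -> Which tour goes to the <PLACE>Statue of Liberty</PLACE> ?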
Slide 19

Vanilla Example-Based DM (EBDM)
Example-based approach: dialog examples are indexed by semantic & discourse features, and the example having the most similar state is retrieved.

Dialog State Space:
  Domain = Building_Guidance
  Dialog Act = WH-QUESTION, Main Goal = SEARCH-LOC
  ROOM-TYPE = 1 (filled), ROOM-NAME = 0 (unfilled), LOC-FLOOR = 0, PER-NAME = 0, PER-TITLE = 0
  Previous Dialog Act = <s>, Previous Main Goal = <s>
  Discourse History Vector = [1, 0, 0, 0, 0]
  Lexico-semantic Pattern = "ROOM_TYPE 이 어디 지?" ("Where is the ROOM_TYPE?")
  System Action = inform(Floor)

Dialog Corpus, Turn #1 (Domain = Building_Guidance):
  USER: 회의실이 어디지? ("Where is the meeting room?")
    [Dialog Act = WH-QUESTION] [Main Goal = SEARCH-LOC] [ROOM-TYPE = 회의실 (meeting room)]
  SYSTEM: 3층에 교수회의실, 2층에 대회의실, 소회의실이 있습니다. ("The faculty meeting room is on the 3rd floor; the large and small meeting rooms are on the 2nd floor.")
    [System Action = inform(Floor)]
[Lee et al., SPECOM 2009]
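A minimal sketch of the example-based idea, assuming an exact-match index on (domain, dialog act, main goal) with slot-overlap ranking as a tie-breaker; the real system indexes the richer state shown above.

from dataclasses import dataclass

@dataclass(frozen=True)
class DialogState:
    domain: str
    dialog_act: str
    main_goal: str
    filled_slots: frozenset

@dataclass
class Example:
    state: DialogState
    system_action: str

class ExampleDB:
    def __init__(self):
        self.index = {}

    def add(self, ex):
        # index examples by the discrete part of the state
        key = (ex.state.domain, ex.state.dialog_act, ex.state.main_goal)
        self.index.setdefault(key, []).append(ex)

    def lookup(self, state):
        key = (state.domain, state.dialog_act, state.main_goal)
        candidates = self.index.get(key, [])
        # rank candidates by overlap of filled slots
        best = max(candidates, default=None,
                   key=lambda ex: len(ex.state.filled_slots & state.filled_slots))
        return best.system_action if best else "ask(rephrase)"

db = ExampleDB()
db.add(Example(DialogState("Building_Guidance", "WH-QUESTION", "SEARCH-LOC",
                           frozenset({"ROOM-TYPE"})), "inform(Floor)"))
print(db.lookup(DialogState("Building_Guidance", "WH-QUESTION", "SEARCH-LOC",
                            frozenset({"ROOM-TYPE"}))))  # -> inform(Floor)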
Slide 20

Error Handling and N-best Support
To increase the robustness of EBDM with prior knowledge:
1) Error handling: if the system knows what the user will do next, it can generate dynamic help.
Agenda graph (from the slide): GUIDE; focus node = LOCATION; next tasks = ROOM ROLE, OFFICE PHONE NUMBER.
AgendaHelp
  S: Next, you can do the subtask: 1) asking the room's role, or 2) asking the office phone number, or 3) selecting the desired room for navigation.
UtterHelp
  S: Next, you can say: 1) "What is it?", or 2) "What's the phone number of [ROOM_NAME]?", or 3) "Let's go there."
[Lee et al., CSL 2010]

Slide 21
Error Handling and N-best Support
To increase the robustness of EBDM with prior knowledge:
2) N-best support: if the system knows which subtask is more probable next, it can rescore the N-best hypotheses (h1 ~ hn).
Subtask graph (from the slide): LOCATION; next tasks = OFFICE PHONE NUMBER, FLOOR, ROOM NAME.

Current turn:
  Subtask: LOCATION | System utterance: "The director's room is Room No. 201." | System action: Inform(RoomNumber)

N-best user utterances:
  U1 (h1): "What are office rooms in this building?" | Subtask: ROOM NAME | P(hi|S) = 0.2
  U2 (h2): "What is the floor?" | Subtask: FLOOR | P(hi|S) = 0.4
  U3 (h3): "Where is it?" | Subtask: LOCATION | P(hi|S) = 0.3
  U4 (h4): "What is the phone number?" | Subtask: OFFICE PHONE NUMBER | P(hi|S) = 0.5 (more probable)
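A small sketch of the rescoring step: each ASR hypothesis's score is interpolated with the prior probability of its predicted subtask given the dialog state. The linear interpolation and its weight are assumptions; the papers define the exact scoring.

def rescore(nbest, subtask_prior, weight=0.5):
    """nbest: list of (utterance, asr_score, subtask)."""
    rescored = [
        (utt, (1 - weight) * asr + weight * subtask_prior.get(task, 0.0))
        for utt, asr, task in nbest
    ]
    return max(rescored, key=lambda x: x[1])

# P(subtask | current state), as in the slide's example
prior = {"OFFICE PHONE NUMBER": 0.5, "FLOOR": 0.4,
         "LOCATION": 0.3, "ROOM NAME": 0.2}

nbest = [("What are office rooms in this building?", 0.90, "ROOM NAME"),
         ("What is the phone number?", 0.85, "OFFICE PHONE NUMBER")]
print(rescore(nbest, prior))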
Slide 22

Misunderstanding Handling by Confirmation
[Kim et al., SLT 2010]

Slide 23
The Framework of Ranking-Based EBDM
Scoring module features: discourse similarity, relative position, dialog act features, entity constraint.
Given the dialog examples and the user intention (or system intention), the scoring module computes scores, a RankSVM ranks the candidates, and the EBDM outputs the system intention (or user intention).
[Noh et al., IWSDS 2011]

Slide 24
Dialog Simulation
User simulation for spoken dialog systems involves several essential problems:
  User intention simulation
  User utterance simulation
  ASR channel simulation
Simulated users interact with the spoken dialog system.
[Jung et al., CSL 2009]

Slide 25
Dialog Studio Architecture
Design step: define the semantic structure, dialog structure, and knowledge structure.
Annotation step: the semantic annotator, dialog annotator, and knowledge annotator produce the SLU corpus, dialog corpus, and knowledge source, with a shared dialog utterance pool; a language synchronization step keeps the layers aligned.
Training step: the SLU trainer, DM trainer, and ASR trainer (plus knowledge importer and knowledge builder) learn the SLU model, dialog model, knowledge model, and ASR/LM model.
Running step: the trained SLU, DM, and ASR components run together; ASR is an external component, and corpora and models are stored as files.
[Jung et al., SPECOM 2008]

Slide 26
Architecture of WOZ
Human subject side (user screen): the user's speech is captured by a mic; system responses are played through a speaker via TTS; the subject controls the user character.
Wizard side (wizard screen): the wizard hears the user's speech and replies by text input over network RPC; the wizard also controls the NPCs.
[Lee et al., SLATE 2011]

Slide 27
User Screen (Mission)

Slide 28
Chapter 3. DBCALL: Educational Error Handling

Slide 29
Global Errors
Global errors are errors that affect overall sentence organization. They are likely to have a marked effect on comprehension. [1]

Example exchange:
  S: What is the purpose of your trip?
  U: It's ... I ... purpose business
  S: Sorry, I didn't understand. What did you say? You can say "I am here on business."
  U: I am here on business  (Intention: inform(trip-purpose))

Slide 30
Hybrid Model
Robust to learners' errors: a hybrid model combining an utterance-based model and a dialog context-based model.
The learner's utterance is scored by level-specific utterance models (level 1 … level N, each trained on level 1 … level N data); the dialog context model uses the dialog state; the combined evidence yields the learner's intention, which is passed to the dialog manager.
Lee, S., Lee, C., Lee, J., Noh, H., & Lee, G. G. (2010). Intention-based Corrective Feedback Generation using Context-aware Model. Proceedings of the International Conference on Computer Supported Education.

Slide 31
Formulating the prediction as probabilistic inference: apply the chain rule, then Bayes' rule, and ignore invariant terms; e.g., P(intention | utterance, context) ∝ P(utterance | intention) · P(intention | context), assuming the utterance is conditionally independent of the context given the intention.

Utterance model: maximum entropy
  Features: words, parts of speech
Dialog-context model: enhanced K-nearest neighbors
  Features: previous system intention, previous user intention, current system intention, the list of exchanged information, number of database query results
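A hedged sketch of how the two models' posteriors could be combined, multiplying them per the factorization above and renormalizing; the paper's exact combination may differ.

def predict_intention(p_utterance, p_context):
    """Each argument maps intention -> probability from one model."""
    intentions = set(p_utterance) | set(p_context)
    scores = {i: p_utterance.get(i, 1e-9) * p_context.get(i, 1e-9)
              for i in intentions}
    z = sum(scores.values())
    return max(((i, s / z) for i, s in scores.items()), key=lambda x: x[1])

p_utt = {"inform(trip-purpose)": 0.4, "request(help)": 0.6}  # MaxEnt output
p_ctx = {"inform(trip-purpose)": 0.8, "request(help)": 0.2}  # KNN output
print(predict_intention(p_utt, p_ctx))  # context disambiguates the noisy utterance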
Slide 32

Dialog-Context Model

Dialog State Space:
  Domain = Fruit_Store
  Previous System Intention = Ask(Select_Item)
  Previous User Intention = Inform(Order_Fruit)
  System Intention = Ask(Order_Quantity)
  Exchanged Information State = [ITEM_NAME = 'orange' (C), ITEM_QUANTITY = 3 (U)]
  Number of DB query results = 0

Dialog Corpus, Segment #2 (Domain = Fruit_Store):
  SYSTEM: Namsu, what would you like to buy today? [Intention = Ask(Select_Item)]
  USER: I'd like to buy some oranges [Intention = Inform(Order_Fruit), ITEM_NAME = orange]
  SYSTEM: How many oranges do you need? [Intention = Ask(Order_Quantity)]
  USER: I need three oranges [Intention = Inform(Order_Quantity), NUM = three]

Dialog states are indexed by semantic & discourse features; the retrieved state predicts User Intention = Inform(Order_Quantity).

Slide 33
Recast Feedback Generation
Pipeline: the user's utterance → intention recognition → example search in the example expression DB → example expressions → pattern matching; if the match score exceeds θ (Y), feedback is generated, otherwise (N) no feedback.
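A minimal sketch of the recast decision, using string similarity as a stand-in for the pattern-matching step; the example expressions and the threshold θ correspond to the slide's pipeline, while the difflib-based matcher is an assumption.

import difflib

# Example expressions per recognized intention (from the slide's example).
EXPRESSIONS = {"inform(trip-purpose)": ["I am here on business",
                                        "I am here on vacation"]}

def recast(intention, utterance, theta=0.6):
    candidates = EXPRESSIONS.get(intention, [])
    match = difflib.get_close_matches(utterance, candidates, n=1, cutoff=theta)
    if match and match[0] != utterance:
        return match[0]          # recast feedback
    return None                  # no feedback

print(recast("inform(trip-purpose)", "I am here at business"))
# -> "I am here on business"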
Slide 34

Local Errors
Local errors are errors that affect single elements in a sentence. [1]

Example exchange:
  S: What is the purpose of your trip?
  U: I am here at business
  S: On business. I am here on business.  (ErrorInfo: prep_sub(at/on))

[1] Ellis, R. (2008). The Study of Second Language Acquisition. 2nd ed. Oxford: OUP.

Slide 35
Local Error Detector Architecture
Training: text plus simulated erroneous text (from grammatical error simulation) train two n-gram LMs and provide error patterns and error frequency statistics.
Runtime: hypotheses from ASR and ASR' are merged; the merged hypotheses go through the grammaticality checker and the error-type classifier to produce feedback.
Lee, S., Noh, H., Lee, K., & Lee, G. G. (2011). Grammatical Error Detection for Corrective Feedback Provision in Oral Conversations. Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, San Francisco.

Slide 36
Two-Step Approach
Data imbalance problem: a single model either simply predicts the majority class or suffers a high false-positive rate. The large number of error types also makes model learning and selection vastly complicated.
Grammaticality checking by itself is useful for some applications: categorizing learners' proficiency level, and generating implicit corrective feedback such as repetition, elicitation, and recast feedback.

Grammatical error detection in two steps, e.g., for "I am here at business":
  1) Grammaticality checking:    I/0  am/0  here/0  at/1  business/0
  2) Error type classification:  I/None  am/None  here/None  at/PRP_LXC  business/None

Slide 37
Grammaticality Checker: Feature Extraction

Slide 38
Grammaticality Checker: Model Learning
Binary classification with a support vector machine.
Model selection: radial basis kernel; search for C and γ that maximize F-score subject to precision > 0.90 and false-positive rate < 0.01, using 5-fold cross-validation.
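A sketch of this constrained model selection with scikit-learn (the toolkit is assumed; the slides do not name one). The random stand-in data will typically satisfy no constraint, so on real grammaticality features the grid would be tuned as shown:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import precision_score, f1_score, confusion_matrix

# Stand-in data; replace with the checker's real feature vectors.
rng = np.random.default_rng(0)
X, y = rng.random((200, 10)), rng.integers(0, 2, 200)

best, best_f1 = None, -1.0
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for C in [0.1, 1, 10, 100]:
    for gamma in [0.01, 0.1, 1]:
        pred = cross_val_predict(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=cv)
        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        # keep only settings meeting the precision and FPR constraints
        if precision_score(y, pred, zero_division=0) > 0.90 and fpr < 0.01:
            f1 = f1_score(y, pred)
            if f1 > best_f1:
                best, best_f1 = (C, gamma), f1

print("selected (C, gamma):", best)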
Slide 39

Error Type Classification
Error type information is useful for meta-linguistic feedback and for a sophisticated learner model.
Simplest way: choose the error type associated with the top-ranked error pattern. Two flaws: there is no principled way to break ties between error patterns, and the error frequency is not considered.
Weighting according to error frequency: Score(e) = TS(e) + α · EF(e)
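A tiny sketch of the weighted scoring, where TS(e) is the matched pattern's score (stand-in values here) and EF(e) the error type's relative frequency; α is the slide's weight.

ALPHA = 0.3  # assumed value for the frequency weight

def score(patterns, error_freq):
    """patterns: list of (error_type, ts). error_freq: type -> EF(e)."""
    return max(patterns,
               key=lambda p: p[1] + ALPHA * error_freq.get(p[0], 0.0))

matched = [("PRP_LXC", 0.8), ("ART_OM", 0.8)]   # tied TS scores
freq = {"PRP_LXC": 0.12, "ART_OM": 0.35}        # relative frequencies
print(score(matched, freq))  # frequency breaks the tie -> ('ART_OM', 0.8)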
Slide 40

GES: Grammar Error Simulator
Correct sentences + error types → grammatical error simulator → incorrect sentences, used for LM adaptation of the automatic speech recognizer and for grammatical error detection.

Slide 41
GES Application: <Grammar Quiz Generation>

Slide 42
Markov Logic Network
Modeled error types: subject-verb agreement errors, omission errors of prepositions, omission errors of articles.
Example: "He want go to movie theater"
Sungjin Lee, Gary Geunbae Lee. Realistic Grammar Error Simulation Using Markov Logic. Proceedings of ACL 2009, Singapore, August 2009.
Sungjin Lee, Jonghoon Lee, Hyungjong Noh, Kyusong Lee, Gary Geunbae Lee. (2011). Grammatical Error Simulation for Computer-Assisted Language Learning. Knowledge-Based Systems.

Slide 43
Grammar Error Simulation
Realistic errors: encoding the characteristics of learners' errors using Markov logic:
  Over-generalization of some rules of the L2
  Lack of knowledge of some rules of the L2
  Applying rules and forms of the first language to the L2
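An illustrative sketch of error injection: each rule fires with some probability, mimicking the three error types named on the Markov logic slide. The hand-written rules and rates stand in for the weighted first-order formulas an MLN would learn.

import random

random.seed(1)

def drop_first(words, target):
    """Omit the first occurrence of target (omission-type errors)."""
    out, dropped = [], False
    for w in words:
        if w == target and not dropped:
            dropped = True
            continue
        out.append(w)
    return out

ERROR_RULES = [
    ("sv_agreement", 0.5, lambda ws: ["want" if w == "wants" else w for w in ws]),
    ("prep_omission", 0.5, lambda ws: drop_first(ws, "to")),
    ("art_omission", 0.5, lambda ws: drop_first(ws, "the")),
]

def simulate(sentence):
    words = sentence.split()
    for _name, p, rule in ERROR_RULES:
        if random.random() < p:      # rule fires with probability p
            words = rule(words)
    return " ".join(words)

print(simulate("He wants to go to the movie theater"))
# possible output: "He want go to movie theater"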
Slide 44

Overall Process

Slide 45
NICT JLE Corpus
Number of interviews: 167
Number of sentences of interviewees: 8,316
Average sentence length: 15.59
Number of total errors: 15,954

Error annotation: <n_num crr="x">...</n_num>
  POS (e.g., n = noun), grammatical system (e.g., num = number), corrected form (crr attribute), erroneous part (element content)
Example: I belong to two baseball <n_num crr="teams">team</n_num>

Slide 46
Chapter 4. PESAA: POSTECH English Speaking Assessment and Assistant

Slide 47
English Oral Proficiency Assessment: International Tests

Slide 48
English Oral Proficiency Assessment: Korean National Test
National English Ability Test (NEAT) tasks:
  Answering short questions (communication)
  Describing pictures (storytelling)
  Presentation: describing figures, tables, and graphs; introducing products or events
  Giving an opinion (discussion)

Slide 49
English Oral Proficiency Assessment: General Common Tasks
Giving an opinion / discussion. Rubrics:
  Delivery: pronunciation, fluency (prosody)
  Language use: grammar, word choice
  Topic development: organization, discourse, contents

Slide 50
Requirements: Real Environment
Existing systems target read speech; NEAT requires handling spontaneous speech and text-independent input.

Slide 51
Training Data Collection
SNU pronunciation/prosody corpus, annotated at the levels of: speech waveform, spectrogram/pitch contour, word, PLU, sentence stress.

Slide 52
For Public Use: Boston University Radio News Corpus
Speech from FM radio news announcers; 424 paragraphs (30,821 words)
ToBI labels (pitch accent → stress); 0.48 marked stresses per word
PLU set: TIMIT phonetic labeling system

Slide 53
Aix-MARSEC Database
Speech waveform, spectrogram/pitch contour, multi-level annotation

Slide 54
Collecting Grammar Error Data: Picture Description Task
From Korean learners of English: storytelling based on pictures; 80 students (5 tasks for each student)

Slide 55
Collecting Grammar Error Data: Error Tagsets
JLE tagset: 46 tags; systematic tag structure; some ambiguity caused by the POS-specific error tag structure
CLC tagset: widely used tagset with 76 tags; systematic, taxonomic tag structure; the JLE ambiguity issue is resolved by the taxonomic structure
NUCLE tagset: 27 error tags; rather arbitrary tag structure
UIUC tagset: covers only articles and prepositions

Slide 56
PESAA: Pronunciation Feedback
Simulation part: the teaching material is run through pronunciation simulation (with the EPD) to produce error candidates.
Recognition part: the learner's speech input is recognized by ASR with forced alignment, yielding a word-level transcription, the actual pronunciation, and the orthographic pronunciation.
Error detection & feedback part: comparison of the pronunciations → error detection using the error candidates → error information → feedback generation → feedback to the user.

Slide 57
Pronunciation Error Simulation: Pronunciation Variants
Example: "strike", canonical [straik] → learner variant [sɨtɨraikɨ] (vowel epenthesis typical of Korean learners)

Slide 58
Pronunciation Error Simulation: Learning Context Rules Using Generalized TBL
Iterative initialization: from the training input and training reference, build nth-order initialization rules by majority choice / context (n := 0, then n := n + 1).
Main loop: apply the nth initial machine annotation, collect candidate transformations over left-right n-gram contexts, merge transformations, and select the best transformation, growing the list of transformations that maps the machine-annotated data toward the reference.
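A compact sketch of the TBL loop, simplified to single left-neighbor contexts (the real system uses left-right n-gram contexts and the iterative initialization described above):

from collections import Counter

def learn_tbl(inputs, references, max_rules=10):
    """inputs/references: parallel lists of equal-length phone lists."""
    current = [seq[:] for seq in inputs]
    rules = []
    for _ in range(max_rules):
        gains = Counter()   # candidate rule -> number of errors it fixes
        for hyp, ref in zip(current, references):
            for i, (h, r) in enumerate(zip(hyp, ref)):
                if h != r:
                    left = hyp[i - 1] if i > 0 else "#"
                    gains[(left, h, r)] += 1
        if not gains:
            break
        rule = gains.most_common(1)[0][0]   # best transformation
        rules.append(rule)
        left_c, src, dst = rule             # apply it to all data
        for hyp in current:
            for i in range(len(hyp)):
                if hyp[i] == src and (hyp[i - 1] if i > 0 else "#") == left_c:
                    hyp[i] = dst
    return rules

# toy parallel data: canonical phones vs. learner realizations
canonical = [["g", "ow"], ["sh", "aa", "p"]]
learner = [["g", "ao"], ["sh", "ah", "p"]]
print(learn_tbl(canonical, learner))
# -> [('g', 'ow', 'ao'), ('sh', 'aa', 'ah')]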
Slide 59

Pronunciation Error Simulation: Multi-Tag Result
Example input: "Let's go shopping" → # L EH T S # G OW # SH AH P EH NG #
Example outputs (canonical/variant per phone, "|" separating alternative tags):
  #/# L/L EH/EH T/T S/S #/# G/G OW/OW|AO #/# SH/SH AA/AH|AA P/P IH/IH NG/NG #/#
  #/# L/L EH/EH T/T S/S #/# G/G OW/AO #/# SH/SH AA/AA P/P IH/EH NG/NG #/#
  #/# L/L EH/EH T/T S/S #/# G/G OW/OW #/# SH/SH AA/AA P/P IH/EH NG/NG #/#
  #/# L/L EH/EH T/T S/S #/# G/G OW/AO #/# SH/SH AA/AH P/P IH/EH NG/NG #/#
  #/# L/L EH/EH T/T S/S #/# G/G OW/OW #/# SH/SH AA/AH P/P IH/EH NG/NG #/#

Slide 60
Pronunciation Error Detection/Feedback
The feedback decision combines: error candidate information, feedback preference, error confidence, word-level ASR confidence, and phoneme-level ASR confidence; consulting the feedback DB, it produces the feedback.

Slide 61
Pronunciation Error Detection/Feedback: Components

Slide 62
PESAA: Prosody Feedback
Stress, prosodic phrasing, and boundary tone:
  Stress: existence of word/sentence stress for each syllable/word
  Prosodic phrasing: location of phrase breaks
  Boundary tone: type of boundary tone for each phrasal boundary

Slide 63
Sentence Stress Feedback: Architecture
Prediction path: the text is analyzed (text analysis); a trained sentence stress prediction model plus rule application produces the predicted sentence stress.
Detection path: the speech signal is aligned with the text (alignment); speech analysis feeds a trained sentence stress detection model, producing the detected sentence stress.
Feedback: the difference (diff) between predicted and detected sentence stress.

Slide 64
Sentence Stress Prediction
Features used:
  Position info: number of phonemes in the word, number of syllables in the word, …
  Stress info: word stress, sentence stress (rule-based prediction), …
  Lexical info: identity of word, identity of vowel
  Part-of-speech info

Rules (name: description):
  S-basic: content words
  U-basic: functional words
  U-adhoc: unclassified FW, EX, LS, POS
  U-aux: MD special cases
  U-adv: RP special cases
  S-frgn: FW foreign words
  S-vb: last VB in multiple verbs

Slide 65
Sentence Stress Detection
Features used:
  Duration info: duration of vowel, duration of syllable, normalized word duration according to the number of syllables, …
  Intensity info: energy of vowel (+delta)
  F0 info: F0 of vowel (+delta)
  MFCC info: MFCCs of vowel (+delta, +delta-delta)
  Lexical info: identity of vowel

Slide 66
Sentence Stress Feedback
Uses the model's output probability: feedback candidates are syllables whose predicted stress comes with a low or high output probability.

Example: "It may be the most important appointment"
Predicted stress (per syllable): It may be the most im por tant ap point ment
Detected stress (per syllable):  It may be the most im por tant ap point ment
(The slide highlights which syllables are stressed vs. not stressed in each row; the highlighting is not recoverable in this text version.)
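A small sketch of selecting feedback candidates, flagging predicted/detected mismatches whose detector output probability is extreme; the thresholds are assumptions.

def stress_feedback(syllables, predicted, detected, prob, lo=0.3, hi=0.8):
    """predicted/detected: bools per syllable; prob: detector output probability."""
    feedback = []
    for syl, p, d, pr in zip(syllables, predicted, detected, prob):
        if p != d and (pr < lo or pr > hi):
            feedback.append((syl, "stress this" if p else "do not stress"))
    return feedback

syls = ["im", "por", "tant"]
print(stress_feedback(syls, [False, True, False], [True, False, False],
                      [0.9, 0.2, 0.5]))
# -> [('im', 'do not stress'), ('por', 'stress this')]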
Slide 67

Sentence Stress Feedback: Snapshot

Slide 68
PESAA: Grammar Feedback
Written English track: correct sentences → written GE (grammar error) simulator (soft constraints) → GE-tagged texts → training → GE patterns → written GE detector (SVM training), applied to the user's text input.
Spoken English track: correct sentences → spoken GE simulator (soft constraints) → GE-tagged texts/speech → training → GE patterns → spoken GE detector (SVM training), applied to ASR/CN output of the user's speech.
Both tracks produce GE feedback.

Slide 69
Grammar Error Detection: Snapshot (Written Input)

Slide 70
Grammar Error Detection: Snapshot (Spoken Input)

Slide 71
Chapter 5. Field Study

Slide 72
Field Study: Robot-Assisted Language Learning
1. Experimental Design
2. Cognitive Effects
3. Affective Effects

Sungjin Lee, Hyungjong Noh, Jonghoon Lee, Kyusong Lee, Gary Geunbae Lee, Seongdae Sagong, Moonsang Kim. (2011). On the Effectiveness of Robot-Assisted Language Learning. ReCALL, 23(1) (SSCI).
Sungjin Lee, Changgu Kim, Jonghoon Lee, Hyungjong Noh, Kyusong Lee, Gary Geunbae Lee. (2010). Affective Effects of Speech-enabled Robots for Language Learning. Proceedings of the IEEE Workshop on Spoken Language Technology (SLT 2010), Berkeley, December 2010.
Sungjin Lee, Hyungjong Noh, Jonghoon Lee, Kyusong Lee, Gary Geunbae Lee. (2010). Cognitive Effects of Robot-Assisted Language Learning on Oral Skills. Proceedings of the Interspeech Second Language Studies Workshop, Tokyo, September 2010.

Slide 73
HRI Technology

Slide 74
HRI Experimental Design
Setting and participants: 24 elementary school students, ranging in age from 9 to 13, divided into two groups (beginner, intermediate).
Material and treatment: 68 lessons (17 lessons for each level and theme), moving from simple to complex tasks; 2 hours a week extended over 8 weeks.

Slide 75
HRI Experimental Design
1) PC room
2) Pronunciation training room
3) Fruit and vegetable store
4) Stationery store

Slide 76
Evaluation of Cognitive Effects
Data collection and analysis; evaluation method: pre-test/post-test.
Listening skills: 15 multiple-choice items; Cronbach's alpha 0.87 (pre-test), 0.66 (post-test).
Speaking skills: 10 items in a one-on-one interview; Cronbach's alpha 0.93 (pre-test), 0.99 (post-test).

Slide 77
Experiment Results
<Cognitive effects on oral skills for overall students>
*p < .05

Slide 78
Evaluation of Affective Factors
Data collection: questionnaire (4-point scale without a neutral option).
Data analysis: descriptive statistics for satisfaction in using robots; pre-/post-test for interest in learning English, confidence with English, and motivation for learning English.

Affective factor                   N†    R††
Satisfaction in using robots       10    0.73
Interest in learning English       16    0.93 (0.96)
Confidence with English            12    0.91 (0.90)
Motivation for learning English    14    0.91 (0.83)
† N = number of questions; †† R = Cronbach's alpha in the form pre-test (post-test)

Slide 79
Effects on Affective Factors

Slide 80
Thank you