Towards a Game-Theoretic Framework for Information Retrieval



Presentation Transcript

Slide1

Towards a Game-Theoretic Framework for Information Retrieval

ChengXiang ("Cheng") Zhai
Department of Computer Science
University of Illinois at Urbana-Champaign
http://www.cs.uiuc.edu/homes/czhai
Email: czhai@illinois.edu


Yahoo!-DAIS Seminar, UIUC, Jan 23, 2015

Slide2

Search is everywhere, and part of everyone’s life

Web Search

Desktop Search

Site Search

Enterprise Search

Social Media Search

… …

Slide3

Search accuracy matters!

Queries per day, and the aggregate user time per day if each query takes 1 second vs. 10 seconds:

Google: 4,700,000,000 queries/day → ~1,300,000 hrs (× 1 sec) / ~13,000,000 hrs (× 10 sec)
Twitter: 1,600,000,000 queries/day → ~440,000 hrs (× 1 sec) / ~4,400,000 hrs (× 10 sec)
PubMed: 3,000,000 queries/day → ~550 hrs (× 1 sec) / ~5,500 hrs (× 10 sec)
… …

Sources:
Google: http://www.statisticbrain.com/google-searches/
Twitter: http://www.statisticbrain.com/twitter-statistics/
PubMed: http://www.nlm.nih.gov/services/pubmed_searches.html#

How can we optimize all search engines in a general way?

Slide4

However, this is an ill-defined question!

What is a search engine?
What is an optimal search engine?
What should be the objective function to optimize?

How can we optimize all search engines in a general way?

Slide5

Current-generation search engines

[Diagram: a query Q over a document collection → retrieval model (minimum NLP, machine learning) → Score(Q,D) → ranked list]

Retrieval task = rank documents for a query
Interface = ranked list ("10 blue links")
Optimal search engine = optimal score(Q,D)
Objective = ranking accuracy on training data

Slide6

Current search engines are well justified

Probability Ranking Principle [Robertson 77]: returning a ranked list of documents in descending order of the probability that each document is relevant to the query is the optimal strategy, under two assumptions:
- The utility of a document (to a user) is independent of the utility of any other document
- A user browses the results sequentially

Intuition: if a user examines one document at a time, sequentially, we would like the user to see the very best ones first.
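To make the PRP-style pipeline concrete, here is a minimal sketch (not from the talk): each document is scored independently against the query and the results are sorted by descending score. Query likelihood with Dirichlet smoothing is just one possible instantiation of score(Q,D), in the spirit of the language-modeling approach [Ponte & Croft 98; Zhai & Lafferty 01]; all names below are illustrative.

```python
import math
from collections import Counter

def score_query_likelihood(query_terms, doc_terms, coll_tf, coll_len, mu=2000.0):
    """score(Q,D): query likelihood with Dirichlet-prior smoothing (one possible choice)."""
    doc_tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_bg = max(coll_tf.get(t, 0), 0.5) / coll_len        # background (collection) model
        p_t = (doc_tf[t] + mu * p_bg) / (doc_len + mu)       # smoothed document model
        score += math.log(p_t)
    return score

def prp_rank(query_terms, docs, coll_tf, coll_len):
    """PRP: score each document independently, return doc ids in descending score order."""
    return sorted(docs,
                  key=lambda d: score_query_likelihood(query_terms, docs[d], coll_tf, coll_len),
                  reverse=True)

# Toy usage
docs = {"d1": "light laptop with long battery".split(),
        "d2": "heavy gaming laptop".split(),
        "d3": "cooking light recipes".split()}
coll = [t for terms in docs.values() for t in terms]
print(prp_rank("light laptop".split(), docs, Counter(coll), len(coll)))
```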

Slide7

Success of Probability Ranking Principle

Vector Space Models: [Salton et al. 75], [Singhal et al. 96], …
Classic Probabilistic Models: [Maron & Kuhns 60], [Harter 75], [Robertson & Sparck Jones 76], [van Rijsbergen 77], [Robertson 77], [Robertson et al. 81], [Robertson & Walker 94], …
Language Models: [Ponte & Croft 98], [Hiemstra & Kraaij 98], [Zhai & Lafferty 01], [Lavrenko & Croft 01], [Kurland & Lee 04], …
Non-Classic Logic Models: [van Rijsbergen 86], [Wong & Yao 95], …
Divergence from Randomness: [Amati & van Rijsbergen 02], [He & Ounis 05], …
Learning to Rank: [Fuhr 89], [Gey 94], …
Axiomatic Retrieval Framework: [Fang et al. 04], [Clinchant & Gaussier 10], [Fang et al. 11], …
… …

Most information retrieval models aim to optimize score(Q,D).

Slide8

Limitations of PRP

Limitations of optimizing Score(Q,D): the assumptions made by PRP don't hold in practice.
- The utility of a document depends on other documents
- Users don't strictly follow sequential browsing

As a result:
- Redundancy can't be handled (duplicate documents get the same score!)
- Collective relevance can't be modeled
- Heuristic post-processing of search results is inevitable

Slide9

Improvement: instead of scoring one document, score a whole ranked list

Instead of scoring an individual document, score an entire candidate ranked list of documents [Zhai 02; Zhai & Lafferty 06]:
- A list with redundant documents at the top can be penalized
- Collective relevance can also be captured
- Powerful machine learning techniques can be used [Cao et al. 07]
- PRP has been extended to address user interaction [Fuhr 08]

However, scoring is still done for just one query: score(Q, ranked list)
Optimal search engine = optimal score(Q, ranked list)
Objective = ranking accuracy on training data

Slide10

Limitations of single query scoring

No consideration of past queries and history
No modeling of users
Can't optimize the utility over an entire session
…

Slide11

Heuristic solutions → emerging topics in IR

No consideration of past queries and history → implicit feedback (e.g., [Shen et al. 05]), personalized search (see, e.g., [Teevan et al. 10])
No modeling of users → intent modeling (see, e.g., [Shen et al. 06]), task inference (see, e.g., [Wang et al. 13])
Can't optimize the utility over an entire session → active feedback (e.g., [Shen & Zhai 05]), exploration-exploitation tradeoff (e.g., [Agarwal et al. 09], [Karimzadehgan & Zhai 13]), POMDP for session search [Luo et al. 14]

Can we solve all these problems in a more principled way, with a unified formal framework?

Slide12

Going back to the basic questions…

What is a search engine?

What is an optimal search engine?

What should be the objective function to optimize?

How can we solve such an optimization problem?

Slide13

Proposed Solution: A Game-Theoretic Framework for IR

Retrieval process = cooperative game-playing

Players: Player 1 = search engine; Player 2 = user

Rules of the game:
- Each player takes turns to make "moves"
- The user or the system (in the case of a recommender system) makes the first move
- The user makes the last move (usually)
- For each move of the user, the system makes a response move
- Current search engine: user's moves = {query, click}; system's moves = {ranked list, show doc}

Objective: multiple possibilities
- Satisfy the user's information need with minimum user effort and minimum resource overhead for the system
- Given a constant user effort, subject to constraints on system resources, maximize the utility of the information delivered to the user
- Given a fixed "budget" for system resources and an upper bound on user effort, maximize the utility of the delivered information

Slide14

Search as a Sequential Game

User's goal: satisfy an information need with minimum effort.
System's goal: satisfy the information need with minimum user effort and minimum resources.

A1: User enters a query → System decides which information items to present and how to present them → Ri: results (i = 1, 2, 3, …)
User decides which items to view.
A2: User views an item → System decides which aspects/parts of the item to show, and how → R': item summary/preview
User decides whether to view more.
A3: User scrolls down or clicks the "Back"/"Next" button → …

Slide15

Retrieval Task = Sequential Decision-Making

User U:   A1, A2, …, At-1, At
System:   R1, R2, …, Rt-1, Rt = ?

Given U, C, At, and the history H, choose the best Rt from all possible responses to At, i.e., Rt ∈ r(At).
History: H = {(Ai, Ri)}, i = 1, …, t-1
C: collection of information items

Example: if At = the query "light laptop", r(At) = all possible rankings of items in C, and the best Rt is the best ranking for the query; if At = a click on the "Next" button, r(At) = all possible rankings of unseen items, and the best Rt is the best ranking of the unseen items.

Slide16

Formalization based on Bayesian Decision Theory: Risk Minimization Framework

[Zhai & Lafferty 06, Shen et al. 05]

Observed:
- User: U
- Interaction history: H
- Current user action: At
- Document collection: C
- All possible responses: r(At) = {r1, …, rn}

Inferred:
- User model: M = (S, θU, …), where S = seen items and θU = the information need

Loss function: L(ri, At, M)
Optimal response: r* = the response with minimum Bayes risk (expected loss)
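In the risk minimization framework [Zhai & Lafferty 06; Shen et al. 05], the optimal response minimizes the loss averaged over the posterior of the user model; a reconstruction of the formula in the slide's notation (the typesetting is mine) is:

$$
r^* = \arg\min_{r \in r(A_t)} \int_{M} L(r, A_t, M)\, P(M \mid U, H, A_t, C)\, dM
$$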

Slide17

A Simplified Two-Step Decision-Making Procedure

Approximate the Bayes risk by the loss at the mode of the posterior distribution:
Step 1: Compute an updated user model M* based on the currently available information
Step 2: Given M*, choose a response to minimize the loss function
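Written out (notation reconstructed from the slides), the two steps replace the integral above with a point estimate:

$$
M^* = \arg\max_{M} P(M \mid U, H, A_t, C), \qquad
r^* = \arg\min_{r \in r(A_t)} L(r, A_t, M^*)
$$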

Slide18

Optimal Interactive Retrieval

[Diagram: the user (U) interacts with the IR system over a collection C. For each user action A1, A2, A3, …, the system infers an updated user model M*i from P(Mi | U, H, Ai, C) and returns the response Ri that minimizes the loss L(r, Ai, M*i).]

Many possible actions: type in a query character, scroll down a page, click on any button, …
Many possible responses: query completion, display of adaptive summaries, recommendation/advertising, clarification, …

* M (the user model) can be regarded as the state in an MDP or POMDP, so reinforcement learning will be useful (see the SIGIR'14 tutorial on dynamic IR modeling [Yang et al. 14]).
* Interaction can be modeled at different levels: keyboard input, result clicking, query formulation, multi-session tasks, …

Slide19

Refinement of the Risk Minimization Framework

r(At): decision space (dependent on At)
- r(At) = all possible rankings of items in C
- r(At) = all possible rankings of unseen items
- r(At) = all possible summarization strategies
- r(At) = all possible ways to diversify top-ranked items
- r(At) = all possible ways to mix results with query suggestions (or a topic map)

M: user model. Essential components:
- θU = user information need
- S = seen items
- n = "new topic?" (or "never purchased such a product before?")
- t = user's task?

L(Rt, At, M): loss function
- Generally measures the utility of Rt for a user modeled as M
- Often encodes relevance criteria, but may also capture other preferences
- Can be based on long-term gain (i.e., "winning" the whole "game" of information service)

P(M | U, H, At, C): user model inference
- Often involves estimating the information need θU
- May also involve inference of other variables (e.g., task, exploratory vs. fixed-item search)

(Skip for a short talk)

Slide20

Case 1: Context-Insensitive IR

At = "enter a query Q"
r(At) = all possible rankings of docs in C
M = θU, a unigram language model (word distribution) representing the information need
p(M | U, H, At, C) = p(θU | Q)

Slide21

Optimal Ranking for Independent Loss

Decision space = {rankings}
Sequential browsing + independent loss → independent risk = independent scoring
"Risk ranking principle" [Zhai 02, Zhai & Lafferty 06]
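A rough sketch of why these assumptions reduce list scoring back to per-document scoring (notation mine, not from the slides): if the user browses a ranking $\pi = (d_1, d_2, \ldots)$ top-down and the loss is a sum of per-document losses, the risk of the whole ranking decomposes as

$$
R(\pi \mid U, Q) \;=\; \sum_{i} w_i \int L(d_i, A_t, M)\, P(M \mid U, H, A_t, C)\, dM,
$$

with position weights $w_i$ that decrease with rank position $i$ (reflecting sequential browsing). Since the weights decrease, the ranking that minimizes the total risk places documents in ascending order of their individual expected loss, so independent scoring suffices; this is the risk-ranking-principle analog of PRP.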

Slide22

Case 2: Implicit Feedback

At = "enter a query Q"
r(At) = all possible rankings of docs in C
M = θU, a unigram language model (word distribution)
H = {previous queries} + {viewed snippets}
p(M | U, H, At, C) = p(θU | Q, H)

Slide23

Case 3: General Implicit Feedback

At = "enter a query Q", or the "Back"/"Next" button
r(At) = all possible rankings of unseen docs in C
M = (θU, S), where S = seen documents
H = {previous queries} + {viewed snippets}
p(M | U, H, At, C) = p(θU | Q, H)

Slide24

Case 4: User-Specific Result Summary

At = "enter a query Q"
r(At) = {(D, σ)}, D ⊆ C, |D| = k, σ ∈ {"snippet", "overview"}
M = (θU, n), n ∈ {0, 1}: "topic is new to the user"
p(M | U, H, At, C) = p(θU, n | Q, H), with point estimate M* = (θ*, n*)

Loss for the summary choice σ:
               n* = 1    n* = 0
σ = snippet      1         0
σ = overview     0         1

Decision: choose the k most relevant docs; if the topic is new (n* = 1), give an overview summary, otherwise a regular snippet summary. (Equivalently, under this 0/1 loss the expected loss favors an overview exactly when the topic is more likely new than not.)

Slide25

Case 5: Modeling Different Notions of Diversification

Redundancy reduction → reduce user effort
Diverse information needs (e.g., overview, subtopic retrieval) → increase the immediate utility
Active relevance feedback → increase future utility

Slide26

Risk Minimization for Diversification

Redundancy reduction: the loss function includes a redundancy measure
- Special case: list presentation + MMR [Zhai et al. 03]
Diverse information needs: loss function defined on latent topics
- Special case: PLSA/LDA + topic retrieval [Zhai 02]
Active relevance feedback: loss function considers both relevance and the benefit for feedback
- Special case: hard queries + feedback only [Shen & Zhai 05]

Slide27

Subtopic Retrieval [Zhai et al. 03]

Query: What are the applications of robotics in the world today?

Find as many DIFFERENT applications as possible.

Example subtopics:

A1: spot-welding robotics
A2: controlling inventory
A3: pipe-laying robots
A4: talking robot
A5: robots for loading & unloading memory tapes
A6: robot [telephone] operators
A7: robot cranes
… …

Subtopic judgments (documents × subtopics A1 … Ak):
d1: 1 1 0 0 … 0 0
d2: 0 1 1 1 … 0 0
d3: 0 0 0 0 … 1 0
…
dk: 1 0 1 0 … 0 1

This is a non-traditional retrieval task …

Slide28

5.1 Diversify = Remove Redundancy

"Willingness to tolerate redundancy": C2 < C3, since a redundant relevant doc is still better than a non-relevant doc.

Greedy Algorithm for Ranking: Maximal Marginal Relevance (MMR)
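As a concrete (generic) illustration of the greedy step, here is a minimal MMR-style re-ranking sketch; the relevance and similarity functions and the trade-off parameter lambda are placeholders, not the exact loss-derived quantities of [Zhai et al. 03].

```python
def mmr_rerank(candidates, relevance, similarity, k, lam=0.7):
    """Greedy Maximal-Marginal-Relevance re-ranking (illustrative sketch).

    candidates: list of doc ids
    relevance:  dict doc_id -> relevance score for the query
    similarity: function (doc_a, doc_b) -> similarity in [0, 1]
    lam:        trade-off between relevance and novelty
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def marginal(d):
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(remaining, key=marginal)   # pick the doc with the best marginal gain
        selected.append(best)
        remaining.remove(best)
    return selected
```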

Slide29

5.2 Diversity = Satisfy Diverse Info. Need

[Zhai 02]
Need to directly model latent aspects and then optimize results based on aspect/topic matching.
Reducing redundancy doesn't ensure complete coverage of diverse aspects.

Slide30

Aspect Loss Function: Illustration

[Illustration: the desired aspect coverage p(a|θQ) is compared with the "already covered" distribution from p(a|θ1) … p(a|θk-1) and a new candidate's p(a|θk); depending on how the combined coverage matches the desired coverage, the candidate is non-relevant, redundant, or a perfect addition.]

Slide31

5.3 Diversify = Active Feedback [Shen & Zhai 05]

Decision problem: decide the subset of documents to present for relevance judgment.

Slide32

Independent Loss


Slide33

Independent Loss (cont.)

Uncertainty Sampling

Top K

Slide34

Dependent Loss

Heuristics: consider relevance first, then diversity
- Gapped Top K
- K Cluster Centroid: select the top N documents, cluster them into K clusters, use the K cluster centroids
- MMR
- …

(A sketch of these selection strategies follows below.)
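A minimal sketch of the selection strategies compared on the next slide (illustrative only; the ranking and clustering inputs are placeholders, and the exact gap/cluster settings in [Shen & Zhai 05] may differ):

```python
def top_k(ranked_docs, k):
    """Normal feedback: present the k top-ranked documents for judgment."""
    return ranked_docs[:k]

def gapped_top_k(ranked_docs, k, gap=1):
    """Present every (gap+1)-th document from the top, spreading the judged documents out."""
    return ranked_docs[::gap + 1][:k]

def k_cluster_centroid(ranked_docs, k, n, cluster_fn):
    """Cluster the top n documents into k clusters and present one representative per cluster.

    cluster_fn(docs, k) -> list of k lists of doc ids (placeholder for any clustering routine).
    """
    clusters = cluster_fn(ranked_docs[:n], k)
    return [cluster[0] for cluster in clusters if cluster]  # e.g., the doc closest to each centroid
```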

Slide35

Illustration of Three AF Methods

[Illustration: from a ranked list of documents 1, 2, 3, …, 16, Top-K (normal feedback) selects the top positions, Gapped Top-K selects every other position, and K-cluster centroid selects cluster representatives.]

Experiment results show that Top-K is worse than all the others [Shen & Zhai 05].

Slide36

Suggested answers to the basic questions

Search Engine = Game System

Optimal Search Engine = Optimal Game Plan/Strategy

Objective function: based on 3 factors and at the session level

Utility of information delivered to the user

Effort needed from the user

System resource overhead

How can we solve such an optimization problem?
Bayesian decision theory in general; partially observable Markov decision processes (POMDPs) [Luo et al. 14]; reinforcement learning; …

Slide37

Major benefits of IR as game playing

It naturally optimizes performance over an entire session instead of over a single query (optimizing the chance of winning the entire game)
It optimizes the collaboration of machines and users (maximizing collective intelligence)
It opens up many interesting new research directions (e.g., crowdsourcing + interactive IR)

Slide38

An interesting new problem: crowdsourcing the collection of relevance judgments to users

Assumption: approximate relevance judgments with clickthroughs
Question: how to optimize the exploration-exploitation tradeoff when leveraging users to collect clicks on lowly-ranked ("tail") documents?
- Where to insert a candidate?
- Which user should get this "assignment", and when?
A potential solution must include a model of the user's behavior. (A naive baseline is sketched below.)
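Purely as a naive baseline for the exploration-exploitation question (not a method proposed in the talk), an epsilon-greedy policy occasionally swaps a low-ranked candidate into a probed slot; the slot choice and probability here are arbitrary assumptions.

```python
import random

def epsilon_greedy_ranking(ranked_docs, tail_candidates, epsilon=0.1, probe_slot=3):
    """With probability epsilon, insert an unjudged tail candidate at a fixed slot to collect
    clicks (exploration); otherwise serve the current best ranking (exploitation)."""
    ranking = list(ranked_docs)
    if tail_candidates and random.random() < epsilon:
        candidate = random.choice(tail_candidates)
        ranking.insert(probe_slot, candidate)   # give a tail document some exposure
    return ranking
```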

Slide39

General Research Questions Suggested by the Game-Theoretic Framework

How should we design an IR game?
How to design "moves" for the user and the system?
How to design the objective of the game?
How to go beyond search to support access and task completion?
How to formally define the optimization problem and compute the optimal strategy for the IR system?
To what extent can we directly apply existing game theory? Does Nash equilibrium matter? What new challenges must be solved?
How to evaluate such a system? MOOCs?

Slide40

A few specific questions

How can we support natural interaction via "explanatory feedback"?
- "I want documents similar to this one, except not matching X"
- "I want documents similar to this one, but also further matching Y"
- …
How can we model a user's non-topical preferences (readability, freshness, …)?
How can we perform syntactic and semantic analysis of queries?
How can we generate adaptive explanatory summaries of documents?
How can we generate a coherent preview of search results?
How can we generate a topic map to enable users to browse freely?

Slide41

Intelligent IR System in the Future:

Optimizing multiple games simultaneously

[Diagram: an intelligent IR system, backed by documents, an interaction log, and a learning engine, plays many games simultaneously (Game 1, Game 2, …, Game k) with different users and applications, e.g., a MOOC, mobile service search, a medical advisor.]

Support the whole workflow of a user's task (multimodal info access, info analysis, decision support, task support)
Minimize user effort (maximum relevance, natural dialogue)
Minimize system resource overhead
Learn to adapt & improve over time from all users/data

Slide42

Action Item: future research requires integration of multiple fields

[Diagram: an interactive service (search, browsing, recommendation, …) connects document understanding (document collection, document representation, external document info/structures) with user understanding (user model, user interaction log, external user info such as social networks) through user actions and system responses. The fields involved include natural language processing, machine learning (particularly reinforcement learning), game theory (economics), human-computer interaction, traditional information retrieval, and psychology.]

Slide43

References

[Salton et al. 1975] A theory of term importance in automatic text analysis. G. Salton, C. S. Yang and C. T. Yu. Journal of the American Society for Information Science, 1975.
[Singhal et al. 1996] Pivoted document length normalization. A. Singhal, C. Buckley and M. Mitra. SIGIR 1996.
[Maron & Kuhns 1960] On relevance, probabilistic indexing and information retrieval. M. E. Maron and J. L. Kuhns. Journal of the ACM, 1960.
[Harter 1975] A probabilistic approach to automatic keyword indexing. S. P. Harter. Journal of the American Society for Information Science, 1975.
[Robertson & Sparck Jones 1976] Relevance weighting of search terms. S. Robertson and K. Sparck Jones. Journal of the American Society for Information Science, 1976.
[van Rijsbergen 1977] A theoretical basis for the use of co-occurrence data in information retrieval. C. J. van Rijsbergen. Journal of Documentation, 1977.
[Robertson 1977] The probability ranking principle in IR. S. E. Robertson. Journal of Documentation, 1977.

Note: the references are inevitably incomplete due to the breadth of the topic; if you know of any important missing references, please email me at czhai@illinois.edu.

Slide44

References (cont.)

[Robertson 1981] Probabilistic models of indexing and searching. S. E. Robertson, C. J. van Rijsbergen and M. F. Porter. Information Retrieval Search, 1981.
[Robertson & Walker 1994] Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. S. E. Robertson and S. Walker. SIGIR 1994.
[Ponte & Croft 1998] A language modeling approach to information retrieval. J. Ponte and W. B. Croft. SIGIR 1998.
[Hiemstra & Kraaij 1998] Twenty-One at TREC-7: ad-hoc and cross-language track. D. Hiemstra and W. Kraaij. TREC-7, 1998.
[Zhai & Lafferty 2001] A study of smoothing methods for language models applied to ad hoc information retrieval. C. Zhai and J. Lafferty. SIGIR 2001.
[Lavrenko & Croft 2001] Relevance-based language models. V. Lavrenko and B. Croft. SIGIR 2001.
[Kurland & Lee 2004] Corpus structure, language models, and ad hoc information retrieval. O. Kurland and L. Lee. SIGIR 2004.
[van Rijsbergen 1986] A non-classical logic for information retrieval. C. J. van Rijsbergen. The Computer Journal, 1986.
[Wong & Yao 1995] On modeling information retrieval with probabilistic inference. S. K. M. Wong and Y. Y. Yao. ACM Transactions on Information Systems, 1995.

Slide45

References (cont.)

[Amati & van Rijsbergen 2002] Probabilistic models of information retrieval based on measuring the divergence from randomness. G. Amati and C. J. van Rijsbergen. ACM Transactions on Information Systems, 2002.
[He & Ounis 2005] A study of the Dirichlet priors for term frequency normalization. B. He and I. Ounis. SIGIR 2005.
[Fuhr 89] Norbert Fuhr: Optimal Polynomial Retrieval Functions Based on the Probability Ranking Principle. ACM Trans. Inf. Syst. 7(3): 183-204 (1989).
[Gey 1994] Inferring probability of relevance using the method of logistic regression. F. Gey. SIGIR 1994.
[Fang et al. 2004] H. Fang, T. Tao, C. Zhai. A formal study of information retrieval heuristics. SIGIR 2004.
[Clinchant & Gaussier 2010] Stéphane Clinchant, Éric Gaussier: Information-based models for ad hoc IR. SIGIR 2010: 234-241.
[Fang et al. 2011] H. Fang, T. Tao, C. Zhai. Diagnostic evaluation of information retrieval models. ACM Transactions on Information Systems, 29(2), 2011.
[Zhai & Lafferty 06] ChengXiang Zhai, John D. Lafferty: A risk minimization framework for information retrieval. Inf. Process. Manage. 42(1): 31-55 (2006).
[Zhai 02] ChengXiang Zhai. Risk Minimization and Language Modeling in Information Retrieval. Ph.D. thesis, Carnegie Mellon University, 2002.

Slide46

References (cont.)

[Cao et al. 07] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 129-136, 2007.
[Fuhr 08] Norbert Fuhr. A probability ranking principle for interactive information retrieval. Inf. Retr. 11(3): 251-265 (June 2008).
[Shen et al. 05] Xuehua Shen, Bin Tan, and ChengXiang Zhai. Implicit User Modeling for Personalized Search. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management (CIKM '05), pages 824-831.
[Zhai et al. 03] ChengXiang Zhai, William W. Cohen, and John Lafferty. Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval. Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '03), pages 10-17, 2003.
[Shen & Zhai 05] Xuehua Shen, ChengXiang Zhai. Active Feedback in Ad Hoc Information Retrieval. Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '05), pages 59-66, 2005.
[Teevan et al. 10] Jaime Teevan, Susan T. Dumais, Eric Horvitz: Potential for personalization. ACM Trans. Comput.-Hum. Interact. 17(1) (2010).

Slide47

References (cont.)

[Shen et al. 06] Dou Shen, Jian-Tao Sun, Qiang Yang, and Zheng Chen. Building bridges for web query classification. In Proceedings of the 29th Annual International ACM SIGIR Conference (SIGIR 2006), pp. 131-138.
[Wang et al. 13] Hongning Wang, Yang Song, Ming-Wei Chang, Xiaodong He, Ryen W. White, and Wei Chu. Learning to extract cross-session search tasks. WWW 2013, pp. 1353-1364.
[Agarwal et al. 09] Deepak Agarwal, Bee-Chung Chen, and Pradheep Elango. Explore/Exploit Schemes for Web Content Optimization. In Proceedings of the 2009 Ninth IEEE International Conference on Data Mining (ICDM '09), 2009.
[Karimzadehgan & Zhai 13] Maryam Karimzadehgan, ChengXiang Zhai. A Learning Approach to Optimizing Exploration-Exploitation Tradeoff in Relevance Feedback. Information Retrieval, 16(3), 307-330, 2013.
[Luo et al. 14] J. Luo, S. Zhang, G. H. Yang. Win-Win Search: Dual-Agent Stochastic Game in Session Search. ACM SIGIR 2014.
[Yang et al. 14] G. H. Yang, M. Sloan, J. Wang. Dynamic Information Retrieval Modeling. ACM SIGIR 2014 Tutorial; http://www.slideshare.net/marcCsloan/dynamic-information-retrieval-tutorial

Slide48

Thank You!

Questions/Comments?