Presentation Transcript

[...] can never, or will never, be evaluated against [...] value is simply a matter of irreducible taste, and the correctness of their opinions cannot be objectively determined without an objective standard (Zuckerman, 1999); second, and simultaneously, as strategic actors, product critics may face pressures to differentiate from each other in order to reduce competition by establishing a niche or securing a unique position in the market (Deephouse, 1999; Greve, 2000). Whereas financial analysts appear to be constrained in their ability to publish bold and divergent forecasts because they risk market discipline if wrong, product critics, for whom information about either [...]

It is likely, however, that information intermediaries' assessments are shaped not only by the match between their individual tastes and the underlying attributes of the products they evaluate, but also by the opinions of other information intermediaries. If product critics are indeed biased by other product critics, research on social influence and competition suggests that they face two opposing forces: pressure to converge on the opinions of others and pressure to diverge from the opinions of others. [...] Therefore, we hypothesize the following:

[Hypothesis 1a (H1a): The difference between the assessments of products by two critics will decrease] when the assessment of one critic is observable for the other critic.

On the other hand, since product critics operate as market actors, critics may want to augment the difference between their reviews and the reviews of another critic, so a critic will, when the opinion of the other critic is observable, diverge from the other's opinion. Therefore, we hypothesize the following:

Hypothesis 1b (H1b): The difference between the assessments of products by two critics will increase when the assessment of one critic is observable for the other critic.

Because product critics in markets where value is subjective are not subject to the same ex post verification that can be the basis for market discipline against the critic, we expect that, on average, a product critic will generally diverge from the assessment of another critic.

It is likely that product critics do not respond to social influence and competitive pressures from all other critics equally. They are more likely to be influenced by critics with whom they have a higher degree of competitive overlap, including those critics that occupy similar market niches or focus on similar audiences. [Include discussion of salience from social comparison literature and strategic groups.] The increased salience of these other critics should enhance the social influence and competitive pressures and positively moderate the baseline [...]

[...] pairs of professional critics to review products and report a numerical score reflecting their assessment. We would then manipulate whether or not a critic within each pair, prior to publishing her review, had access to the evaluations of the product by the other critic in the pair. We do not expect critics to always have the same opinions, so the differences between the reviews of the critics in each pair in instances where the critics were not able to observe the reviews of the other critic (the control condition) capture the [...]
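To make the pair-level comparison concrete, here is a minimal sketch in Python of the contrast this hypothetical design identifies. All names and values (pair_id, score_lead, score_follower, the observed flag) are illustrative assumptions, not drawn from the paper:

```python
import pandas as pd

# Hypothetical pair-product reviews: one row per (critic pair, product).
# observed = 1 if the following critic could see the lead critic's review
# before publishing (treatment), 0 if not (control). Illustrative values.
reviews = pd.DataFrame({
    "pair_id":        [1, 1, 2, 2, 3, 3],
    "product":        ["A", "B", "A", "B", "A", "B"],
    "score_lead":     [80, 65, 90, 70, 55, 85],
    "score_follower": [74, 60, 78, 88, 50, 72],
    "observed":       [0, 1, 0, 1, 0, 1],
})

# Absolute within-pair score difference for each product.
reviews["abs_diff"] = (reviews["score_lead"] - reviews["score_follower"]).abs()

# Control mean = baseline taste disagreement; treatment minus control is the
# net effect of observability (negative => convergence, H1a;
# positive => divergence, H1b).
by_condition = reviews.groupby("observed")["abs_diff"].mean()
print(by_condition)
print(f"Observability effect: {by_condition.loc[1] - by_condition.loc[0]:+.2f}")
```

Under this design, the control-condition mean captures the baseline level of taste disagreement, so the sign of the treatment-control gap is exactly the quantity about which H1a (convergence) and H1b (divergence) make opposite predictions.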
Figures 7 and 8 demonstrate that video game review intervals at the pair level follow the same pattern as movie reviews. Figure 7 shows review intervals for each pair of video game critics in the data. Note that, consistent with the larger overall variance in review publication dates shown in Figure 3, the intervals between reviews for each pair of reviewers for a given video game exhibit higher variance than those of film critic reviews. Figure 8 provides an example of the variation in the review intervals for a pair of video game critics, with the left panel capturing cases where Cheat Code Central is the lead and GameSpot is the follower, and the right panel capturing cases where GameSpot is the lead and Cheat Code Central follows.

Data and Variables

We draw our data on movie and video game reviews from Metacritic.com (Metacritic). We collected all movie reviews and video game reviews posted on Metacritic from its inception through August 22, 2013 for movies and through June 9, 2014 for video games. Metacritic does not attempt to amass the entire universe of critic reviews for each product. Instead, it collects reviews from a select group of sources that it deems to be high quality. Metacritic converts each review to a 100-point scale (the MC score). Metacritic includes in its database virtually every movie theatrically released in the US since the site's inception in 1999 (including limited releases and re-releases, as long as reviews for such movies were published in at least a few of its pool of publication sources) and virtually all video games commercially released in the same time period. [We construct an indicator that takes a value of 0 if the two reviews were published on the same day] and a value of 1 if the reviews were published on different days. Dyad-movie observations where reviews for both critics are published on the same day constitute our control group. Dyad-movie observations where reviews for both critics are published on different days constitute the treatment group.

We also construct two variables as measures of competitive overlap at the pair level. Review Overlap is the log of the number of products reviewed by both of the critics within the pair during the 30-day period prior to the focal reviews. This measure is designed to capture the extent to which the product critics operate in the same niche. Similarly, for movie critics, we designate each publication as either national in scope or local. We then construct four dummy variables, one for each possible combination of national and local for the lead critic and the follower: national-national, national-local, local-national, and local-local. [This measure is designed to capture the relative status of each critic.]

[...] consistent with Figure 9 and Model 1A above.

[Figures: panels plotting Movie Review Score Difference and Movie Review Score Divergence under the No Observation and Observation conditions, by Publication Match (Leader - Follower: Natl-Natl, Natl-Local, Local-Natl, Local-Local) and by Reviewing Overlap (ln).]
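As a rough sketch of how the treatment indicator and the pair-level competitive-overlap measures described above might be constructed, assuming a toy two-critic dataset and hypothetical column names (this is not the authors' actual pipeline):

```python
import numpy as np
import pandas as pd

# Hypothetical review-level data for one pair of critics (C1, C2):
# one row per (critic, product, publication date). Illustrative only.
reviews = pd.DataFrame({
    "critic":  ["C1", "C2", "C1", "C2", "C1", "C2"],
    "product": ["M1", "M1", "M2", "M2", "M3", "M3"],
    "date": pd.to_datetime(
        ["2013-01-04", "2013-01-04", "2013-02-20",
         "2013-02-25", "2013-03-10", "2013-03-12"]),
})

# One dyad-product observation per product: each critic's publication date.
dyad = reviews.pivot(index="product", columns="critic", values="date")

# Treatment indicator: 0 if both reviews appeared on the same day (control
# group), 1 if they appeared on different days (treatment group).
dyad["different_days"] = (dyad["C1"] != dyad["C2"]).astype(int)

# Review Overlap for a focal review date: log of the number of products both
# critics reviewed during the preceding 30-day window.
focal = pd.Timestamp("2013-03-12")
window = reviews[(reviews["date"] >= focal - pd.Timedelta(days=30))
                 & (reviews["date"] < focal)]
shared = (set(window.loc[window["critic"] == "C1", "product"])
          & set(window.loc[window["critic"] == "C2", "product"]))
review_overlap = np.log(len(shared)) if shared else np.nan

# Publication-match dummies: one per lead-follower combination of scope.
lead_scope, follower_scope = "national", "local"  # e.g., C1 leads, C2 follows
combos = ["national-national", "national-local",
          "local-national", "local-local"]
dummies = {c: int(f"{lead_scope}-{follower_scope}" == c) for c in combos}

print(dyad)
print(f"Review Overlap (ln): {review_overlap}")
print(dummies)
```

A real implementation would also have to decide how to treat pairs with no overlapping reviews in the 30-day window, since the log of zero is undefined; the sketch above returns NaN in that case.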
