Chapter 8 Categorical Data Analysis



Presentation Transcript

1. Chapter 8: Categorical Data Analysis

2. Inference for a Single Proportion (p)
Goal: Estimate the proportion of individuals in a population with a certain characteristic (p). This is equivalent to estimating a binomial probability.
Sample: Take an SRS of n individuals from the population and observe y that have the characteristic. The sample proportion p̂ = y/n has mean p and standard error √(p(1-p)/n).

3. Large-Sample Confidence Interval for p
Take an SRS of size n from a population where p is the true (unknown) proportion of successes, and observe y successes.
Set the confidence level (1-α) and obtain z_(α/2) from the z-table. The interval is p̂ ± z_(α/2)·√(p̂(1-p̂)/n).

4. Example - Ginkgo and Acetazolamide for AMS
Study Goal: Measure the effect of Ginkgo and Acetazolamide on the occurrence of Acute Mountain Sickness (AMS) in Himalayan trekkers.
Parameter: p = true proportion of all trekkers receiving Ginkgo & Acetazolamide who would suffer from AMS.
Sample Data: n = 126 trekkers received G&A; y = 18 suffered from AMS.
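The large-sample interval can be sketched as follows (a Python illustration, since the deck's own code is in R; the function name `wald_ci` is mine):

```python
import math

def wald_ci(y, n, z=1.959964):
    """Large-sample (Wald) confidence interval for a binomial proportion p."""
    p_hat = y / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)   # estimated standard error
    return p_hat - z * se, p_hat + z * se

# AMS example: y = 18 of n = 126 trekkers suffered AMS
lo, hi = wald_ci(18, 126)
```

For these data the interval runs from about 0.082 to 0.204, consistent with the sample proportion 18/126 = 0.1429.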

5. Sample Size for Margin of Error = E
Goal: Estimate p within E with 100(1-α)% confidence; the confidence interval will have a width of 2E.
Solving the half-width equation gives n = p(1-p)(z_(α/2)/E)²; with no prior estimate of p, use the conservative value p = 0.5.
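A minimal sketch of the sample-size calculation (Python rather than the deck's R; the helper name is mine):

```python
import math

def n_for_margin(E, p=0.5, z=1.959964):
    """Smallest n so the CI half-width is at most E.
    p = 0.5 is the conservative choice when no prior estimate exists."""
    return math.ceil(p * (1 - p) * (z / E) ** 2)

n = n_for_margin(0.03)   # 3% margin of error at 95% confidence
```

For E = 0.03 this gives the familiar n = 1068; for E = 0.05, n = 385.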

6. Wilson-Agresti-Coull Method
For small to moderate sample sizes, large-sample methods may not work well with respect to coverage probabilities.
A simple approach that works well in practice: adjust the observed number of successes (y) and the sample size (n) by adding z²/2 successes and z² trials before applying the large-sample formula.

7. Example: Lister's Tests with Antiseptic
Experiments with antiseptic in patients with upper limb amputations (Joseph Lister, circa 1870).
n = 12 patients received antiseptic; y = 1 died.
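The Wilson-Agresti-Coull adjustment applied to Lister's data can be sketched as (a Python illustration; clamping the interval to [0, 1] is my addition):

```python
import math

def agresti_coull_ci(y, n, z=1.959964):
    """Adjust counts (add z^2/2 successes and z^2 trials -- roughly
    'add 2 successes and 2 failures' at 95%), then apply the Wald formula."""
    y_adj = y + z ** 2 / 2
    n_adj = n + z ** 2
    p_adj = y_adj / n_adj
    se = math.sqrt(p_adj * (1 - p_adj) / n_adj)
    # clamp to [0, 1], since the adjusted Wald endpoints can stray outside
    return max(0.0, p_adj - z * se), min(1.0, p_adj + z * se)

# Lister's data: y = 1 death among n = 12 antiseptic patients
lo, hi = agresti_coull_ci(1, 12)
```

With y = 1 and n = 12 the raw lower endpoint is slightly negative, so it is clamped to 0; the upper endpoint is about 0.375.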

8. Significance Test for a Proportion
Goal: Test whether a proportion (p) equals some null value p0; H0: p = p0.
The large-sample test works well when np0 and n(1-p0) are both ≥ 5. The test statistic is z = (p̂ - p0)/√(p0(1-p0)/n).
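The test statistic is a one-liner (Python sketch; note the standard error is computed under the null value p0, not from p̂):

```python
import math

def one_prop_z(y, n, p0):
    """Large-sample z statistic for H0: p = p0; SE uses the null value p0."""
    p_hat = y / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

z = one_prop_z(18, 126, 0.25)   # AMS example, Ha: p < 0.25
```

For the AMS data z is about -2.78, agreeing with the strong evidence against H0 reported on the next slide (the exact binomial p-value from R is 0.002465).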

9. Ginkgo and Acetazolamide for AMS
Can we claim that the incidence rate of AMS is less than 25% for trekkers receiving G&A?
H0: p = 0.25   Ha: p < 0.25
Strong evidence that the incidence rate is below 25% (p < 0.25).

10. R Code/Output
y <- 18; n <- 126
binom.test(y, n, p=0.25, alternative="less")

> binom.test(y, n, p=0.25, alternative="less")

	Exact binomial test

data:  y and n
number of successes = 18, number of trials = 126, p-value = 0.002465
alternative hypothesis: true probability of success is less than 0.25
95 percent confidence interval:
 0.0000000 0.2044495
sample estimates:
probability of success 
             0.1428571 

The 95% confidence interval is 1-sided as the alternative is "less" than the null value.

11. Multinomial Experiment / Distribution
Extension of the Binomial Distribution to experiments where each trial can end in exactly one of k categories:
n independent trials
Probability a trial results in category i is p_i; n_i is the number of trials resulting in category i
p_1 + … + p_k = 1   n_1 + … + n_k = n

12. Multinomial Distribution / Test for Cell Probabilities

13. Example - English Premier League 2013
Home team games can end in Win, Draw, or Lose (k = 3).
Season: n = 380 games (all 20 teams play home/away).
Test H0: pW = pL = 0.40, pD = 0.20
Data: nW = 179, nD = 78, nL = 123

14. English Premier League 2013 - R Code
#### Multinomial Goodness of Fit Test
## Give counts
game.count <- c(179, 78, 123)
## Give null values for probabilities
prob.null <- c(0.40, 0.20, 0.40)
## Use chisq.test function for the test
chisq.test(game.count, p=prob.null)

> chisq.test(game.count, p=prob.null)

	Chi-squared test for given probabilities

data:  game.count
X-squared = 10.382, df = 2, p-value = 0.005568

15. Goodness of Fit Test for a Probability Distribution
Data are collected, and we wish to determine whether they come from a specific probability distribution (e.g. Poisson, Normal, Gamma).
Estimate any unknown model parameters (p estimates).
Break the range of data values into k > p intervals (typically chosen so that ≥ 80% have expected counts ≥ 5), and obtain observed (n) and expected (E) counts for each interval.
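The steps above can be sketched for a Poisson fit (Python illustration; the goal counts in the example call are hypothetical, not the Brazil data, which the deck does not list cell by cell):

```python
import math

def poisson_gof(counts, lam):
    """Chi-square GOF statistic for a Poisson(lam) fit.
    counts[k] = observed # of games with k goals; the last cell is
    treated as 'k or more' so the cell probabilities sum to 1."""
    n = sum(counts)
    k = len(counts)
    probs = [math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k - 1)]
    probs.append(1.0 - sum(probs))           # lump the upper tail
    expected = [n * p for p in probs]
    x2 = sum((o - e) ** 2 / e for o, e in zip(counts, expected))
    df = k - 1 - 1                           # lose 1 df for estimating lam
    return x2, df

# hypothetical counts for 0,1,2,3,4,5+ goals in 380 games; lam = sample mean
x2, df = poisson_gof([30, 80, 95, 70, 45, 60], 2.46)
```

With k = 6 cells and one estimated parameter, the reference distribution is chi-squared with 6 - 1 - 1 = 4 degrees of freedom.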

16. Example - Goals in 2013 Brazil Soccer
The league has 20 teams; each team plays the other 19 teams twice.
Games are 90 minutes, with no overtime.
The mean and variance of the total goals in a game are 2.46 and 2.61, respectively.
For the Poisson distribution, the theoretical mean and variance are the same. For this empirical data, they are close.

17. Comparing Two Population Proportions
Goal: Compare two populations/treatments with respect to a nominal (binary) outcome.
Sampling design: independent vs dependent samples.
Methods based on large vs small samples.
Contingency tables are used to summarize the data.
Measures of association: Absolute Risk, Relative Risk, Odds Ratio.

18. Contingency Tables
Tables representing all combinations of levels of the explanatory and response variables.
Numbers in the table represent counts of the number of cases in each cell.
Row and column totals are called marginal counts.

19. 2x2 Tables - Notation

Group    | Outcome Present | Outcome Absent      | Total
Group 1  | y1              | n1 - y1             | n1
Group 2  | y2              | n2 - y2             | n2
Total    | y1 + y2         | (n1+n2) - (y1+y2)   | n1 + n2

20. Example - Firm Type / Product Quality

Group                 | High Quality | Low Quality | Total
Not Integrated        | 33           | 55          | 88
Vertically Integrated | 5            | 79          | 84
Total                 | 38           | 134         | 172

Groups: Not Integrated (weave only) vs Vertically Integrated (spin and weave) cotton textile producers.
Outcomes: High Quality (high count) vs Low Quality (low count).
Source: P. Temin (1988). "Product Quality and Vertical Integration in the Early Cotton Textile Industry," Journal of Economic History, Vol. 48, #4, pp. 891-907.

21. Notation
Proportion in Population 1 with the characteristic of interest: p1
Sample size from Population 1: n1
Number of individuals in Sample 1 with the characteristic of interest: y1
Sample proportion from Sample 1 with the characteristic of interest: p̂1 = y1/n1
Similar notation for Population/Sample 2.

22. Example - Cotton Textile Producers
p1 - True proportion of all non-integrated firms that would produce high quality
p2 - True proportion of all vertically integrated firms that would produce high quality

23. Notation (Continued)
Parameter of primary interest: p1 - p2, the difference in the two population proportions with the characteristic (two other measures are given below).
Estimator: p̂1 - p̂2
Standard error (and its estimate): √(p1(1-p1)/n1 + p2(1-p2)/n2), estimated by substituting p̂1 and p̂2.
Pooled estimated standard error when p1 = p2 = p: √(p̂(1-p̂)(1/n1 + 1/n2)), with p̂ = (y1+y2)/(n1+n2).

24. Cotton Textile Producers (Continued)
Parameter of primary interest: p1 - p2, the difference in the two population proportions that produce high-quality output.
Estimator: p̂1 - p̂2 = 33/88 - 5/84 = 0.3750 - 0.0595 = 0.3155
The standard error (and its estimate) and the pooled estimated standard error when p1 = p2 = p follow the formulas on the previous slide.

25. Significance Tests for p1 - p2
Testing whether p1 = p2 can be done by interpreting "plausible values" of p1 - p2 from the confidence interval:
If the entire interval is positive, conclude p1 > p2 (p1 - p2 > 0)
If the entire interval is negative, conclude p1 < p2 (p1 - p2 < 0)
If the interval contains 0, do not conclude that p1 ≠ p2
Alternatively, we can conduct a significance test:
H0: p1 = p2   Ha: p1 ≠ p2 (2-sided)   Ha: p1 > p2 (1-sided)
Test statistic: zobs = (p̂1 - p̂2) / SE_pooled
RR: |zobs| ≥ z_(α/2) (2-sided)   zobs ≥ z_α (1-sided)
P-value: 2P(Z ≥ |zobs|) (2-sided)   P(Z ≥ zobs) (1-sided)
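The pooled test can be sketched as follows (a Python illustration of the deck's formula; the R `prop.test` output two slides down reports the same quantity as X-squared = zobs²):

```python
import math

def two_prop_z(y1, n1, y2, n2):
    """Pooled z statistic for H0: p1 = p2."""
    p1, p2 = y1 / n1, y2 / n2
    p_pool = (y1 + y2) / (n1 + n2)           # pooled estimate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# cotton textile data: 33/88 not integrated vs 5/84 vertically integrated
z = two_prop_z(33, 88, 5, 84)                # z^2 matches X-squared = 24.851
```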

26. Example - Cotton Textile ProductionStrong evidence of differences in quality by firm type

27. R Code and Output
y1 <- 33; n1 <- 88
y2 <- 5; n2 <- 84
prop.test(c(y1,y2), c(n1,n2), correct=F)

> prop.test(c(y1,y2), c(n1,n2), correct=F)

	2-sample test for equality of proportions without continuity correction

data:  c(y1, y2) out of c(n1, n2)
X-squared = 24.851, df = 1, p-value = 6.195e-07
alternative hypothesis: two.sided
95 percent confidence interval:
 0.2023778 0.4285746
sample estimates:
    prop 1     prop 2 
0.37500000 0.05952381

28. Measures of Association
Absolute Risk (AR): p1 - p2
Relative Risk (RR): p1 / p2
Odds Ratio (OR): o1 / o2, where o = p/(1-p)
Note that if p1 = p2 (no association between the outcome and grouping variables):
AR = 0, RR = 1, OR = 1

29. Relative Risk
Ratio of the probability that the outcome characteristic is present for one group relative to the other.
Sample proportions with the characteristic from groups 1 and 2: p̂1 = y1/n1 and p̂2 = y2/n2.

30. Relative Risk
Estimated relative risk: RR = p̂1 / p̂2
95% confidence interval for the population relative risk: exp(ln(RR) ± 1.96·√((1-p̂1)/(n1p̂1) + (1-p̂2)/(n2p̂2))); the interval is built on the log scale and then exponentiated.
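A sketch of the log-scale interval (Python illustration; the example call reuses the cotton textile counts since the concussion table's cell counts are not given in the transcript):

```python
import math

def rr_ci(y1, n1, y2, n2, z=1.959964):
    """Estimated relative risk with a 95% CI built on the log scale."""
    rr = (y1 / n1) / (y2 / n2)
    # (1-p1)/(n1*p1) + (1-p2)/(n2*p2) simplifies to 1/y1 - 1/n1 + 1/y2 - 1/n2
    se_log = math.sqrt(1 / y1 - 1 / n1 + 1 / y2 - 1 / n2)
    half = z * se_log
    return rr, math.exp(math.log(rr) - half), math.exp(math.log(rr) + half)

rr, lo, hi = rr_ci(33, 88, 5, 84)   # RR = 6.3; interval lies entirely above 1
```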

31. Relative Risk - Interpretation
Conclude that the probability that the outcome is present is higher (in the population) for group 1 if the entire interval is above 1.
Conclude that the probability that the outcome is present is lower (in the population) for group 1 if the entire interval is below 1.
Do not conclude that the probability of the outcome differs for the two groups if the interval contains 1.

32. Example - Concussions in NCAA Athletes
Units: game exposures among college soccer players, 1997-1999
Outcome: presence/absence of a concussion
Group variable: gender (female vs male)
Contingency table of case outcomes:
Source: Covassin, et al. (2003). "Sex Differences and the Incidence of Concussions Among Collegiate Athletes," Journal of Athletic Training, Vol. 38, #3, pp. 238-244.

33. Example - Concussions in NCAA Athletes
There is strong evidence that females have a higher risk of concussion.

34. Odds Ratio
The odds of an event is the probability it occurs divided by the probability it does not occur.
The odds ratio is the odds of the event for group 1 divided by the odds of the event for group 2.
Sample odds of the outcome for each group: o1 = p̂1/(1-p̂1) = y1/(n1-y1) and o2 = y2/(n2-y2).

35. Odds Ratio
Estimated odds ratio: OR = o1/o2 = y1(n2-y2) / (y2(n1-y1))
95% confidence interval for the population odds ratio: exp(ln(OR) ± 1.96·√(1/y1 + 1/(n1-y1) + 1/y2 + 1/(n2-y2))); the interval is built on the log scale and then exponentiated.

36. Odds Ratio - Interpretation
Conclude that the probability that the outcome is present is higher (in the population) for group 1 if the entire interval is above 1.
Conclude that the probability that the outcome is present is lower (in the population) for group 1 if the entire interval is below 1.
Do not conclude that the probability of the outcome differs for the two groups if the interval contains 1.

37. Osteoarthritis in Former Soccer Players
Units: 68 former British professional football players and 136 age/sex-matched controls
Outcome: presence/absence of osteoarthritis (OA)
Data:
Of n1 = 68 former professionals, y1 = 9 had OA; n1 - y1 = 59 did not
Of n2 = 136 controls, y2 = 2 had OA; n2 - y2 = 134 did not
Source: Shepard, et al. (2003). "Ex-professional association footballers have an increased prevalence of osteoarthritis of the hip compared with age matched controls despite not having sustained notable hip injuries," Brit. J. Sports Med., Vol. 37, #1, pp. 80-81.
The entire confidence interval lies above 1.
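The odds-ratio interval for these data can be sketched as (Python illustration of the log-scale formula above; the function name is mine):

```python
import math

def or_ci(y1, n1, y2, n2, z=1.959964):
    """Sample odds ratio with a 95% CI built on the log scale."""
    a, b = y1, n1 - y1                       # group 1: OA present / absent
    c, d = y2, n2 - y2                       # group 2: OA present / absent
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    half = z * se_log
    return (or_hat,
            math.exp(math.log(or_hat) - half),
            math.exp(math.log(or_hat) + half))

# osteoarthritis data: 9/68 ex-professionals vs 2/136 controls
or_hat, lo, hi = or_ci(9, 68, 2, 136)       # interval lies entirely above 1
```

The point estimate is (9·134)/(59·2) ≈ 10.2, and the lower confidence limit exceeds 1, matching the slide's conclusion.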

38. Fisher's Exact Test
Method for testing whether p1 = p2 when one or both of the group sample sizes are small.
Measures (conditional on the group sizes and the number of cases with and without the characteristic) the chance we would see differences of this magnitude or larger in the sample proportions if there were no differences in the populations.

39. Example - Echinacea Purpurea for Colds
Healthy adults randomized to receive EP (n1 = 24) or placebo (n2 = 22; two were dropped).
Among EP subjects, 14 of 24 developed a cold after exposure to RV-39 (58%).
Among placebo subjects, 18 of 22 developed a cold after exposure to RV-39 (82%).
Out of a total of 46 subjects, 32 developed colds; 24 received EP.
Source: S.J. Sperber, et al. (2004), "Echinacea Purpurea for Prevention of Experimental Rhinovirus Colds," Clinical Infectious Diseases, Vol. 38, #10, pp. 1367-1371.

40. Example - Echinacea Purpurea for Colds
Conditional on 32 people developing colds, 24 receiving EP, and 22 receiving placebo, the following table gives the outcomes that would have been as strong or stronger evidence that EP reduced the risk of developing a cold (1-sided test). The P-value from R is 0.079 (next slide).

41. R Code/Output
ep.cold <- matrix(c(10,4, 14,18), ncol=2)
fisher.test(ep.cold, alt="greater")
fisher.test(ep.cold, alt="two.sided")

> fisher.test(ep.cold, alt="greater")

	Fisher's Exact Test for Count Data

data:  ep.cold
p-value = 0.07867
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 0.8653928       Inf
sample estimates:
odds ratio 
  3.132591

42. McNemar's Test for Paired Samples
Common subjects (or matched pairs) are observed under 2 conditions (2 treatments, before/after, 2 diagnostic tests) in a crossover setting.
Two possible outcomes (presence/absence of the characteristic) on each measurement.
Four possibilities for each subject/pair with respect to the outcome:
Present in both conditions
Absent in both conditions
Present in condition 1, absent in condition 2
Absent in condition 1, present in condition 2

43. McNemar’s Test for Paired Samples

44. McNemar's Test for Paired Samples
Data: n12 = # of pairs where the characteristic is present in condition 1 and not 2, and n21 = # where it is present in 2 and not 1.
H0: The probability the outcome is present is the same for the 2 conditions (p1 = p2)
HA: The probabilities differ for the 2 conditions (p1 ≠ p2)
Test statistic: z = (n12 - n21)/√(n12 + n21), or equivalently the 1-df chi-square (n12 - n21)²/(n12 + n21).
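Since only discordant pairs carry information about a difference, the statistic is tiny to compute (Python sketch; the discordant counts 28 and 5 are read from the matrix in the R code on slide 47):

```python
import math

def mcnemar(n12, n21):
    """McNemar statistics: z and the equivalent 1-df chi-square.
    Only the discordant pairs (n12, n21) enter the test."""
    z = (n12 - n21) / math.sqrt(n12 + n21)
    return z, z ** 2

# breast-implant example: 28 present-in-1-only vs 5 present-in-2-only
z, x2 = mcnemar(28, 5)    # x2 reproduces mcnemar.test's 16.03
```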

45. Example - Reporting of Silicone Breast Implant Leakage in Revision Surgery
Subjects: 165 women having revision surgery involving silicone gel breast implants.
Conditions (each observed on all women):
Self-report of presence/absence of rupture/leak
Surgical record of presence/absence of rupture/leak
Source: Brown and Pennello (2002), "Replacement Surgery and Silicone Gel Breast Implant Rupture," Journal of Women's Health & Gender-Based Medicine, Vol. 11, pp. 255-264.

46. Example - Reporting of Silicone Breast Implant Leakage in Revision Surgery
H0: The tendency to report ruptures/leaks is the same for self-reports and surgical records
HA: The tendencies differ

47. R Code and Output
rupture <- matrix(c(69,5, 28,63), ncol=2)
mcnemar.test(rupture, correct=F)

> mcnemar.test(rupture, correct=F)

	McNemar's Chi-squared test

data:  rupture
McNemar's chi-squared = 16.03, df = 1, p-value = 6.234e-05

Note that the mcnemar.test function reports z², which is chi-square with 1 degree of freedom (thus it is equivalent to the z-test).

48. Mantel-Haenszel Test / CI for Multiple Tables
Data are collected from q studies or strata in 2x2 contingency tables with common groupings/outcomes.
Each table has 4 cells: nh11, nh12, nh21, nh22, for h = 1,…,q.
They can be combined into an overall chi-square statistic, or an overall odds ratio and confidence interval.

49. Mantel-Haenszel Computations

50. Associations Between Categorical Variables
Case where both the explanatory (independent) variable and the response (dependent) variable are qualitative.
Association: The distributions of responses differ among the levels of the explanatory variable (e.g. party affiliation by gender).

51. Contingency Tables
Cross-tabulations of frequency counts where the rows (typically) represent the levels of the explanatory variable and the columns represent the levels of the response variable.
Numbers within the table represent the numbers of individuals falling in the corresponding combination of levels of the two variables.
Row and column totals are called the marginal distributions for the two variables.

52. Example - Acute Mountain Sickness in Hikers
Explanatory variable: treatment (Placebo, Acetazolamide, Ginkgo, Acetazolamide/Ginkgo)
Response: presence/absence of Acute Mountain Sickness in Himalayan trekkers
Units: n = 487 hikers
Hikers were randomly assigned to treatment conditions.
Source: J.H. Gertsch, B. Basnyat, E.W. Johnson, J. Onopa, and P.S. Holck (2004). "Randomized, Double-Blind Placebo Controlled Comparison of Ginkgo Biloba and Acetazolamide for Prevention of Acute Mountain Sickness Among Himalayan Trekkers: the Prevention of High Altitude Illness Trial," BMJ, 328: pp. 797-

53. Example - Acute Mountain Sickness in Hikers
For each treatment (row) we can compute the percentage of hikers in the AMS presence/absence conditions: the conditional distribution. Of the 119 hikers in the placebo condition, 40 suffered from AMS, a proportion of 40/119 = 0.3361, or 33.61% as a percentage.

54. Guidelines for Contingency Tables
Compute percentages for the response (column) variable within the categories of the explanatory (row) variable. Note that in journal articles, rows and columns may be interchanged.
Divide the cell totals by the row (explanatory category) total and multiply by 100 to obtain percents; the row percents will add to 100.
Give a title and clearly define the variables and categories.
Include row (explanatory) total sample sizes.

55. Independence & Dependence
Statistically independent: the population conditional distributions of one variable are the same across all levels of the other variable.
Statistically dependent: the conditional distributions are not all equal.
When testing, researchers typically wish to demonstrate dependence (alternative hypothesis) and refute independence (null hypothesis).

56. Pearson's Chi-Square Test
Can be used for nominal or ordinal explanatory and response variables.
Variables can have any number of distinct levels.
Tests whether the distribution of the response variable is the same for each level of the explanatory variable (H0: no association between the variables).
r = # of levels of the explanatory variable
c = # of levels of the response variable

57. Pearson's Chi-Square Test
Intuition behind the test statistic:
Obtain the marginal distribution of outcomes for the response variable.
Apply this common distribution to all levels of the explanatory variable by multiplying each proportion by the corresponding sample size.
Measure the difference between the actual cell counts and the expected cell counts from the previous step.

58. Pearson's Chi-Square Test
Notation to obtain the test statistic:
Rows represent the explanatory variable (r levels); columns represent the response variable (c levels).

      | 1    | 2    | …  | c    | Total
1     | n11  | n12  | …  | n1c  | n1.
2     | n21  | n22  | …  | n2c  | n2.
…     | …    | …    | …  | …    | …
r     | nr1  | nr2  | …  | nrc  | nr.
Total | n.1  | n.2  | …  | n.c  | n..

59. Pearson's Chi-Square Test
Observed frequency (nij): the number of individuals falling in a particular cell.
Expected frequency (Eij): the number we would expect in that cell, given the sample sizes observed in the study and the assumption of independence. Computed by multiplying the row total and the column total and dividing by the overall sample size: Eij = (ni.)(n.j)/n.. This applies the overall marginal probability of the response category to the sample size of the explanatory category.

60. Pearson's Chi-Square Test
Large-sample test (at least 80% of Eij ≥ 5).
H0: Variables are statistically independent (no association between the variables)
Ha: Variables are statistically dependent (an association exists between the variables)
Test statistic: X² = Σ (nij - Eij)² / Eij, summed over all cells.
P-value: area above X² in the chi-squared distribution with (r-1)(c-1) degrees of freedom.
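The expected counts and statistic can be sketched as follows (Python illustration; the AMS counts come from the R code on slide 65, and the result reproduces its X-squared = 30.12, df = 3):

```python
def chisq_stat(table):
    """Pearson chi-square for an r x c table given as a list of row lists."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    x2 = 0.0
    for i, row in enumerate(table):
        for j, nij in enumerate(row):
            eij = row_tot[i] * col_tot[j] / n   # expected under independence
            x2 += (nij - eij) ** 2 / eij
    df = (len(table) - 1) * (len(table[0]) - 1)
    return x2, df

# AMS data: rows = treatments, columns = (AMS, No AMS)
x2, df = chisq_stat([[40, 79], [14, 104], [43, 81], [18, 108]])
```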

61. Example - Acute Mountain Sickness in Hikers
Note that overall, (115/487)·100% = 23.61% of all hikers suffered from AMS. If we apply that percentage to the 119 who received placebo, we would expect (0.2361)(119) = 28.10 to have occurred in the first cell of the table. The full table of Eij is computed the same way for every cell; the observed counts (nij) appear alongside the expected counts in the R output on slide 65.

62. Example - Acute Mountain Sickness in Hikers
Computation of the chi-square statistic, summing (nij - Eij)²/Eij over all 8 cells, gives X² = 30.12.

63. Example - Acute Mountain Sickness in Hikers
H0: Incidence of AMS is independent of treatment condition
Ha: Incidence of AMS differs by treatment condition
Test statistic: X² = 30.12 with (4-1)(2-1) = 3 degrees of freedom
RR: X² ≥ χ²(.05, 3) = 7.815
P-value: P(χ²3 ≥ 30.12) = 1.302e-06

64. Likelihood Ratio Statistic
An alternative to Pearson's X²: G² = 2 Σ nij·ln(nij/Eij), which also has an approximate chi-squared distribution with (r-1)(c-1) degrees of freedom under H0.

65. R Code - chisq.test Function
## Set up a matrix of observed counts with 4 rows (trts), 2 columns (outcomes)
## Default is to enter data by columns (AMS first, then No AMS)
ams.obs <- matrix(c(40, 14, 43, 18, 79, 104, 81, 108), ncol=2)
## Use chisq.test function on matrix of observed counts
ams.X2 <- chisq.test(ams.obs, correct=F)
ams.X2
cbind(ams.obs, ams.X2$expected)  # Print n's and E's

> ams.X2

	Pearson's Chi-squared test

data:  ams.obs
X-squared = 30.12, df = 3, p-value = 1.302e-06

> cbind(ams.obs, ams.X2$expected)
     [,1] [,2]     [,3]     [,4]
[1,]   40   79 28.10062 90.89938
[2,]   14  104 27.86448 90.13552
[3,]   43   81 29.28131 94.71869
[4,]   18  108 29.75359 96.24641

66. Misuses of the Chi-Squared Test
Expected frequencies too small (at least 80% of expected counts should be at least 5; this is not necessary for the observed counts).
Dependent samples (the same individuals are in each row; see McNemar's test).
Can be used for nominal or ordinal variables, but more powerful methods exist when both variables are ordinal and a directional association is hypothesized.

67. Residual Analysis
Once dependence has been determined from a chi-square test, we are often interested in determining which cells contributed.
Residual: nij - Eij measures the difference between the observed and expected counts; positive implies more were observed than expected.
A residual's practical importance depends on the level of Eij.
Adjusted residual (computed for each cell): (nij - Eij)/√(Eij(1 - ni./n)(1 - n.j/n)).
Adjusted residuals above about 3 in absolute value give strong evidence against independence in that cell (these are like "z-statistics").
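The per-cell computation can be sketched as follows (Python illustration; the results reproduce the `ams.X2$stdres` values shown on slide 69):

```python
import math

def adjusted_residuals(table):
    """Adjusted residual (n_ij - E_ij)/sqrt(E_ij(1 - row prop)(1 - col prop))
    for each cell of an r x c table given as a list of row lists."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    out = []
    for i, row in enumerate(table):
        out.append([])
        for j, nij in enumerate(row):
            eij = row_tot[i] * col_tot[j] / n
            denom = math.sqrt(eij * (1 - row_tot[i] / n) * (1 - col_tot[j] / n))
            out[i].append((nij - eij) / denom)
    return out

res = adjusted_residuals([[40, 79], [14, 104], [43, 81], [18, 108]])
# res[0][0] is about 2.95, matching ams.X2$stdres from R
```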

68. Example - Acute Mountain Sickness in Hikers
Adjusted residuals are computed in the following table.
Row proportion for Placebo: 119/487 = 0.2444
Column proportion for AMS: 115/487 = 0.2361
All adjusted residuals are close to or above 3 in absolute value.
When Acetazolamide is taken, there are large negative residuals for AMS and large positive residuals for No AMS; the opposite holds when Acetazolamide is not taken.

69. R Code/Output
ams.obs <- matrix(c(40, 14, 43, 18, 79, 104, 81, 108), ncol=2)
## Use chisq.test function on matrix of observed counts
ams.X2 <- chisq.test(ams.obs, correct=F)
ams.X2
ams.X2$stdres

> ams.X2$stdres
          [,1]      [,2]
[1,]  2.954610 -2.954610
[2,] -3.452410  3.452410
[3,]  3.359862 -3.359862
[4,] -2.863550  2.863550

70. Ordinal Explanatory and Response Variables
Pearson's chi-square test can be used to test associations among ordinal variables, but more powerful methods exist.
When theory suggests the association is directional (positive or negative), measures exist to describe and test for these specific alternatives to independence:
Goodman and Kruskal's Gamma
Kendall's τb

71. Concordant and Discordant Pairs
Concordant pairs: pairs of individuals where one individual scores "higher" on both ordered variables than the other individual.
Discordant pairs: pairs of individuals where one individual scores "higher" on one ordered variable and the other individual scores "higher" on the other.
C = # concordant pairs   D = # discordant pairs
Under positive association, expect C > D.
Under negative association, expect C < D.
Under no association, expect C ≈ D.

72. Measures of Association
Goodman and Kruskal's Gamma: γ = (C - D)/(C + D)
Kendall's τb: (C - D) divided by the square root of the product of the numbers of pairs untied on each variable.
When there is no association between the ordinal variables, the population values of these measures are 0. Statistical software packages provide these tests and CIs.
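The pair counting behind gamma can be sketched as follows (Python illustration; the brute-force quadruple loop is fine for small tables, though software uses faster algorithms):

```python
def gamma_stat(table):
    """Goodman-Kruskal gamma = (C - D)/(C + D) for an ordered r x c table."""
    C = D = 0
    r, c = len(table), len(table[0])
    for i in range(r):
        for j in range(c):
            for k in range(i + 1, r):        # partner cells in rows below
                for l in range(c):
                    if l > j:
                        C += table[i][j] * table[k][l]   # concordant pairs
                    elif l < j:
                        D += table[i][j] * table[k][l]   # discordant pairs
    return (C - D) / (C + D)
```

As a sanity check, the table [[10, 5], [5, 10]] has C = 100 and D = 25, giving gamma = 0.6.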

73. Example - Language Lateralization and Handedness
Language lateralization (Strong Left, Moderate Left, Bilateral, Moderate Right, Strong Right)
Handedness (Strong Left, Moderate Left, Mixed, Moderate Right, Strong Right)
Concordant pairs: pairs of subjects where one scores higher on both language lateralization and handedness than the other.
Discordant pairs: pairs of subjects where one scores higher on language lateralization and the other scores higher on handedness.
Source: M. Somers, et al. (2015). "On the Relationship Between Degree of Hand Preference and Degree of Language Lateralization," Brain & Language, Vol. 144, pp. 10-15.

74. Example - Language Lateralization and Handedness
Concordant pairs: beginning in the bottom-left cell, each individual in a given cell is concordant with each individual in cells to the "northeast" of theirs.
Discordant pairs: beginning in the top-left cell, each individual in a given cell is discordant with each individual in cells to the "southeast" of theirs.

75. Example – Language Lateralization and Handedness

76. R Code
## Set up matrix of counts (rows in reverse order from EXCEL)
langhand.obs <- matrix(c(0,7,10,80,23, 0,3,8,33,6, 0,1,6,13,3, 1,4,10,34,9, 2,8,17,23,7), ncol=5)
install.packages("vcdExtra")
library(vcdExtra)
GKgamma(langhand.obs)

## "String out" matrix into n = 308 values of lang and hand
n.tot <- sum(langhand.obs)
lang <- rep(0, n.tot)
hand <- rep(0, n.tot)
n.count <- 0
for (i1 in 1:nrow(langhand.obs)) {
  for (i2 in 1:ncol(langhand.obs)) {
    if (langhand.obs[i1,i2] > 0) {  # skip empty cells: (a+1):a counts down in R
      lang[(n.count+1):(n.count+langhand.obs[i1,i2])] <- i1
      hand[(n.count+1):(n.count+langhand.obs[i1,i2])] <- i2
      n.count <- n.count + langhand.obs[i1,i2]
    }
  }
}
cor.test(lang, hand, method = "kendall")

77. R Output
> GKgamma(langhand.obs)
gamma        : -0.279 
std. error   : 0.072 
CI           : -0.42 -0.137

> cor.test(lang, hand, method = "kendall")

	Kendall's rank correlation tau

data:  lang and hand
z = -3.8779, p-value = 0.0001054
alternative hypothesis: true tau is not equal to 0
sample estimates:
       tau 
-0.1889155

78. Inter-Rater Agreement - Cohen's Kappa
Two raters rate the same items, typically on an ordinal scale.
Goal: Measure the strength of their agreement above "chance": κ = (po - pe)/(1 - pe), where po is the observed agreement proportion and pe is the agreement expected by chance.
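The unweighted version can be sketched as follows (Python illustration; the example table is the Siskel-Ebert matrix from slide 80, converted from R's column-major input to rows, and the result reproduces the `psych` estimate of 0.39):

```python
def cohen_kappa(table):
    """Unweighted Cohen's kappa = (p_o - p_e) / (1 - p_e) for a square
    agreement table (rows = rater 1's ratings, columns = rater 2's)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    p_o = sum(table[i][i] for i in range(len(table))) / n        # observed agreement
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Siskel-Ebert counts: rows = Siskel's three rating levels, columns = Ebert's
kappa = cohen_kappa([[24, 8, 13], [8, 13, 11], [10, 9, 64]])
```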

79. Agreement Among Movie Reviewers
Reviews by Gene Siskel and Roger Ebert (160 movies between April 1995 and September 1996).
Source: A. Agresti and L. Winner (1997). "Evaluating Agreement and Disagreement Among Movie Reviewers," Chance, Vol. 10, #2, pp. 10-14.

80. R Code/Output
siskel.ebert <- matrix(c(24,8,10, 8,13,9, 13,11,64), ncol=3)
install.packages("psych")
library(psych)
cohen.kappa(siskel.ebert)

> cohen.kappa(siskel.ebert)
Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels)

Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries 
                 lower estimate upper
unweighted kappa  0.27     0.39  0.51
weighted kappa    0.32     0.46  0.60

 Number of subjects = 160