My Math Forum Question About a 1-Tailed vs. 2-Tailed Test

 Basic Probability and Statistics Math Forum

 July 14th, 2018, 11:51 AM #1 Senior Member   Joined: Oct 2013 From: New York, USA Posts: 635 Thanks: 85 Question About a 1-Tailed vs. 2-Tailed Test

An article says: "The Z test for the difference between the two proportions—33.1% and 48.0%—produced a Z score of -2.4. This result is significant at the p < .02 level, with an exact p-value of 0.016. The p-value measures the probability of a Type I error, or the risk of obtaining a false positive when testing a hypothesis, given the two sample sizes and the difference in conviction rates between the two groups. In plain language, we are more than 98% certain (1 – p) that the observed difference in conviction rates between Groups A and B is a real difference that did not occur by chance."

Using the normal distribution table, the cumulative probability for a z score of -2.4 is .00820. Therefore the p-value of .016 came from doubling and rounding .00820, which is what is done in a 2-tailed test. However, since the authors were only looking for a Type I error, shouldn't it have been reported as 1-tailed with a p-value of .008 (to the three decimal places the authors used)? The conclusions would still be valid, because a 1-tailed test would have made it even less likely that the difference was caused by chance.
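The table lookup described above can be checked in a few lines. This is a minimal sketch using only Python's standard library; the z score of -2.4 is taken from the article, and the rest is standard normal-distribution arithmetic:

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = -2.4                       # z score reported in the article
one_tailed = phi(z)            # lower-tail area, P(Z <= -2.4)
two_tailed = 2 * phi(-abs(z))  # both tails, P(|Z| >= 2.4)

print(round(one_tailed, 5))    # 0.0082, the table value quoted above
print(round(two_tailed, 3))    # 0.016, matching the article's p-value
```

So the article's 0.016 is indeed the two-tailed value: twice the .00820 found in the table.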
 July 14th, 2018, 01:08 PM #2 Senior Member   Joined: Oct 2009 Posts: 733 Thanks: 247 "In plain language, we are more than 98% certain (1 – p) that the observed difference in conviction rates between Groups A and B is a real difference that did not occur by chance." This is just horribly wrong. The authors of the article clearly have no idea how any of this works.
 July 14th, 2018, 05:12 PM #3 Senior Member   Joined: Oct 2013 From: New York, USA Posts: 635 Thanks: 85 I don't remember much statistics. What's wrong with what you quoted? Also, can you tell me whether I was right that a 1-tailed test should have been used, given that the authors reported a p-value from a 2-tailed test?
 July 15th, 2018, 04:23 AM #4 Senior Member   Joined: Oct 2009 Posts: 733 Thanks: 247 The two-tailed test is indeed standard in this situation, and it is probably also the safest thing to do since the p-value is higher. I don't really understand your point that the authors were "looking for a Type I error". In fact, we are always faced with both Type I and Type II errors, and we wish to avoid both. Let's look at what probably the best statistician alive today has to say: One-tailed or two-tailed? - Statistical Modeling, Causal Inference, and Social Science

To summarize his ideas:
1) Both one-tailed and two-tailed tests are standard methodology.
2) P-values are a flawed methodology anyway and should not be trusted or relied on to the degree that they are in research today.
3) The very interpretation of the p-value is not understood by most researchers, who just read it as "the chance my hypothesis is wrong, or that my result was caused by chance".
4) Relying on p-values and hypothesis testing to make statistical decisions, like deciding whether a treatment is effective, is a habit that really, really should die.
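The interpretation point in (3) can be illustrated with a simulation. Note the assumptions: the article does not report sample sizes, so n = 125 per group is hypothetical, and the pooled conviction rate is computed from the two reported proportions. Under the null hypothesis (both groups convict at the same rate), the fraction of simulated experiments producing |z| ≥ 2.4 is what the two-tailed p-value actually estimates; it is not "the chance the hypothesis is wrong":

```python
import math
import random

random.seed(1)

# Hypothetical sample size -- the article does not report one.
n = 125
p0 = (0.331 + 0.48) / 2  # pooled conviction rate under the null

def z_stat(x1, x2, n):
    """Two-proportion z statistic with a pooled variance estimate."""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    return (p1 - p2) / se

trials = 20_000
extreme = 0
for _ in range(trials):
    # Both groups convict at the SAME rate: the null hypothesis is true.
    x1 = sum(random.random() < p0 for _ in range(n))
    x2 = sum(random.random() < p0 for _ in range(n))
    if abs(z_stat(x1, x2, n)) >= 2.4:
        extreme += 1

# The p-value estimates this: how often chance alone produces a
# difference at least as extreme as the one observed.
print(extreme / trials)
```

The printed fraction comes out close to the article's 0.016, which is the correct reading of the p-value: a statement about what pure chance produces, not a 98% certainty that the difference is real.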
July 15th, 2018, 04:44 AM #5 Senior Member   Joined: Oct 2013 From: New York, USA Posts: 635 Thanks: 85

Quote:
 Originally Posted by Micrm@ss I don't really understand your point that the authors were "looking for a type I error". In fact, we are always faced with both type I and type II errors, and we wish to avoid them.
You taught me that I said that wrong, because both errors are possible for each tail. What I was trying to say is that if you expect one value to be greater than the other (which the authors did), you use a 1-tailed test, and if you don't know which value will be greater, you use a 2-tailed test.

I'm writing something, not for a class, that I don't know how many people will read, and I don't expect readers to know much statistics because it isn't necessary to understand my thesis. Can you evaluate these notes I took from the source:

"The authors' hypothesis was that jurors who are given cautionary instructions use evidence for impermissible purposes, and thet Group B would have the higher conviction rate (Page 13). As stated in the Introduction, Group A had a conviction rate of 33.1% and Group B had a conviction rate of 48% (Page 13). A statistical Z test found a Z score of -2.4, meaning 2.4 standard deviations below the mean. The p-value for that Z score is 0.016, meaning that there was a 0.016 or 1.6 percent chance that the difference in conviction rate was random and not because the two groups thought differently. If the probability that something happened due to random chance is less than 5 percent, the results are statistically significant. Jurors were also told to rate the certainty of their verdict from 1 to 10. Group A averaged 6.4 and Group B averaged 7.0, which was a difference that is significant at p < .04, meaning 4 percent."

I don't want to be unable to use the source because of statistical errors in it. If you can find anything in general about what to do when your research turns up mathematically flawed sources, that would be great.
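One sanity check on the source is to reconstruct its reported z score from the two proportions. The study's sample sizes are not quoted in this thread, so the n below is a hypothetical value chosen to match the reported result; this is a sketch of the standard pooled two-proportion z test, not the authors' actual computation:

```python
import math

# Proportions from the article; n = 125 per group is hypothetical,
# chosen because it reproduces the reported z score of -2.4.
p1, p2 = 0.331, 0.48
n = 125

pooled = (p1 + p2) / 2  # with equal group sizes the pooled rate is the mean
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se

print(round(z, 1))  # -2.4, consistent with the article's reported Z score
```

If the real sample sizes were known, the same formula would either confirm or contradict the article's Z score exactly, which is a reasonable first step when vetting a mathematically questionable source.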

Last edited by EvanJ; July 15th, 2018 at 04:51 AM.



