===Concepts===
  
| H<sub>1</sub> | Your hypothesis: This is what you are predicting. E.g. "there is a difference between conditions A and B on measure C."  |
-H0 | "Null-hypothesis": This is the inverse of H1, "there will be no difference", \\ or more precisely, "your manipulation of the independent variable(s) does not cause anything" |+H<sub>0</sub> | "Null-hypothesis": This is the inverse of H1, "there will be no difference", \\ or more precisely, "your manipulation of the independent variable(s) does not cause anything" |
| Probability  | Chance may always affect the outcome of any measurement, to a greater or lesser extent. Doing lots and lots of measurements can help us be more certain that our results are not a coincidence. That is expensive and takes time. \\ We can save time by cutting back on the number of measurements if we understand the effect that taking fewer measurements has on the probability of getting things wrong.  |
| Statistical test | Helps us estimate the likelihood of our being wrong.  |
| One- and Two-Tailed Tests  | Scenario: You measure something under two conditions and expect there to be a difference between the measures. If you have strong suspicions that one measure will be higher than the other, you use a one-tailed test. \\ As you know, the results of any experiment could be a coincidence; a statistical test helps us figure out what the probability of this is. \\ If we have a pre-determined idea of which direction a certain difference will be in, our hypothesis is **stronger** than if it simply says "there will be a difference". \\ This needs to be taken into account in the statistical test when figuring out whether the result might have been a coincidence. For a hypothesis that specifies the direction of a difference, use a one-tailed test; otherwise use a two-tailed test (see the sketch below).  |
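The following Python sketch is not part of the original notes; the measurement values are invented for illustration. It shows how the one-/two-tailed choice changes the p-value, using SciPy's ''ttest_ind'':

<code python>
# Hypothetical data: condition B is expected to score higher than condition A.
from scipy import stats

condition_a = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
condition_b = [5.9, 6.1, 5.7, 6.3, 5.8, 6.0]

# Two-tailed: H1 only says "there is a difference between A and B".
t_two, p_two = stats.ttest_ind(condition_a, condition_b)

# One-tailed: H1 specifies the direction, e.g. "B is greater than A".
t_one, p_one = stats.ttest_ind(condition_b, condition_a, alternative='greater')

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
# Because the direction was predicted in advance (and the data go that way),
# the one-tailed p-value is half the two-tailed one.
</code>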
  
\\
| Sample  | 20 individual fish are tested.  |
| Variables  | Dependent: Health.  Independent: Oceanic area (N,S).  Dependent variable is measured with the "famous Health Probe" |
| Subject pool  | N=20; random sample. Specify by what method the randomness is generated and maintained.  |
| Gathering data  | Repeated measures: 20 measurements for indexes of health: \\ North:97,99,88,77,99,20,87,88,89,65; \\ South:66,48, ....   |
| **What we have so far**  | Basically, we have a bunch of measurements which came from two different parts of the country. They will probably have different means, medians, etc. -- it is unlikely that they will be equal. What we would like to find out is whether this difference is a true representation of the actual fish populations in the two locations (see the sketch below).  |
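As a rough sketch of this step (Python/NumPy assumed, not part of the original notes; only the first two South values are listed above, so the rest are left as a placeholder), we can compute descriptive statistics for each sample:

<code python>
# North sample from the table above; only the first two South values are given,
# so the rest of that list is left to be filled in (placeholder, not real data).
import numpy as np

north = np.array([97, 99, 88, 77, 99, 20, 87, 88, 89, 65])
south = np.array([66, 48])   # ...append the remaining South measurements here

print("North mean:", north.mean(), "median:", np.median(north))
print("South mean:", south.mean(), "median:", np.median(south))
# The two samples will almost certainly differ; the question is whether the
# difference reflects the real populations or is just a coincidence.
</code>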
  
  
=== t-tests ===
  
| A.k.a.  | "Student's t-test"  |
| One-sample and two-sample t-test  | In the fish example above we have two separate sample populations, hence we use a two-sample t-test (see the sketch below).  |
| One-sample alternative names  | Matched-sample t-test, paired t-test, repeated-measures t-test (a paired test is computed as a one-sample t-test on the within-pair differences).  |
| More information  | http://biology.nebrwesleyan.edu/courses/labs/biology_of_animals/t-test_flash.html  |
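A minimal sketch of the two-sample t-test for the fish example, assuming Python with SciPy (not part of the original notes; the ''south'' list is a placeholder, since only its first two values are given above):

<code python>
# Two-sample (independent) t-test on the fish data. The 'south' list is a
# placeholder: only its first two values appear in the notes above.
from scipy import stats

north = [97, 99, 88, 77, 99, 20, 87, 88, 89, 65]
south = [66, 48]   # fill in the remaining South measurements

t, p = stats.ttest_ind(north, south)
print(f"t = {t:.3f}, p = {p:.4f}")
# A small p-value (e.g. below 0.05) would suggest the North/South difference
# is unlikely to be a coincidence, i.e. grounds to reject H0.
</code>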
  
\\
\\
\\

=== Linear Models: Regression Analysis ===

| Purpose of Regression Analysis  | Discover a function that allows prediction of the values of the dependent variable y based on values of the independent variable x  |
| Scatterplot  | Shows the distribution of y-values for the given (sampled) x-values  |
| First-order linear function  | Y = a + bX \\ Provides us with a single, straight line that gets as close to all the points in the scatterplot as possible (given that it is straight)  |
| Residual  | For each (x, y) point, the vertical distance from the point to the line  |
| How do we find the line?  | Least Squares Criterion: We select the linear function that yields the smallest sum of squared residuals (see the sketch below)  |
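A minimal least-squares sketch, assuming Python with NumPy (not part of the original notes); the x/y values are invented for illustration and ''np.polyfit'' stands in for the least-squares computation:

<code python>
# Least-squares fit of y = a + b*x to invented scatterplot data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])   # hypothetical (x, y) points

b, a = np.polyfit(x, y, 1)      # slope b and intercept a of the best line
y_pred = a + b * x              # predicted Y' for each x
residuals = y - y_pred          # distance of each point from the line

print(f"y = {a:.3f} + {b:.3f} * x")
print("sum of squared residuals:", np.sum(residuals ** 2))
# The fit is chosen so that this sum of squared residuals is as small as
# possible -- the Least Squares Criterion described above.
</code>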

\\
\\
\\
\\

===Linear Correlation===

| Given a linear function  | Given an X-score, the predicted Y-score is given by the line. However, in reality the Y-score rarely falls straight on the line.  |
| Need estimate of error  | We must estimate how closely the real Ys (Y) follow the predicted Ys (Y').  |
| The measure most commonly used  | Standard Error of Estimate  |
| Formula for Std. Err. of Est.  | http://cs.gmu.edu/cne/modules/dau/stat/regression/multregsn/mreg_2_frm.html  |
| What it tells us  | How far, on average, the real Ys fall from the line (see the sketch below)  |
| The smaller the Std. Err. of Est. is ...  | ... the better a predictor the line is  |
| Main limitation of linear models  | Assumes -- a priori! -- a linear relationship  |
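A sketch of the Standard Error of Estimate for the line fitted above, assuming Python with NumPy and the common form that divides by N - 2 (not part of the original notes; the formula linked above may use a slightly different form):

<code python>
# Standard Error of Estimate for the line fitted to the same invented data,
# using the common N - 2 denominator.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

b, a = np.polyfit(x, y, 1)
y_pred = a + b * x                               # predicted Ys (Y')

see = np.sqrt(np.sum((y - y_pred) ** 2) / (len(y) - 2))
print("standard error of estimate:", see)
# Roughly: how far, on average, the real Ys fall from the fitted line.
# The smaller this value, the better the line predicts.
</code>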