Random sample = "the easiest and best way to get an unbiased sample from a population." If the sample is representative of the overall population, then we can generalize our results.
Variable = aspect of person/subject that can change. Independent variable = the variable that is systematically manipulated. Dependent variable = the outcome variable that is measured after the independent variable has been manipulated. Change in the independent variable is expected to cause change in the dependent variable.
Internal validity = the extent to which the effect of an independent variable on a dependent variable has been correctly interpreted. External validity = the extent to which a research finding can be generalized to other situations.
Quasi-experiment = like an experiment except that participants are not randomly assigned to conditions on one or more variables.
Experiment vs. survey. If you manipulate independent variables and measure dependent variables, you are doing an experiment; if you only measure variables (no manipulation), you are doing a survey.
Things that can compromise an experiment's internal validity, leading researchers to incorrectly believe that the IV caused the DV:
- Maturation effects -- sometimes change occurs simply because time has passed, so something other than the independent variable has caused the change. Example #1: practice effects -- change occurs because the participant gained practice from doing the pretest. Example #2: fatigue effects -- the participant has become bored or tired by the time they reach the posttest. Example #3: history effects -- any change in the participant's circumstances over the course of the study (e.g., a change in the economy or political climate).
- Testing effects -- the act of being measured can itself change participants' scores, so taking a pretest may alter performance on the posttest independently of the independent variable.
* * * * *
Central tendency: mean, median, mode. These are computed in the program SPSS.
Mean = the sum of all the scores divided by the number of scores. Median = the middle score when the scores are arranged in order (with an even number of scores, the average of the two middle scores).
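Quick illustration (a rough sketch in Python rather than SPSS; the data set here is made up):

    # Central tendency for a small made-up data set, using Python's
    # built-in statistics module (shown only as an illustration).
    import statistics

    scores = [1, 2, 2, 3, 4, 5, 6]

    mean = sum(scores) / len(scores)      # sum of all scores / number of scores
    median = statistics.median(scores)    # middle score when the scores are ordered
    mode = statistics.mode(scores)        # most frequently occurring score

    print(mean, median, mode)             # 3.2857..., 3, 2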
Must generally use two summary statistics to characterize a data set: a measure of central tendency and a measure of dispersion. One way to measure dispersion is the range of scores in the data set. The range is the difference between the lowest and highest scores; e.g., if the data set is 1, 2, 3, 4, 5, 6, 7, then the range is 6 (7 - 1). But the range can be misleading because it depends only on the two most extreme scores; e.g., if we measured height in two towns, both towns could have the same range even though the heights in one town are far more spread out.
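For example (a made-up sketch, continuing in Python):

    # Two hypothetical towns with the same range of heights (in cm) but very
    # different spreads -- the range only looks at the two most extreme scores.
    town_a = [150, 152, 153, 155, 156, 158, 190]   # tightly clustered plus one outlier
    town_b = [150, 157, 164, 170, 176, 183, 190]   # spread throughout

    print(max(town_a) - min(town_a))   # 40
    print(max(town_b) - min(town_b))   # 40 -- same range, very different dispersion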
So in order to quantify dispersion more accurately, we need to (1) compute each score's deviation from the mean, (2) square all the deviations, (3) sum the squared deviations, (4) divide the sum by the number of scores. This gives us the variance. The variance is the average squared distance of the scores from the mean. To convert the variance back to the original unit of measurement, we take the square root of the variance, and this gives us the standard deviation.
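Worked through step by step (same made-up Python sketch, using the data set 1-7 from above):

    # Variance and standard deviation, following the four steps above.
    scores = [1, 2, 3, 4, 5, 6, 7]
    n = len(scores)

    mean = sum(scores) / n                       # mean = 4.0
    deviations = [x - mean for x in scores]      # (1) each score's deviation from the mean
    squared = [d ** 2 for d in deviations]       # (2) square the deviations
    variance = sum(squared) / n                  # (3) sum them, (4) divide by n -> 4.0
    sd = variance ** 0.5                         # square root of the variance -> 2.0

    print(variance, sd)

(This divides by n, as in the steps above; statistics programs often divide by n - 1 when estimating a population's variance from a sample.)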
Normal distribution = bell curve that we see in such things as people's IQ scores, height, pulse rate; the distribution is symmetrical about its mean, so its mean, median, and mode are all equal.
Most of the scores are close to the mean >> small SD.
* * * * *
How do we know when the difference between the intervention and control groups is meaningful or simply due to chance? There will always be some difference between the two groups' means just by chance, so we need a way to decide whether the observed difference is larger than chance alone would produce.
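One common way to decide is a significance test such as an independent-samples t-test. A rough sketch (made-up scores; scipy is just one tool that does this, the notes themselves use SPSS):

    # Independent-samples t-test comparing the two groups' means.
    from scipy import stats

    intervention = [105, 110, 98, 112, 107, 103, 115, 109]   # made-up scores
    control      = [100, 97, 102, 95, 99, 104, 96, 101]      # made-up scores

    t, p = stats.ttest_ind(intervention, control)
    print(t, p)   # a small p-value (e.g., < .05) suggests the difference
                  # is unlikely to be due to chance alone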
IQ scores are normally distributed, with a mean of 100 and an SD of 15.
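Because the distribution is normal, we can work out what proportion of people fall in any range of IQ scores. A small sketch (Python's statistics.NormalDist, shown only as an illustration):

    # Proportion of IQ scores within one SD of the mean (85 to 115).
    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)
    within_one_sd = iq.cdf(115) - iq.cdf(85)
    print(within_one_sd)   # about 0.68 -- roughly 68% of scores lie within 1 SD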