A study of notable earthquakes over the past century found that they have an average magnitude of 7.13, with a standard deviation of 0.73 (although earthquake magnitudes are not normally distributed).
Scientists would like to do more detailed research on a sample of these notable earthquakes, and want to make sure the sample they take is large enough.
Given different sample sizes, what is the probability that the sample will have a mean magnitude less than 7? (You can explore this by sliding the value of n higher or lower on the graph.)
As you increase the sample size, n, what happens to the probability of the sample mean being too small (less than 7)?
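If the sample size is large enough for the Central Limit Theorem to apply, the sample mean is approximately normal with mean 7.13 and standard error 0.73/sqrt(n), so P(sample mean < 7) = P(Z < (7 - 7.13)/(0.73/sqrt(n))). Here is a minimal Python sketch of that calculation (the sample sizes 5, 10, 30, and 100 are arbitrary illustrative choices, standing in for the slider on the graph):

from math import sqrt
from scipy.stats import norm

mu = 7.13      # population mean magnitude (from the study above)
sigma = 0.73   # population standard deviation
cutoff = 7.0   # we want P(sample mean < 7)

for n in (5, 10, 30, 100):
    se = sigma / sqrt(n)    # standard error of the sample mean
    z = (cutoff - mu) / se  # standardize the cutoff
    p = norm.cdf(z)         # CLT: sample mean is approx. Normal(mu, se)
    print(f"n = {n:3d}: P(sample mean < 7) = {p:.3f}")

Running this shows the probability shrinking as n grows (roughly 0.35 at n = 5 down to about 0.04 at n = 100): a larger sample makes the sample mean cluster more tightly around 7.13, so it is less likely to fall below 7. Keep in mind that, since the magnitudes themselves are not normally distributed, this normal approximation is only reliable for moderate-to-large n.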