Chapter 2. The Normal and t-Distributions: An Introduction to Business Statistics with Interactive Spreadsheets, First Canadian Edition (2023)


The normal distribution is simply a distribution with a particular shape. It is called normal because so many things have this same shape. The normal distribution is the bell-shaped distribution that describes how so many natural, machine-made, and human performance outcomes are distributed. If you have ever taken a class where you were "graded on a bell curve", the instructor was fitting the class's grades to a normal distribution; that is not a bad practice if the class is large and the tests are objective, since human performance in such situations is normally distributed. This chapter will discuss the normal distribution and then move on to a common sampling distribution, the t-distribution. A t-distribution can be formed by taking many samples (strictly, all possible samples) of the same size from a normal population. For each sample, the same statistic, called the t-statistic, which we will learn to calculate later, is computed. The relative frequency distribution of these t-statistics is the t-distribution. It turns out that t-statistics can be computed in many different ways, from samples taken in many different situations, and still have the same relative frequency distribution. This makes the t-distribution useful for making many different inferences, and it is one of the most important links between samples and populations used by statisticians. Between the discussion of the normal distribution and the t-distribution, we will cover the central limit theorem. The t-distribution and the central limit theorem give us knowledge about the relationship between sample means and population means that allows us to make inferences about the population mean.

The way the t-distribution is used to make inferences about populations from samples is the model for many of the inferences that statisticians make. As you learn to make inferences like a statistician, try to understand the general model of inference-making as well as the specific cases presented. In short, the general model of inference-making is to use knowledge of a sampling distribution, like the t-distribution, as a guide to the probable limits of where the sample lies relative to the population. Remember that the sample you are using to make an inference about the population is only one of many possible samples from that population. Samples vary: some are very representative of the population, others fairly representative, and some not very representative at all. By assuming that the sample is at least fairly representative of the population, the sampling distribution can be used as a link between the sample and the population, allowing you to make an inference about some characteristic of the population.

These ideas will be developed later. The immediate goal of this chapter is to introduce you to the normal distribution, the central limit theorem, and the t-distribution.

Normal distributions are bell-shaped and symmetric. The mean, median, and mode are all equal. Most members of a normally distributed population have values close to the mean: in a normal population, about 95% of members (much better than Chebyshev's 75%) fall within 2σ of the mean.

Statisticians have found that many things are normally distributed. In nature, the weights, lengths, and girths of all kinds of plants and animals are normally distributed. In manufacturing, the diameters, weights, strengths, and many other characteristics of items made by people and machines are normally distributed. In human performance, scores on objective tests, the outcomes of many athletic exercises, and college students' grade point averages are normally distributed. The normal distribution really is a normal occurrence.

If you are skeptical, you will wonder how GPAs and the exact diameters of holes drilled by a machine can have the same distribution; they are not even measured in the same units. In order to see that so many things have the same normal shape, all must be measured in the same units (or have the units eliminated): they must all be standardized. Statisticians standardize many measures by using the standard deviation. All normal distributions have the same shape because they all have the same relative frequency distribution when the values of their members are measured in standard deviations above or below the mean.

Using the metric measurements standard in Canada, if the weights of domestic dogs are normally distributed with a mean of 10.8 kg and a standard deviation of 2.3 kg, and daily sales at The First Brew Expresso Cafe are normally distributed with μ = $341.46 and σ = $53.21, then the same proportion of domestic dogs weigh between 8.5 kg (μ - 1σ) and 10.8 kg (μ) as the proportion of First Brew's daily sales that lie between μ - 1σ ($288.25) and μ ($341.46). Any normally distributed population will have the same proportion of its members between the mean and one standard deviation below the mean. Converting the values of the members of any normal population so that each is expressed in standard deviations from the mean makes the populations all the same. This process is known as standardization, and it makes all normal populations have the same location and shape.

This standardization is accomplished by computing a z-score for every member of the normal population. The z-score is found by:

[latex]z = (x - \mu)/\sigma[/latex]

This converts the original value, in its original units, into a standardized value measured in units of standard deviations from the mean. Look at the formula. The numerator is simply the difference between the value of this member of the population, x, and the mean of the population, μ. It can be measured in centimetres, or points, or whatever. The denominator is the standard deviation of the population, σ, and it is also measured in centimetres, or points, or whatever. If the numerator is 15 cm and the standard deviation is 10 cm, then z is 1.5. This particular member of the population, the one with a diameter 15 cm greater than the mean diameter of the population, has a z-score of 1.5 because its value is 1.5 standard deviations greater than the mean. Because the mean of the x's is μ, the mean of the z-scores is zero.
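If you want to try the calculation outside the book's spreadsheets, here is a minimal Python sketch of the same standardization; the mean and standard deviation are those of the dog-weight example above, but the individual weights are made up for illustration.

    import numpy as np

    mu = 10.8     # population mean weight of dogs (kg), from the example
    sigma = 2.3   # population standard deviation (kg), from the example

    # Hypothetical individual dog weights (kg); these particular values are invented.
    weights = np.array([8.5, 10.8, 13.1, 9.6, 12.0])

    # z = (x - mu) / sigma: each weight re-expressed in standard deviations from the mean
    z_scores = (weights - mu) / sigma
    print(z_scores)  # roughly [-1.0, 0.0, 1.0, -0.52, 0.52]; a 13.1 kg dog is 1 sd above the mean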

We could convert the value of each member of any normal population into a z-score. If we did that for any normal population and arranged those z-scores into a relative frequency distribution, the distributions would all be the same. Each of these standardized normal distributions would have a mean of zero and the same shape. There are many tables that show what proportion of a normal population has a z-score less than a certain value. Because the standard normal distribution is symmetric with a mean of zero, the same proportion of the population that is less than some positive z is also greater than that same negative z. A few values from a standard normal table appear in Table 2.1.

Table 2.1 Standard normal table
Proportion below | 0.75 | 0.90 | 0.95 | 0.975 | 0.99 | 0.995
z-score | 0.674 | 1.282 | 1.645 | 1.960 | 2.326 | 2.576
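If Python with scipy happens to be available, the entries of Table 2.1 can be reproduced with the standard normal quantile function; this is an illustrative sketch, not part of the book's Excel templates.

    from scipy.stats import norm

    # For each cumulative proportion, find the z-score below which that
    # proportion of a standard normal population falls.
    for p in [0.75, 0.90, 0.95, 0.975, 0.99, 0.995]:
        print(p, round(norm.ppf(p), 3))
    # 0.75 -> 0.674, 0.90 -> 1.282, 0.95 -> 1.645, 0.975 -> 1.96, 0.99 -> 2.326, 0.995 -> 2.576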

You can also use the interactive cumulative standard normal distribution shown in the Excel template in Figure 2.1. The top chart computes the z-score for any cumulative probability entered in the yellow cell. The bottom chart computes the cumulative probability for any z-score entered in the yellow cell. In either case, the corresponding standard normal distribution plot, with the cumulative probability shaded, is shown in yellow or purple.



Figure 2.1 Interactive Excel template for cumulative standard normal distributions; see Appendix 2.

The production manager of a brewery in Delta, BC, asks one of his technicians, Kevin, "How much does a 24-bottle package of beer usually weigh?" Kevin asks the quality control staff what they know about the weights of these packages, and they tell him that the mean weight is 16.32 kg with a standard deviation of 0.87 kg. Kevin decides that the production manager probably wants more than the mean weight, and he chooses to give his boss the range of weights within which 95% of the 24-bottle packages fall. Kevin sees that leaving 2.5% (0.025) in the left tail and 2.5% (0.025) in the right tail leaves 95% (0.95) in the middle. He assumes that the package weights are normally distributed, a reasonable assumption for a machine-made product, and looking at a standard normal table he sees that 0.975 of the members of a normal population have a z-score less than +1.96, and that 0.975 have a z-score greater than -1.96, so 0.95 have a z-score between ±1.96.

Now that he knows that 95% of the 24-bottle packages of beer have weights with z-scores between ±1.96, Kevin can translate those z's back into kilograms. By solving the z-score equation for both +1.96 and -1.96, he can find the limits of the range within which 95% of the package weights fall:

[latex]1.96=(x-16.32)/0.87[/latex]

Solving for x, Kevin finds that the upper limit is 18.03 kg. He then solves for z = -1.96:

[latex]-1.96=(x-16.32)/0.87[/latex]

He finds that the lower limit is 14.61 kg. Now he can go to his manager and report, "95% of the 24-bottle packages of beer weigh between 14.61 kg and 18.03 kg."
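The same two-tailed range can be checked in a few lines of Python with scipy; this is only an illustrative sketch using the numbers of the example, not part of the original workflow.

    from scipy.stats import norm

    mu, sigma = 16.32, 0.87        # known population mean and standard deviation (kg)

    z = norm.ppf(0.975)            # 1.96: leaves 2.5% in each tail
    lower = mu - z * sigma         # about 14.61 kg
    upper = mu + z * sigma         # about 18.03 kg
    print(round(lower, 2), round(upper, 2))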

If this were a statistics course for mathematics majors, you would probably have to prove the central limit theorem. Because this book is designed for business and other non-math students, you only need to learn what the theorem says and why it is important. To understand what it says, it helps to understand why it works, so an explanation follows.

The theorem addresses sampling distributions and the relationship between the location and shape of a population and the location and shape of a sampling distribution generated from that population. In particular, the central limit theorem explains the relationship between a population and the distribution of sample means found by taking all possible samples of a given size from the original population, finding the mean of each sample, and organizing them into a distribution.

The sampling distribution of means is a simple concept. Start with a population of x's. Take a sample of n of those x's and find the mean of the sample; this gives you one x̄. Then take another sample of the same size, n, and find its x̄. Do this over and over until you have chosen all possible samples of size n. You will have generated a new population, a population of x̄'s. Arrange this population into a distribution, and you have the sampling distribution of means. You could find the sampling distribution of medians, or variances, or some other sample statistic by collecting all possible samples of some size, n, finding the median, variance, or other statistic for each sample, and arranging them into a distribution.
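A sampling distribution can be approximated by brute force. The sketch below (illustrative Python with a made-up population) draws many samples rather than literally all possible samples, finds each sample's mean, and collects those means.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 9                 # sample size
    num_samples = 20_000  # many samples stand in for "all possible samples"

    # A made-up population of x's: normal with mean 50 and standard deviation 10.
    samples = rng.normal(loc=50, scale=10, size=(num_samples, n))

    # One x-bar per sample: together these approximate the sampling distribution of means.
    x_bars = samples.mean(axis=1)
    print(x_bars[:5], x_bars.mean())   # the mean of the x-bars will be close to 50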

The central limit theorem deals with the sampling distribution of means. It links the sampling distribution of x̄'s with the original distribution of x's. It tells us that:

(1) The mean of the sample means equals the mean of the original population (μx̄ = μ). This makes x̄ an unbiased estimator of μ.

(2) The distribution of x̄'s will be bell-shaped, regardless of the shape of the original distribution of the x's.


This makes sense if you stop and think about it. It means that only a small fraction of samples have means that are far from the population mean. For a sample to have a mean far from μx̄, almost all of its members would have to come from the right tail of the distribution of x's, or almost all from the left tail. There are many more samples with most of their members near the middle of the distribution, or with some members from the right tail and some from the left, and all of those samples have an x̄ close to μx̄.

(3a) The larger the samples, the closer the sampling distribution will be to normal, and

(3b) if the distribution of the x's is normal, so is the distribution of the x̄'s.

These come from the same basic reasoning as (2), but would require a formal proof, since the normal distribution is a mathematical concept. It is not too hard to see that larger samples will generate a "more bell-shaped" distribution of sample means than smaller samples, and that is what makes (3a) work.

(4) The variance of the x̄'s is equal to the variance of the x's divided by the sample size, or:

[latex]\sigma^2_{\bar{x}} = \sigma^2/n[/latex]

therefore the standard deviation of the sampling distribution of means is:

[latex]\sigma_{\bar{x}} = \sigma/\sqrt{n}[/latex]

While it is difficult to see why this exact formula holds without going through a formal proof, the basic idea that larger samples yield sampling distributions with smaller standard deviations can be understood intuitively. If [latex]\sigma_{\bar{x}} = \sigma/\sqrt{n}[/latex], then [latex]\sigma_{\bar{x}} < \sigma[/latex]. Furthermore, as the sample size n grows, σ²x̄ gets smaller, because it becomes more unusual to get a sample with an x̄ far from μ as n gets larger. The standard deviation of the sampling distribution includes an (x̄ - μ) for each x̄, but remember that there are not many x̄'s as far from μ as there are x's far from μ, and as n grows there are fewer and fewer samples with an x̄ far from μ. That means there are not many (x̄ - μ)'s as large as a good number of the (x - μ)'s are. By the time everything is squared and averaged, the mean (x̄ - μ)² is much smaller than the mean (x - μ)², so [latex]\sigma_{\bar{x}}[/latex] is smaller than σ. If the mean volume of soft drink in a population of 355 mL cans is 360 mL with a variance of 5 (and a standard deviation of 2.236), then the sampling distribution of means of samples of nine cans has a mean of 360 mL and a variance of 5/9 = 0.556 (and a standard deviation of 2.236/3 = 0.745).
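The soft drink numbers can be checked by simulation. This sketch (illustrative Python, assuming a normal population for convenience) shows that the means of samples of nine cans have a variance close to 5/9.

    import numpy as np

    rng = np.random.default_rng(1)

    mu, var, n = 360.0, 5.0, 9   # mean 360 mL, variance 5, samples of nine cans

    # Simulate many samples of nine cans each.
    samples = rng.normal(loc=mu, scale=np.sqrt(var), size=(50_000, n))
    x_bars = samples.mean(axis=1)

    print(round(x_bars.mean(), 2))  # close to 360
    print(round(x_bars.var(), 3))   # close to 5/9 = 0.556
    print(round(x_bars.std(), 3))   # close to 2.236/3 = 0.745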

You can also use the interactive Excel template in Figure 2.2 to illustrate the central limit theorem. Simply double-click on the yellow cell in the worksheet labelled CLT(n=5) or the yellow cell in the worksheet labelled CLT(n=15), and then press Enter. Make sure you do not change the formula in these yellow cells. This automatically takes a new sample from the population distribution and recreates the associated sampling distribution of x̄. You can repeat the process by double-clicking the yellow cell again to see that, regardless of the shape of the population distribution, the sampling distribution of x̄ is approximately normal. You will also notice that the mean of the population and the mean of the sampling distribution of x̄ are always the same.


Figure 2.2 Interactive Excel template illustrating the central limit theorem; see Appendix 2.

Following the same line of reasoning, you can see from the template in Figure 2.2 that the sampling error decreases as you move from the resampling process with n=5 to the one with n=15. If you switch the sample size from 5 to 15 (from the CLT(n=5) sheet to the CLT(n=15) sheet), you will also notice that as the sample size increases, the variance and standard deviation of the sampling distribution decrease. Remember that as the sample size increases, samples with an x̄ far from μ become increasingly rare, so the mean (x̄ - μ)² becomes smaller. The mean (x̄ - μ)² is the variance.

Back to the soft drink example. If larger samples of cans are collected, say samples of 16, even fewer of the samples will have means far from the mean of 360 mL. The variance of the sampling distribution when n = 16 will therefore be smaller. Using what you have just learned, the variance will be only 5/16 = 0.3125 (and the standard deviation will be 2.236/4 = 0.559). The formula matches what is happening logically: as the samples get larger, the probability of getting a sample with a mean far from the population mean gets smaller, so the variance (and standard deviation) of the sampling distribution of means gets smaller. In the formula, the population variance is divided by the sample size to get the variance of the sampling distribution; because larger samples mean dividing by a larger number, the variance decreases as the sample size increases. If you use the sample mean to make an inference about the population mean, a larger sample makes it more likely that your inference is very good, because more of the possible sample means are very close to the population mean. There is obviously a trade-off here. The reason you wanted to use statistics in the first place was to avoid the trouble and expense of collecting lots of data, but the more data you collect, the more accurate your statistics are likely to be.


The central limit theorem tells us about the relationship between the sampling distribution of means and the original population. Notice that if we want to know the variance of the sampling distribution, we need to know the variance of the original population. You do not need to know the variance of the sampling distribution to make a point estimate of the mean, but other, more elaborate estimation techniques require that you either know or estimate the variance of the population. If you reflect for a moment, you will realize that it would be strange to know the variance of the population when you do not know its mean. Since you need to know the population mean to calculate the population variance and standard deviation, the only time you would know the population variance without the population mean is in textbook examples and problems. The usual case is when you are estimating both the population mean and the population variance. Statisticians have figured out how to handle these cases by using the sample variance as an estimate of the population variance (and using that to estimate the variance of the sampling distribution). Recall that s² is an unbiased estimator of σ². Also recall that the variance of the sampling distribution of means is related to the variance of the original population by the equation:

[latex]\sigma^2_{\bar{x}} = \sigma^2/n[/latex]

Therefore, the estimated standard deviation of a sampling distribution of means is:

[latex]\text{estimated } \sigma_{\bar{x}} = s/\sqrt{n}[/latex]
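In practice this is just two steps: compute s with the sample (n-1) formula, then divide by the square root of the sample size. A small Python sketch with invented data, for illustration only:

    import numpy as np

    # A made-up sample of n = 12 observations.
    sample = np.array([9.8, 10.4, 10.1, 9.7, 10.6, 10.0, 9.9, 10.3, 10.2, 9.6, 10.5, 10.1])

    n = len(sample)
    s = sample.std(ddof=1)           # sample standard deviation (n - 1 in the denominator)
    estimated_se = s / np.sqrt(n)    # estimated sigma of the sampling distribution of means
    print(round(s, 3), round(estimated_se, 3))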

Following this train of thought, statisticians found that if they computed a statistic called a t-score for each sample and arranged those t-scores into a relative frequency distribution, the distribution would be the same for samples of the same size drawn from any normal population. The shape of this sampling distribution of t's varies somewhat as the sample size varies, but for any n it is always the same. For example, for samples of 5, 90% of the samples have t-scores between -2.132 and +2.132, while for samples of 15, 90% have t-scores between ±1.761. The larger the sample, the narrower the range of t-scores that covers any given proportion of the samples. The t-score is calculated using the following formula:

[latex]t=(\bar{x}-\mu)/(s/\sqrt{n})[/latex]

If you compare the formula for the t-score with the formula for the z-score, you will see that the t is simply an estimated z. Because there is one t-score for each sample, the distribution of t's is just another sampling distribution. It turns out that there are other quantities that can be computed from a sample that have this same distribution as this t. Notice that we used the sample standard deviation, s, in computing each t-score. Because we used s, we used up one degree of freedom. Because there are other useful sampling distributions with this same shape that use up different numbers of degrees of freedom, it is customary to refer to a t-distribution by its degrees of freedom (df) rather than by the size of the sample it came from. There are published tables showing the shapes of the t-distributions, organized by degrees of freedom, so they can be used in all situations.

Looking at the formula, you can see that the mean t-score will be zero, because the mean x̄ equals μ. Each t-distribution is symmetric, with half of the t-scores positive and half negative, because we know from the central limit theorem that, if the original population is normal, the sampling distribution of the means is normal and therefore symmetric.
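To see that sample t-scores really do follow the tabled t-distribution, you can simulate. The population parameters below are invented, and scipy's t functions are used only as a check; this is an illustrative sketch, not part of the book's templates.

    import numpy as np
    from scipy.stats import t

    rng = np.random.default_rng(2)
    mu, sigma, n = 100.0, 15.0, 15          # made-up normal population; samples of 15 (14 df)

    samples = rng.normal(mu, sigma, size=(50_000, n))
    x_bars = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    t_scores = (x_bars - mu) / (s / np.sqrt(n))

    cutoff = t.ppf(0.95, df=n - 1)          # about 1.761 for 14 df
    share_inside = np.mean(np.abs(t_scores) <= cutoff)
    print(round(cutoff, 3), round(share_inside, 3))   # roughly 1.761 and roughly 0.90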

Table 2.2 shows an excerpt from a typical t-table. Notice that there is a row for each number of degrees of freedom. Across the top are the proportions of the distribution left out in the tail, the amount shaded in the accompanying figure. The body of the table shows, for each df, the t-score that leaves that proportion of the t-distribution in the right tail. For example, if you chose all possible samples with 9 df and found the t-score for each, 0.025 (2½%) of those samples would have t-scores greater than 2.262, and 0.975 would have t-scores less than 2.262.

Table 2.2 Example of a Student's t-table
df | Probability = 0.10 | Probability = 0.05 | Probability = 0.025 | Probability = 0.01 | Probability = 0.005
1 | 3.078 | 6.314 | 12.706 | 31.821 | 63.657
5 | 1.476 | 2.015 | 2.571 | 3.365 | 4.032
6 | 1.440 | 1.943 | 2.447 | 3.143 | 3.707
7 | 1.415 | 1.895 | 2.365 | 2.998 | 3.499
8 | 1.397 | 1.860 | 2.306 | 2.896 | 3.355
9 | 1.383 | 1.833 | 2.262 | 2.821 | 3.250
10 | 1.372 | 1.812 | 2.228 | 2.764 | 3.169
20 | 1.325 | 1.725 | 2.086 | 2.528 | 2.845
30 | 1.310 | 1.697 | 2.046 | 2.457 | 2.750
40 | 1.303 | 1.684 | 2.021 | 2.423 | 2.704
infinity | 1.282 | 1.645 | 1.960 | 2.326 | 2.576

In Table 2.2, an excerpt from a Student's t-table, the probability at the top of each column is the chance of exceeding the t-value shown in the body. For 5 df, there is a 0.05 probability that a sample will have a t-score greater than 2.015.
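The body of Table 2.2 can likewise be reproduced with scipy's t quantile function; a sketch for checking values, not part of the original text.

    from scipy.stats import t

    tail_probs = [0.10, 0.05, 0.025, 0.01, 0.005]   # column headings of Table 2.2

    for df in [1, 5, 9, 14, 30]:
        row = [round(t.ppf(1 - p, df), 3) for p in tail_probs]
        print(df, row)
    # For 9 df the 0.025 column gives 2.262, matching the table.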

For a more interactive t-table, along with the associated t-distribution, use the Excel template in Figure 2.3. Simply change the values in the yellow cells to see the selected entry of the t-table and its location in the distribution.


Figure 2.3 Interactive Excel model of a t-table; see Appendix 2.

Because the t-distributions are symmetric, if 2½% (0.025) of the t's with 9 df are greater than 2.262, then 2½% are less than -2.262. The middle 95% (0.95) of the t's with 9 df are between -2.262 and +2.262. The middle 0.90 of the t-scores with 14 df lie within ±1.761, since -1.761 leaves 0.05 in the left tail and +1.761 leaves 0.05 in the right tail. As the number of degrees of freedom grows, the t-distribution approaches the normal distribution. As a result, the last row of the t-table, for infinity df, can also be used to find the z-scores that leave various proportions in the tail.


What could Kevin have done if he had been asked, "How much does a 24-bottle package of beer weigh?" and could not easily find good population data? Since he knows statistics, he could take a sample and make an inference about the population mean. Because the distribution of weights of 24-bottle packages of beer is the result of a manufacturing process, it is almost certainly normal; the characteristics of almost all manufactured products are normally distributed. In a manufacturing process, even one that is precise and well controlled, each individual item varies slightly as the temperature varies a little, the electrical current varies as other machines are turned on and off, the consistency of the raw material varies slightly, and dozens of other forces that affect the final outcome vary slightly. Most of the packages, or bolts, or whatever is being made, will be very close to the mean weight or size, with just as many a little heavier or larger as there are a little lighter or smaller. Even though the process is supposed to produce a population of "identical" items, there will be some variation among them. This is what makes so many populations normally distributed. Because the distribution of weights is normal, Kevin can use the t-table to find the shape of the distribution of sample t-scores. Because he can use the t-table to find the shape of the distribution of sample t-scores, he can make a good inference about the mean weight of a 24-bottle package of beer. This is how he could make that inference:

STEP 1. Take a sample of size n, say 15 packages of beer bottles, and carefully weigh each package.

STEP 2. Find x̄ and s for the sample.

STEP 3 (where the tricky part begins). Look at the t-table and find the t-scores that leave some proportion, say 0.95, of sample t's with n-1 df in the middle.

STEP 4 (the heart of the tricky part). Assume that the sample has a t-score that is in the middle part of the distribution of t-scores.

STEP 5 (the arithmetic). Take the x̄, s, n, and t's from the t-table, and set up two equations, one for each of the two table t-values. When you solve each of these equations for μ, you will find an interval that you are 95% sure (a statistician would say "with 0.95 confidence") contains the population mean.

Kevin decides that this is the way he will answer the question. His sample contains packages with the following weights (in kg):

16.25, 15.89, 16.25, 16.35, 15.9, 16.25, 15.85, 16.12, 17.16, 18.17, 14.15, 16.25, 17.025, 16.2, 17.025

He finds his sample mean, x̄ = 16.32 kg, and his sample standard deviation (remembering to use the sample formula), s = 0.87 kg. The t-table tells him that 0.95 of sample t's with 14 df lie within ±2.145. He solves these two equations for μ:

[latex]+2.145 = (16.32 - \mu)/(0.87/\sqrt{15})\;\;\;\text{and}\;\;\;-2.145 = (16.32 - \mu)/(0.87/\sqrt{15})[/latex]

He finds μ = 15.84 kg and μ = 16.80 kg. With these results, Kevin can report that he is "95% sure that the mean weight of a 24-bottle package of beer is between 15.84 and 16.80 kg". Notice that this is different from the earlier example, when more was known about the population.
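A short check of Kevin's arithmetic, written in Python rather than the book's spreadsheets, purely as an illustrative sketch:

    import numpy as np
    from scipy.stats import t

    weights = np.array([16.25, 15.89, 16.25, 16.35, 15.9, 16.25, 15.85, 16.12,
                        17.16, 18.17, 14.15, 16.25, 17.025, 16.2, 17.025])

    n = len(weights)                  # 15 packages, so 14 df
    x_bar = weights.mean()            # about 16.32 kg
    s = weights.std(ddof=1)           # about 0.87 kg

    t_crit = t.ppf(0.975, df=n - 1)   # about 2.145
    half_width = t_crit * s / np.sqrt(n)

    print(round(x_bar - half_width, 2), round(x_bar + half_width, 2))  # roughly 15.84 and 16.80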

A lot of material has been covered in this chapter, and not much of it has been easy. We are now getting into real statistics, and it will require care on your part if you want to keep making sense of it.

The structure of the chapter is simple:


  • A lot of things are normally distributed, at least once we standardize the members' values into z-scores.
  • The central limit theorem tells users of statistics a great deal about how the sampling distribution of x̄'s is related to the original population of x's.
  • The t-distribution lets us make many of the inferences the central limit theorem makes possible, even when the variance of the population, σ²x, is not known.

We will soon see that statisticians have learned about other sampling distributions and how to use them to make inferences about populations from samples. It is through these known sampling distributions that most statistical inference is done. These known sampling distributions give us the link between the sample we have and the population we want to make an inference about.
