The law of averages: the secret of successful salespeople, average values in statistics, and the law of large numbers


What is the secret of successful salespeople? If you watch the best salespeople of any company, you will notice that they have one thing in common: each of them meets with more people and makes more presentations than the less successful salespeople. These people understand that sales is a numbers game: the more people they tell about their products or services, the more deals they close. They understand that if they talk not only to the few who will definitely say yes, but also to those whose interest in their offer is not as strong, then the law of averages will work in their favor.


Your earnings will depend on the number of sales, and sales, in turn, will be directly proportional to the number of presentations you make. Once you understand and begin to apply the law of averages, the anxiety that comes with starting a new business or working in a new field will begin to fade, and in its place a sense of control and confidence in your ability to earn will grow. If you just keep making presentations and hone your skills in the process, the deals will come.

Rather than thinking about the number of deals, think about the number of presentations. It makes no sense to wake up in the morning or come home in the evening and wonder who will buy your product. Instead, plan each day around how many calls you need to make, and then, no matter what, make all of those calls! This approach makes the job easier, because it is a simple and specific goal. If you know you have a concrete and achievable goal in front of you, it will be easier to make the planned number of calls. And if you hear "yes" a couple of times along the way, so much the better!

And if you hear "no", then in the evening you will at least feel that you honestly did everything you could, and you will not torment yourself with thoughts about how much money you earned or how many partners you gained that day.

Let's say that in your company or your business, the average salesperson closes one deal for every four presentations. Now imagine that you are drawing cards from a deck. Each card of three suits, spades, diamonds and clubs, is a presentation where you professionally present a product, service or opportunity. You do it the best you can, but you still don't close the deal. And each heart is a deal that brings you money or a new partner.

In such a situation, wouldn't you want to draw as many cards from the deck as possible? Suppose you are offered to draw as many cards as you want, and you are paid, or gain a new partner, each time you draw a heart. You would start drawing cards enthusiastically, barely noticing the suit of the card you had just pulled out.

You know that there are thirteen hearts in a deck of fifty-two cards, twenty-six hearts in two decks, and so on. Will you be disappointed by drawing spades, diamonds or clubs? Of course not! You will only think that each such "miss" brings you closer. To what? To a heart!

But you know what? You have already been given this offer. You are in a unique position to earn as much as you want and to draw as many hearts in your life as you wish. And if you just "draw cards" conscientiously, improve your skills, and put up with a few spades, diamonds and clubs along the way, you will become an excellent salesperson and succeed.

One of the things that makes selling so much fun is that every time you shuffle the deck, the cards come out in a different order. Sometimes all the hearts end up at the beginning of the deck, and after a lucky streak (when it already seems we will never lose!) a long run of cards of other suits awaits us. Another time, to get to the first heart, you have to go through endless spades, clubs and diamonds. And sometimes cards of different suits come out strictly in turn. But in any case, every deck of fifty-two cards always contains thirteen hearts, in some order. Just keep drawing cards until you find them.
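To make the card analogy concrete, here is a minimal Python sketch (the suit encoding, number of shuffles and seed are arbitrary choices for illustration): it shuffles a 52-card deck a few times and shows that, however deep the first heart hides, every deck still contains exactly thirteen hearts.

```python
import random

# Toy simulation of the "deck" analogy: every presentation is a card,
# hearts are closed deals. The order changes with every shuffle, but
# every 52-card deck always contains exactly 13 hearts.
rng = random.Random(42)  # fixed seed so the runs are reproducible

for trial in range(3):
    deck = [1] * 13 + [0] * 39   # 1 = heart (deal), 0 = any other suit ("miss")
    rng.shuffle(deck)
    first_heart = deck.index(1) + 1   # how deep the first "yes" is hiding
    print(f"deck {trial + 1}: first heart at draw {first_heart}, "
          f"hearts in the deck = {sum(deck)}")   # always 13
```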




The average value is the most general indicator in statistics. This is because it can be used to characterize a population by a quantitatively varying attribute. For example, to compare the wages of workers at two enterprises, one cannot compare the wages of two specific workers, since wages vary from worker to worker. Nor can one compare the total wages paid by the enterprises, since the total depends on the number of employees. But if we divide the total wages of each enterprise by its number of employees, we can compare the results and determine which enterprise has the higher average wage.

In other words, the wages of the studied population of workers receive a generalized characteristic in the average value. It expresses what is general and typical for the whole population of workers with respect to the attribute under study, giving a single measure of an attribute whose value differs from one unit of the population to another.

Definition of the average value. The average value in statistics is a generalized characteristic of a set of similar phenomena with respect to some quantitatively varying attribute. The average value shows the level of this attribute per unit of the population. With the help of average values, different populations can be compared with each other by varying attributes (income per capita, crop yields, production costs at various enterprises).

The average value always generalizes the quantitative variation of the attribute by which we characterize the population under study, an attribute inherent, to one degree or another, in all units of the population. This means that behind any average value there is always a distribution series of the units of the population according to some varying attribute, i.e. a variation series. In this respect, the average value is fundamentally different from relative values and, in particular, from intensity indicators. An intensity indicator is the ratio of the volumes of two different aggregates (for example, GDP per capita), while an average generalizes the elements of one aggregate according to one of their attributes (for example, the average wage of a worker).

Mean value and the law of large numbers. A general trend manifests itself in the change of average indicators: the trend under whose influence the development of the phenomenon as a whole takes shape, even though it may not show up clearly in separate individual cases. It is important that averages be based on a mass generalization of facts. Only under this condition will they reveal the general trend underlying the process as a whole.


The essence of the law of large numbers, and its significance for averages, is that as the number of observations increases, the deviations generated by random causes cancel each other out more and more completely. Thus the law of large numbers creates the conditions for the typical level of a varying attribute, under specific conditions of place and time, to show up in the average value. The magnitude of this level is determined by the essence of the phenomenon itself.

Types of averages. The mean values used in statistics belong to the class of power means, whose general formula has the following form:

x̄ = ( Σ x_i^k / n )^(1/k),

where x̄ is the power mean; x_i are the changing values of the attribute (the variants); n is the number of variants; k is the exponent of the mean; Σ is the summation sign.

For different values of the exponent k, different types of means are obtained:

k = 1: the arithmetic mean;

k = 2: the quadratic mean (mean square);

k = 3: the cubic mean;

k = -1: the harmonic mean;

k → 0: the geometric mean (as a limiting case).

Different kinds of means take different values when computed from the same source data: the larger the exponent k, the larger the value of the corresponding mean (the power mean inequality).
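A short Python sketch illustrating this ordering (the sample data are made-up numbers; the function computes the power mean of order k, with k → 0 handled as the geometric-mean limit):

```python
import math

def power_mean(xs, k):
    """Power mean of order k; k = 0 is treated as the geometric-mean limit."""
    if k == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** k for x in xs) / len(xs)) ** (1 / k)

data = [2.0, 4.0, 8.0]   # made-up sample values
for name, k in [("harmonic", -1), ("geometric", 0), ("arithmetic", 1),
                ("quadratic", 2), ("cubic", 3)]:
    print(f"{name:10s} (k = {k:2d}): {power_mean(data, k):.4f}")
# Output increases with k: harmonic <= geometric <= arithmetic <= quadratic <= cubic.
```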

In statistics, a correct characterization of the population in each individual case is given only by one quite definite type of mean. To determine this type, a criterion defining the property of the mean is used: the mean will be a true generalizing characteristic of the population with respect to the varying attribute only if, when every variant is replaced by the mean, the total volume of the varying attribute remains unchanged. That is, the correct type of mean is determined by how the total volume of the varying attribute is formed. The arithmetic mean is used when this volume is formed as the sum of the individual variants; the quadratic mean, when it is formed as the sum of their squares; the harmonic mean, as the sum of their reciprocals; the geometric mean, as the product of the individual variants. In addition to mean values, statistics also uses descriptive characteristics of the distribution of a varying attribute (structural averages): the mode (the most frequently occurring variant) and the median (the middle variant).

Lecture 8. Section 1. Probability theory

Issues under consideration

1) The law of large numbers.

2) Central limit theorem.

The law of large numbers.

The law of large numbers in the broad sense is the general principle according to which, when a large number of random variables act together, their average result ceases to be random and can be predicted with a high degree of certainty.

The law of large numbers in the narrow sense is a number of mathematical theorems, each of which establishes, under certain conditions, that the average characteristics of a large number of trials approach certain definite constants. In proving theorems of this kind, Markov's and Chebyshev's inequalities are used, which are also of independent interest.

Theorem 1 (Markov's inequality). If a random variable X takes non-negative values and has a mathematical expectation M(X), then for any positive number A the inequality

P(X > A) ≤ M(X) / A

holds.

Proof. We carry out the proof for a discrete random variable X that takes values x_1, x_2, … with probabilities p_1, p_2, …, numbered so that the first k values are less than or equal to A and all the others are greater than A. Then

M(X) = Σ_i x_i p_i ≥ Σ_{i>k} x_i p_i ≥ A Σ_{i>k} p_i = A · P(X > A),

where the first inequality holds because the discarded terms are non-negative. Dividing both sides by A gives P(X > A) ≤ M(X)/A.

Example 1. The average number of calls arriving at a factory switchboard in an hour is 300. Estimate the probability that in the next hour the number of calls to the switchboard:

1) will exceed 400;

2) will be no more than 500.

Solution. 1) Let X be the number of calls arriving at the switchboard during an hour. Then M(X) = 300, and we need to estimate P(X > 400). By Markov's inequality, P(X > 400) ≤ 300/400 = 0.75.

2) P(X ≤ 500) = 1 − P(X > 500) ≥ 1 − 300/500 = 0.4. Thus, the probability that the number of calls will be no more than 500 is at least 0.4.

Example 2. The sum of all deposits in a bank branch is 2 million rubles, and the probability that a randomly chosen deposit does not exceed 10 thousand rubles is 0.6. What can be said about the number of depositors?

Solution. Let X be the size of a randomly chosen deposit and n the number of all deposits. Then M(X) = 2000/n (thousand rubles). By Markov's inequality, P(X > 10) ≤ M(X)/10 = 200/n. Since P(X > 10) = 1 − 0.6 = 0.4, we get 0.4 ≤ 200/n, whence n ≤ 500: there are at most 500 depositors.

Example 3. Let X be the time by which a student is late for a lecture; it is known that, on average, he is 1 minute late. Estimate the probability that the student will be at least 5 minutes late.

Solution. By assumption, M(X) = 1. Applying Markov's inequality, we obtain P(X ≥ 5) ≤ M(X)/5 = 1/5 = 0.2.

Thus, out of every 5 students, no more than 1 student will be late by at least 5 minutes.
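As a sanity check, the bound in Example 3 can be compared with a simulation. The actual distribution of lateness is not given in the problem, so the sketch below assumes an exponential distribution with mean 1 minute purely for illustration; Markov's inequality holds regardless of this choice.

```python
import random

# Empirical check of Markov's bound from Example 3: P(X >= 5) <= M(X)/5 = 0.2.
# The distribution of lateness is not specified in the problem; an exponential
# distribution with mean 1 minute is assumed here purely for illustration.
rng = random.Random(0)
n = 100_000
samples = [rng.expovariate(1.0) for _ in range(n)]   # mean = 1 minute

empirical = sum(1 for x in samples if x >= 5) / n
print(f"empirical P(X >= 5) = {empirical:.4f}")   # about exp(-5) ~ 0.0067
print(f"Markov bound M(X)/5 = {1 / 5:.4f}")       # 0.2, much looser but always valid
```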

Theorem 2 (Chebyshev's inequality). For any random variable X with finite variance D(X) and any ε > 0,

P(|X − M(X)| ≥ ε) ≤ D(X) / ε².

Proof. Let a random variable X be given by a distribution series with values x_i and probabilities p_i. By the definition of the variance,

D(X) = Σ_i (x_i − M(X))² p_i.

Let us exclude from this sum the terms for which |x_i − M(X)| < ε. Since all terms are non-negative, the sum can only decrease. For definiteness, assume that the excluded terms are the first k. Then

D(X) ≥ Σ_{i>k} (x_i − M(X))² p_i ≥ ε² Σ_{i>k} p_i = ε² · P(|X − M(X)| ≥ ε).

Hence, P(|X − M(X)| ≥ ε) ≤ D(X)/ε².

Chebyshev's inequality makes it possible to bound from above the probability that a random variable deviates from its mathematical expectation, using information only about its variance. It is widely used, for example, in estimation theory.

Example 4. A coin is tossed 10,000 times. Estimate the probability that the frequency of heads differs from 0.5 by 0.01 or more.

Solution. Let us introduce independent random variables X_1, …, X_10000, where X_i = 1 if the i-th toss lands heads and X_i = 0 otherwise, so that P(X_i = 1) = P(X_i = 0) = 1/2.

Then the number of heads X = X_1 + … + X_10000 is distributed according to the binomial law with n = 10,000 and p = 1/2. The frequency of heads is the random variable X/n, with M(X/n) = p = 0.5 and D(X/n) = pq/n = 0.25/10,000 = 0.000025. By Chebyshev's inequality,

P(|X/n − 0.5| ≥ 0.01) ≤ D(X/n) / 0.01² = 0.000025 / 0.0001 = 0.25.

Thus, on average, in no more than a quarter of the cases of 10,000 coin tosses will the frequency of heads differ from 0.5 by one hundredth or more.
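A simulation along the following lines shows how conservative the Chebyshev bound from Example 4 is (the number of runs and the seed are arbitrary): the empirical probability of a deviation of 0.01 or more comes out around 0.05, well below the bound of 0.25.

```python
import random

# Empirical check of Example 4: for n = 10,000 tosses of a fair coin,
# Chebyshev gives P(|m/n - 0.5| >= 0.01) <= 0.25.
rng = random.Random(1)
n_tosses, n_runs = 10_000, 1_000

deviations = 0
for _ in range(n_runs):
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    if abs(heads / n_tosses - 0.5) >= 0.01:
        deviations += 1

print(f"empirical probability: {deviations / n_runs:.3f}")   # about 0.05
print("Chebyshev upper bound: 0.250")
```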

Theorem 3 (Chebyshev). If X_1, X_2, …, X_n, … are independent random variables whose variances are uniformly bounded (D(X_i) ≤ C for all i), then for any ε > 0

lim_{n→∞} P( | (1/n) Σ X_i − (1/n) Σ M(X_i) | < ε ) = 1.

Proof. Because

M( (1/n) Σ X_i ) = (1/n) Σ M(X_i) and D( (1/n) Σ X_i ) = (1/n²) Σ D(X_i) ≤ C/n,

applying Chebyshev's inequality, we obtain

P( | (1/n) Σ X_i − (1/n) Σ M(X_i) | < ε ) ≥ 1 − C/(n ε²) → 1 as n → ∞.

Since the probability of an event cannot be greater than 1, we get the desired result.

Corollary 1. If X_1, X_2, … are independent random variables with uniformly bounded variances and the same mathematical expectation equal to A, then for any ε > 0

lim_{n→∞} P( | (1/n) Σ X_i − A | < ε ) = 1.   (1)

Equality (1) says that the random deviations of the individual independent random variables from their common mean value cancel each other out in the mass. Therefore, although the variables themselves are random, their average for large n is practically no longer random and is close to A. This means that if A is not known in advance, it can be estimated from the arithmetic mean of the observations. This property of sequences of independent random variables is called the law of statistical stability. The law of statistical stability justifies the use of statistical analysis in making specific management decisions.
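A minimal sketch of this statistical stability (the common mean A = 10.0 and the per-measurement spreads are made-up numbers): the variables are independent and have different variances, all bounded by a common constant, yet their average settles near A.

```python
import random

# Sketch of statistical stability: independent measurements with a common
# mean A but different spreads; their average is nearly non-random and
# close to A. The value A = 10.0 and the spreads are made-up numbers.
rng = random.Random(11)
A = 10.0

for n in (10, 1_000, 100_000):
    # each measurement has its own standard deviation, all bounded by 2.0
    xs = [rng.gauss(A, rng.uniform(0.5, 2.0)) for _ in range(n)]
    print(f"n = {n:>6}: average = {sum(xs) / n:.4f}")   # approaches A
```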

Theorem 4 (Bernoulli). If in each of n independent trials the probability p of the occurrence of event A is constant, then for any ε > 0

lim_{n→∞} P( | m/n − p | < ε ) = 1,

where m is the number of occurrences of event A in these n trials.

Proof. We introduce independent random variables X_1, …, X_n, where X_i = 1 if event A occurs in the i-th trial and X_i = 0 otherwise. Then M(X_i) = p and D(X_i) = pq. Since pq ≤ 1/4, the variances D(X_i) are uniformly bounded. It follows from Chebyshev's theorem that

lim_{n→∞} P( | (1/n) Σ X_i − p | < ε ) = 1.

But X_1 + X_2 + … + X_n = m is the number of occurrences of event A in the series of n trials, so (1/n) Σ X_i = m/n, which proves the theorem.

The meaning of Bernoulli's theorem is that with an unlimited increase in the number of identical independent trials, it can be asserted with practical certainty that the frequency of occurrence of an event will differ arbitrarily little from the probability of its occurrence in a single trial (the statistical stability of the event probability). Therefore, Bernoulli's theorem serves as a bridge from probability theory to its applications.
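A minimal sketch of Bernoulli's theorem in Python (the event probability p = 0.3 and the checkpoints are arbitrary choices): the frequency m/n drifts toward p as the number of trials grows.

```python
import random

# Sketch of Bernoulli's theorem: the frequency m/n of an event A with
# constant probability p per trial approaches p as n grows.
rng = random.Random(7)
p = 0.3

m, n = 0, 0
for target in (10, 100, 1_000, 10_000, 100_000):
    while n < target:
        m += rng.random() < p   # adds 1 when event A occurs in this trial
        n += 1
    print(f"n = {n:>6}: m/n = {m / n:.4f}, |m/n - p| = {abs(m / n - p):.4f}")
```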

The law of large numbers in probability theory states that the empirical mean (arithmetic average) of a sufficiently large finite sample from a fixed distribution is close to the theoretical mean (mathematical expectation) of that distribution. Depending on the type of convergence, one distinguishes the weak law of large numbers, where convergence in probability holds, and the strong law of large numbers, where convergence holds almost everywhere.

For any given probability less than 1, there is always a finite number of trials after which, with at least that probability, the relative frequency of occurrence of some event will differ from its probability by arbitrarily little.

The general meaning of the law of large numbers: the joint action of a large number of identical and independent random factors leads to a result that, in the limit, does not depend on chance.

Methods for estimating a probability from the analysis of a finite sample are based on this property. A good example is the prediction of election results based on a survey of a sample of voters.
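For instance, a poll can be mimicked by sampling Bernoulli "voters"; the sketch below uses a made-up true share of 0.52 and shows the estimate tightening as the sample grows.

```python
import random

# Sketch of poll-based prediction: the "yes" share in a random sample of
# voters estimates the true share in the electorate. The true share 0.52
# is a made-up number for illustration.
rng = random.Random(3)
true_share = 0.52

for n in (100, 1_000, 10_000):
    yes = sum(rng.random() < true_share for _ in range(n))
    print(f"poll of {n:>6} voters: estimated share = {yes / n:.3f}")
```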

Video transcript: Law of Large Numbers.

Let's take a look at the law of large numbers, which is perhaps the most intuitive law in mathematics and probability theory. And because it applies to so many things, it is sometimes misused and misunderstood. Let me first define it precisely, and then we'll talk about intuition. Let's take a random variable, say X, and let's say we know its mathematical expectation, or population mean. The law of large numbers simply says that if we take a sample of n observations of the random variable and average all those observations... Let me define a variable, call it X with a subscript n and a bar on top: the arithmetic mean of the first n observations of our random variable. Here is my first observation: I run the experiment once and record the result, then I run it again and record that result, and again, and so on. I run the experiment n times and then divide by the number of my observations. This is my sample mean, the average of all the observations I have made. The law of large numbers tells us that my sample mean will approach the mean of the random variable; or, put differently, that my sample mean will approach the population mean as n goes to infinity. I won't make a careful distinction between "approximation" and "convergence", but I hope you intuitively understand that if I take a fairly large sample here, I will get close to the expected value for the population as a whole. I think most of you intuitively understand that if I run enough trials with a large sample, eventually the trials will give me the values I expect, taking into account the mathematical expectation, the probability and all that. But I think it is often unclear why this happens, so before I start explaining why, let me give you a concrete example. Let's say we have a random variable X equal to the number of heads in 100 tosses of a fair coin. First of all, we know the mathematical expectation of this random variable: it is the number of tosses, or trials, multiplied by the probability that any one trial succeeds, so it is equal to 50. The law of large numbers says that if we take a sample, or if I average these trials, I will get... The first time I run the experiment, I toss a coin 100 times, or take a box with a hundred coins, shake it, count how many heads come up, and get, say, the number 55. That will be X1. Then I shake the box again and get the number 65. Then again, and I get 45. I do this n times, and then I divide by the number of trials. The law of large numbers tells us that this average (the average of all my observations) will tend to 50 as n tends to infinity. Now I would like to talk a little about why this happens. Many people believe that if, after 100 trials, my result is above average, then by the laws of probability I should get fewer heads later on in order to, so to speak, compensate for the difference. That is not exactly what happens. This is often referred to as the "gambler's fallacy". Let me show you the difference. I will use the following example. Let me draw a graph. Let's change the color. This is n, my x-axis is n: the number of trials I will run. And my y-axis will be the sample mean. We know that the mean of this random variable is 50. Let me draw this line. This is 50. Let's go back to our example. On my first trial I got 55, so that is my average. I have only one data point.
Then after two trials I get 65, so my average would be (55+65)/2, which is 60, and my average went up a bit. Then I got 45, which lowers my arithmetic mean again. I won't plot 45 itself on the chart; what I need to plot is the new average. What is 55+65+45 equal to? Let me calculate this value to plot the point. That is 165, and 165 divided by 3 is 55. So the average goes down to 55 again. We can continue these trials. After we have done three trials and come up with this average, many people think that the gods of probability will make it so that we get fewer heads in the future, that the next few trials will come in lower in order to bring the average down. But that is not the case: going forward, the probability always remains the same. The probability that I will toss heads is always 50%. It is not that, having initially gotten more heads than I expected, I am now somehow owed a run of tails; that is the gambler's fallacy. If you get a disproportionate number of heads, it does not mean that at some point you will start getting a disproportionate number of tails. The law of large numbers tells us that it does not matter. Let's say, after a certain finite number of trials, your average... The probability of this is quite small, but nevertheless... Let's say your average reaches this mark, 70. You think, "Wow, we've gone way beyond the expectation." But the law of large numbers says it does not care how many trials we have already run: we still have an infinite number of trials ahead of us. However large the value produced by a finite run of trials, the infinite number of trials that follows will pull the average back to the expected value. This is, of course, a very loose interpretation, but it is what the law of large numbers tells us. It is important. It does not tell us that if we get a lot of heads, the odds of getting tails will somehow increase to compensate. It tells us that it does not matter what the result is after a finite number of trials, as long as you still have an infinite number of trials ahead of you; if you make enough of them, you will be back at the expectation. This principle is at work every day in lotteries and casinos; we can even calculate the probability of deviating seriously from the norm. Casinos and lotteries operate on the principle that if you take enough people, then of course in the short term, with a small sample, a few people will hit the jackpot, but over the long term the casino always benefits from the parameters of the games it invites you to play. This is an important and intuitive probability principle, although sometimes, when it is formally explained with random variables, it all looks a little confusing. All this law says is that the more samples you take, the more the arithmetic mean of those samples converges toward the true mean; to be more specific, the arithmetic mean of your sample will converge to the mathematical expectation of the random variable. That's all. See you in the next video!

Weak law of large numbers

The weak law of large numbers is also called Bernoulli's theorem, after Jacob Bernoulli, who proved it in 1713.

Let there be an infinite sequence of identically distributed and uncorrelated random variables X_1, X_2, …; that is,

\[ \operatorname{cov}(X_i, X_j) = 0, \qquad \forall\, i \neq j. \]

Let \( \mathbb{E}X_i = \mu \) for all i. Denote by \( \bar{X}_n \) the sample mean of the first n terms:

\[ \bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i. \]

Then \( \bar{X}_n \xrightarrow{\ \mathbb{P}\ } \mu \). That is, for every positive \( \varepsilon \),

\[ \lim_{n \to \infty} \Pr\!\left( \left| \bar{X}_n - \mu \right| < \varepsilon \right) = 1. \]

Strong law of large numbers

Let there be an infinite sequence of independent identically distributed random variables \( \{X_i\}_{i=1}^{\infty} \) defined on one probability space \( (\Omega, \mathcal{F}, \mathbb{P}) \). Let \( \mathbb{E}X_i = \mu \) for all \( i \in \mathbb{N} \). Denote by \( \bar{X}_n \) the sample mean of the first n terms:

\[ \bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad n \in \mathbb{N}. \]

Then \( \bar{X}_n \to \mu \) almost surely:

\[ \Pr\!\left( \lim_{n \to \infty} \bar{X}_n = \mu \right) = 1. \]
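A single long trajectory illustrates the almost-sure convergence the theorem asserts (die rolls here are an arbitrary example, with μ = 3.5):

```python
import random

# One long trajectory of running means: by the strong law of large numbers
# the path (1/n) * sum(X_i) settles down to mu almost surely. Here the X_i
# are rolls of a fair die, so mu = 3.5.
rng = random.Random(5)
total = 0
for n in range(1, 100_001):
    total += rng.randint(1, 6)
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n = {n:>6}: running mean = {total / n:.4f}")   # tends to 3.5
```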

Like any mathematical law, the law of large numbers can be applied to the real world only under known assumptions, which can be met only with some degree of accuracy. For example, the conditions of successive trials often cannot be maintained indefinitely and with absolute accuracy. In addition, the law of large numbers speaks only of the improbability of a significant deviation of the mean value from the mathematical expectation.