The essence of the least squares method is to find the parameters of a trend model that best describes the tendency of development of some random phenomenon in time or space (a trend is a line that characterizes the tendency of this development). The task of the least squares method (LSM) comes down to finding not just some trend model, but the best, or optimal, model. The model is optimal if the sum of squared deviations between the observed actual values and the corresponding calculated trend values is minimal (smallest):

S = Σ (yᵢ − ŷᵢ)² → min,

where (yᵢ − ŷᵢ) is the deviation between the observed actual value and the corresponding calculated trend value,

yᵢ is the actual (observed) value of the phenomenon being studied,

ŷᵢ is the calculated value of the trend model,

n is the number of observations of the phenomenon being studied.

OLS is quite rarely used on its own. As a rule, it is most often used only as a necessary technical device in correlation studies. It should be remembered that the information basis of OLS can only be a reliable statistical series, and the number of observations should not be less than 4, otherwise the OLS smoothing procedures may lose their meaning.

The OLS toolkit boils down to the following procedures:

First procedure. It is established whether there is any tendency at all for the resultant attribute to change when the selected factor-argument changes, or, in other words, whether there is a connection between "y" and "x".

Second procedure. It is determined which line (trajectory) can best describe or characterize this trend.

Third procedure. The parameters of the regression equation that characterizes the chosen line are calculated.

Example. Let's say we have information about the average sunflower yield for the farm under study (Table 9.1).

Table 9.1

Observation number

Yield, c/ha

Since the level of technology in sunflower production in our country has remained virtually unchanged over the past 10 years, fluctuations in yield during the analyzed period apparently depended very strongly on fluctuations in weather and climatic conditions. Is this really true?

First OLS procedure. The hypothesis is tested that there exists a trend in sunflower yield changes depending on changes in weather and climatic conditions over the analyzed 10 years.

In this example it is advisable to take the sunflower yield as "y", and the number of the observed year in the analyzed period as "x". The hypothesis about the existence of any relationship between "x" and "y" can be tested in two ways: manually and with the help of computer programs. Of course, if computer equipment is available, the problem is solved by itself. But in order to understand the OLS toolkit better, it is advisable to test the hypothesis about the existence of a relationship between "x" and "y" manually, when only a pen and an ordinary calculator are at hand. In such cases, the hypothesis about the existence of a trend is best checked visually, by the location of the graphical image of the analyzed time series - the correlation field:

The correlation field in our example is located around a slowly increasing line. This in itself indicates the existence of a certain trend in the change of sunflower yield. It is impossible to speak of the presence of any tendency only when the correlation field looks like a circle, a ring, a strictly vertical or strictly horizontal cloud, or consists of chaotically scattered points. In all other cases, the hypothesis about the existence of a relationship between "x" and "y" is accepted, and the research continues.

Second OLS procedure. It is determined which line (trajectory) can best describe or characterize the trend of changes in sunflower yield over the analyzed period.

If computer technology is available, the selection of the optimal trend occurs automatically. In "manual" processing, the optimal function is selected, as a rule, visually - by the location of the correlation field. That is, based on the appearance of the graph, the equation of the line that best fits the empirical trend (the actual trajectory) is selected.

As is known, in nature there is a huge variety of functional dependencies, so it is extremely difficult to visually analyze even a small part of them. Fortunately, in real economic practice, most relationships can be described quite accurately either by a parabola, or a hyperbola, or a straight line. In this regard, with the “manual” option of selecting the best function, you can limit yourself to only these three models.

Straight line: ŷ = a + b·x

Hyperbola: ŷ = a + b/x

Second-order parabola: ŷ = a + b·x + c·x²

It is easy to see that in our example, the trend in sunflower yield changes over the analyzed 10 years is best characterized by a straight line, so the regression equation will be the equation of a straight line.

Third procedure. The parameters of the regression equation characterizing the chosen line are calculated; in other words, an analytical formula describing the best trend model is determined.

Finding the values of the parameters of the regression equation, in our case the parameters a and b, is the core of OLS. This process comes down to solving a system of normal equations.

n·a + b·Σxᵢ = Σyᵢ
a·Σxᵢ + b·Σxᵢ² = Σxᵢyᵢ   (9.2)

This system of equations can be solved quite easily by the Gauss method. Recall that as a result of the solution, in our example, the values of the parameters a and b are found. Thus, the found regression equation will have the following form:
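For readers who want to see this step in code, here is a minimal C++ sketch of solving such a 2x2 system of normal equations by Gaussian elimination; the numeric sums passed in main() are placeholders for illustration, not the sunflower data.

#include <array>
#include <iostream>

// Solve the 2x2 system of normal equations (9.2)
//   n*a    + sumX*b  = sumY
//   sumX*a + sumX2*b = sumXY
// by Gaussian elimination. Returns {a, b}.
std::array<double, 2> solveNormalEquations(double n, double sumX, double sumY,
                                           double sumX2, double sumXY) {
    // Augmented matrix of the system.
    double m[2][3] = {
        { n,    sumX,  sumY  },
        { sumX, sumX2, sumXY }
    };
    // Eliminate the first unknown from the second row.
    double factor = m[1][0] / m[0][0];
    for (int j = 0; j < 3; ++j) m[1][j] -= factor * m[0][j];
    // Back substitution.
    double b = m[1][2] / m[1][1];
    double a = (m[0][2] - m[0][1] * b) / m[0][0];
    return { a, b };
}

int main() {
    // Hypothetical sums for 10 observations (placeholders only).
    auto ab = solveNormalEquations(10, 55, 120, 385, 700);
    std::cout << "a = " << ab[0] << ", b = " << ab[1] << "\n";
}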

Example.

Experimental data on the values of the variables x and y are given in the table.

As a result of aligning them, the following function was obtained:

Using the least squares method, approximate these data by a linear dependence y = ax + b (find the parameters a and b). Find out which of the two lines better (in the sense of the least squares method) aligns the experimental data. Make a drawing.

The essence of the least squares method (LSM).

The task is to find the coefficients a and b of the linear dependence at which the function of two variables F(a, b) = Σ (yᵢ − (a·xᵢ + b))² takes the smallest value. That is, for the found a and b, the sum of squared deviations of the experimental data from the fitted straight line will be the smallest. This is the whole point of the least squares method.

Thus, solving the example comes down to finding the extremum of a function of two variables.

Deriving formulas for finding coefficients.

A system of two equations with two unknowns is compiled and solved. We find the partial derivatives of the function F(a, b) with respect to the variables a and b and equate these derivatives to zero.

We solve the resulting system of equations by any method (for example, by substitution or by Cramer's method) and obtain formulas for finding the coefficients by the least squares method (LSM):

a = (n·Σxᵢyᵢ − Σxᵢ·Σyᵢ) / (n·Σxᵢ² − (Σxᵢ)²),  b = (Σyᵢ − a·Σxᵢ) / n.

For these a and b, the function takes the smallest value. A proof of this fact is given below.

That is the whole least squares method. The formula for finding the parameter a contains the sums Σxᵢyᵢ, Σxᵢ, Σyᵢ, Σxᵢ² and the parameter n (the number of experimental data points). We recommend calculating the values of these sums separately. The coefficient b is found after calculating a.
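As an illustration of this recipe, here is a small C++ sketch that accumulates the four sums in a single pass and then applies the formulas above; the data vectors are placeholders, not the table from the example.

#include <iostream>
#include <vector>

// Fit y = a*x + b by least squares: accumulate the sums, compute a, then b.
void fitLine(const std::vector<double>& x, const std::vector<double>& y,
             double& a, double& b) {
    const double n = static_cast<double>(x.size());
    double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        sumX  += x[i];
        sumY  += y[i];
        sumXY += x[i] * y[i];
        sumX2 += x[i] * x[i];
    }
    a = (n * sumXY - sumX * sumY) / (n * sumX2 - sumX * sumX);
    b = (sumY - a * sumX) / n;   // b is found after a
}

int main() {
    // Placeholder data with n = 5, only to show the call.
    std::vector<double> x = {1, 2, 3, 4, 5};
    std::vector<double> y = {2.1, 2.7, 2.9, 3.4, 3.6};
    double a, b;
    fitLine(x, y, a, b);
    std::cout << "y = " << a << "*x + " << b << "\n";
}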

It's time to remember the original example.

Solution.

In our example n = 5. We fill out the table for the convenience of calculating the sums that enter into the formulas for the required coefficients.

The values ​​in the fourth row of the table are obtained by multiplying the values ​​of the 2nd row by the values ​​of the 3rd row for each number i.

The values ​​in the fifth row of the table are obtained by squaring the values ​​in the 2nd row for each number i.

The values ​​in the last column of the table are the sums of the values ​​across the rows.

We use the formulas of the least squares method to find the coefficients a and b. We substitute into them the corresponding values from the last column of the table:

Hence, y = 0.165x + 2.184 is the desired approximating straight line.

It remains to find out which of the two lines, y = 0.165x + 2.184 or the function obtained earlier, better approximates the original data, that is, to make the estimate using the least squares method.

Error estimation of the least squares method.

To do this, you need to calculate the sums of squared deviations of the original data from each of these lines; the smaller value corresponds to the line that better approximates the original data in the sense of the least squares method.

Since the sum of squared deviations is smaller for it, the straight line y = 0.165x + 2.184 better approximates the original data.
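To make the comparison procedure concrete, here is a hedged C++ sketch that computes the sum of squared deviations for the found straight line and for an arbitrary second candidate passed as a callable; both the data and the rival function are placeholders, since the original second function is not reproduced here.

#include <cmath>
#include <functional>
#include <iostream>
#include <vector>

// Sum of squared deviations of the data from a candidate function f.
double sumSquaredDeviations(const std::vector<double>& x,
                            const std::vector<double>& y,
                            const std::function<double(double)>& f) {
    double s = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double d = y[i] - f(x[i]);
        s += d * d;
    }
    return s;
}

int main() {
    // Placeholder data (not the table from the example).
    std::vector<double> x = {0, 1, 2, 4, 5};
    std::vector<double> y = {2.1, 2.4, 2.6, 2.8, 3.0};

    auto line  = [](double t) { return 0.165 * t + 2.184; };       // the found line
    auto other = [](double t) { return 2.0 + 0.3 * std::sqrt(t); };// hypothetical rival

    double sLine  = sumSquaredDeviations(x, y, line);
    double sOther = sumSquaredDeviations(x, y, other);
    std::cout << (sLine < sOther ? "the line fits better\n" : "the rival fits better\n");
}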

Graphic illustration of the least squares (LS) method.

Everything is clearly visible on the graphs. The red line is the found straight line y = 0.165x + 2.184, the blue line is the other function, and the pink dots are the original data.

Why is this needed, why all these approximations?

Personally, I use it to solve data smoothing problems and problems of interpolation and extrapolation (in the original example, one might be asked to find the value of the observed quantity y at x = 3 or at x = 6 using the least squares method). But we will talk more about this later, in another section of the site.
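For instance, with the straight line found above, such an interpolation and extrapolation would give:

y(3) = 0.165·3 + 2.184 = 2.679 and y(6) = 0.165·6 + 2.184 = 3.174.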

Proof.

For the function F(a, b) to take the smallest value at the found a and b, it is necessary that at this point the matrix of the quadratic form of the second-order differential of the function be positive definite. Let us show this.

The least squares method is used to estimate the parameters of the regression equation.

One of the methods for studying stochastic relationships between characteristics is regression analysis.
Regression analysis is the derivation of a regression equation, with the help of which the average value of a random variable (result attribute) is found if the value of another (or other) variables (factor-attributes) is known. It includes the following steps:

  1. selection of the form of connection (type of analytical regression equation);
  2. estimation of equation parameters;
  3. assessment of the quality of the analytical regression equation.
Most often, a linear form is used to describe the statistical relationship between features. The focus on linear relationships is explained by the clear economic interpretation of their parameters, the limited variation of the variables, and the fact that in most cases nonlinear forms of relationship are converted (by taking logarithms or substituting variables) into a linear form in order to perform the calculations.
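For example (a standard textbook transformation, given here only as an illustration), an exponential relationship y = a·e^(b·x) becomes linear after taking logarithms: ln y = ln a + b·x, so the usual linear least squares machinery can then be applied to the pairs (x, ln y).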
In the case of a linear pairwise relationship, the regression equation takes the form: yᵢ = a + b·xᵢ + uᵢ. The parameters a and b of this equation are estimated from the data of statistical observation of x and y. The result of such estimation is the equation ŷᵢ = â + b̂·xᵢ, where â and b̂ are estimates of the parameters a and b, and ŷᵢ is the value of the resulting attribute (variable) obtained from the regression equation (the calculated value).

Most often, the least squares method (LSM) is used to estimate the parameters.
The least squares method provides the best (consistent, efficient, and unbiased) estimates of the parameters of the regression equation. But only if certain assumptions regarding the random term (u) and the independent variable (x) are met (see OLS assumptions).

The problem of estimating the parameters of a linear pairwise equation using the least squares method is as follows: to obtain estimates of the parameters â and b̂ at which the sum of squared deviations of the actual values of the resultant characteristic yᵢ from the calculated values ŷᵢ is minimal.
Formally, the OLS criterion can be written like this: S = Σ (yᵢ − ŷᵢ)² → min.

Classification of least squares methods

  1. Ordinary least squares.
  2. Maximum likelihood method (for the normal classical linear regression model, normality of the regression residuals is postulated).
  3. The generalized least squares (GLS) method, used in the case of autocorrelation of the errors and in the case of heteroscedasticity.
  4. Weighted least squares method (a special case of GLS with heteroscedastic residuals).

Let us illustrate the essence of the classical least squares method graphically. To do this, we construct a scatter plot from the observational data (xᵢ, yᵢ, i = 1…n) in a rectangular coordinate system (such a scatter plot is called a correlation field). Let us try to select a straight line that is closest to the points of the correlation field. According to the least squares method, the line is selected so that the sum of the squares of the vertical distances between the points of the correlation field and this line is minimal.

The mathematical notation of this problem: S(â, b̂) = Σ (yᵢ − â − b̂·xᵢ)² → min.
The values of yᵢ and xᵢ, i = 1…n, are known to us; these are observational data. In the function S they are constants. The variables in this function are the required parameter estimates â and b̂. To find the minimum of a function of two variables, it is necessary to calculate the partial derivatives of this function with respect to each of the parameters and equate them to zero, i.e. ∂S/∂â = 0 and ∂S/∂b̂ = 0.
As a result, we obtain a system of 2 normal linear equations:
Σyᵢ = n·â + b̂·Σxᵢ
Σxᵢyᵢ = â·Σxᵢ + b̂·Σxᵢ²
Solving this system, we find the required parameter estimates:
b̂ = (n·Σxᵢyᵢ − Σxᵢ·Σyᵢ) / (n·Σxᵢ² − (Σxᵢ)²),  â = ȳ − b̂·x̄.

The correctness of the calculation of the parameters of the regression equation can be checked by comparing the sums Σyᵢ and Σŷᵢ (there may be some discrepancy due to rounding in the calculations).
To calculate parameter estimates, you can build Table 1.
The sign of the regression coefficient b indicates the direction of the relationship (if b > 0, the relationship is direct; if b < 0, the relationship is inverse). The value of b shows by how many units, on average, the result attribute y changes when the factor attribute x changes by 1 unit of its own measurement.
Formally, the value of parameter a is the average value of y with x equal to zero. If the attribute-factor does not and cannot have a zero value, then the above interpretation of parameter a does not make sense.

The closeness of the relationship between the features is assessed using the linear pair correlation coefficient r_x,y. It can be calculated using the formula: r_x,y = (n·Σxᵢyᵢ − Σxᵢ·Σyᵢ) / √((n·Σxᵢ² − (Σxᵢ)²)·(n·Σyᵢ² − (Σyᵢ)²)). In addition, the linear pair correlation coefficient can be determined through the regression coefficient b: r_x,y = b·(s_x / s_y), where s_x and s_y are the standard deviations of x and y.
The range of acceptable values of the linear pair correlation coefficient is from –1 to +1. The sign of the correlation coefficient indicates the direction of the relationship. If r_x,y > 0, the relationship is direct; if r_x,y < 0, the relationship is inverse.
If this coefficient is close to unity in absolute value, then the relationship between the features can be interpreted as a fairly close linear one. If its modulus equals one, |r_x,y| = 1, then the relationship between the features is a functional linear one. If the features x and y are linearly independent, then r_x,y is close to 0.
To calculate r_x,y, you can also use Table 1.

Table 1

N observation | xᵢ | yᵢ | xᵢ·yᵢ
1 | x₁ | y₁ | x₁·y₁
2 | x₂ | y₂ | x₂·y₂
... | ... | ... | ...
n | xₙ | yₙ | xₙ·yₙ
Column sum | Σx | Σy | Σxy
Average value | x̄ | ȳ |
To assess the quality of the resulting regression equation, the theoretical coefficient of determination R²_yx is calculated:

R²_yx = d² / s²_y = 1 − e² / s²_y,

where d² is the variance of y explained by the regression equation;
e² is the residual (unexplained by the regression equation) variance of y;
s²_y is the total variance of y.
The coefficient of determination characterizes the share of the variation (variance) of the resultant attribute y explained by the regression (and, consequently, by the factor x) in the total variation (variance) of y. The coefficient of determination R²_yx takes values from 0 to 1. Accordingly, the value 1 − R²_yx characterizes the share of the variance of y caused by the influence of other factors not taken into account in the model, and by specification errors.
With paired linear regression, R²_yx = r²_yx.
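A compact C++ sketch tying these quantities together: it fits y = a + b·x, then computes the pair correlation coefficient and the coefficient of determination, so one can see that R² equals r² for paired linear regression; the observations are placeholders.

#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Placeholder observations (not data from the text).
    std::vector<double> x = {1, 2, 3, 4, 5, 6};
    std::vector<double> y = {3.1, 3.9, 5.2, 5.8, 7.1, 7.8};
    const double n = static_cast<double>(x.size());

    double sx = 0, sy = 0, sxy = 0, sxx = 0, syy = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i];
        sxy += x[i] * y[i];
        sxx += x[i] * x[i];
        syy += y[i] * y[i];
    }
    // Regression y = a + b*x.
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double a = (sy - b * sx) / n;
    // Linear pair correlation coefficient.
    double r = (n * sxy - sx * sy) /
               std::sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    // Coefficient of determination: explained share of the variance of y.
    double sse = 0, sst = 0, ybar = sy / n;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double yhat = a + b * x[i];
        sse += (y[i] - yhat) * (y[i] - yhat);
        sst += (y[i] - ybar) * (y[i] - ybar);
    }
    double r2 = 1.0 - sse / sst;
    std::cout << "r = " << r << ", R^2 = " << r2 << ", r*r = " << r * r << "\n";
}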

Introduction

I am a mathematician and a programmer. The biggest leap I took in my career was when I learned to say: "I do not understand anything!" Now I am not ashamed to tell a luminary of science who is giving me a lecture that I do not understand what he, the luminary, is telling me. And it is very difficult. Yes, admitting your ignorance is difficult and embarrassing. Who likes to admit that he does not know the basics of something? Due to my profession, I have to attend a large number of presentations and lectures, where, I admit, in the vast majority of cases I want to sleep because I do not understand anything. And I do not understand because the huge problem of the current situation in science lies in mathematics: it assumes that all listeners are familiar with absolutely all areas of mathematics (which is absurd). Admitting that you do not know what a derivative is (we will talk about what it is a little later) is shameful.

But I've learned to say that I don't know what multiplication is. Yes, I don't know what a subalgebra over a Lie algebra is. Yes, I don’t know why quadratic equations are needed in life. By the way, if you are sure that you know, then we have something to talk about! Mathematics is a series of tricks. Mathematicians try to confuse and intimidate the public; where there is no confusion, there is no reputation, no authority. Yes, it is prestigious to speak in as abstract a language as possible, which is complete nonsense.

Do you know what a derivative is? Most likely you will tell me about the limit of the difference quotient. In my first year of mathematics and mechanics at St. Petersburg State University, Viktor Petrovich Khavin defined the derivative for me as the coefficient of the first term of the Taylor series of the function at a point (that was separate gymnastics: defining the Taylor series without derivatives). I laughed at this definition for a long time until I finally understood what it was about. The derivative is nothing more than a simple measure of how similar the function we are differentiating is to the functions y=x, y=x^2, y=x^3.

I now have the honor of lecturing to students who are afraid of mathematics. If you are afraid of mathematics, we are on the same path. As soon as you try to read some text and it seems to you that it is overly complicated, then know that it is poorly written. I assert that there is not a single area of mathematics that cannot be discussed "on the fingers" without losing accuracy.

Assignment for the near future: I assigned my students to understand what a linear quadratic regulator is. Don’t be shy, spend three minutes of your life and follow the link. If you don’t understand anything, then we are on the same path. I (a professional mathematician-programmer) didn’t understand anything either. And I assure you, you can figure this out “on your fingers.” At the moment I don't know what it is, but I assure you that we will be able to figure it out.

So, the first lecture that I am going to give my students after they come running to me in horror and say that a linear-quadratic regulator is a terrible thing that you will never master in your life is the least squares method. Can you solve linear equations? If you are reading this text, then most likely not.

So, given two points (x0, y0), (x1, y1), for example, (1,1) and (3,2), the task is to find the equation of the line passing through these two points:

illustration

This line should have an equation of the following form (with unknown coefficients alpha and beta): y = alpha*x + beta.

Here alpha and beta are unknown to us, but two points of this line are known:

We can write this equation in matrix form:

Here we should make a lyrical digression: what is a matrix? A matrix is ​​nothing more than a two-dimensional array. This is a way of storing data; no further meanings should be attached to it. It depends on us exactly how to interpret a certain matrix. Periodically I will interpret it as a linear mapping, periodically as a quadratic form, and sometimes simply as a set of vectors. This will all be clarified in context.

Let's replace concrete matrices with their symbolic representation:

Then (alpha, beta) can be easily found:

More specifically for our previous data:

Which leads to the following equation of the line passing through the points (1,1) and (3,2):

Okay, everything is clear here. Let's find the equation of the line passing through three points: (x0,y0), (x1,y1) and (x2,y2):

Oh-oh-oh, but we have three equations for two unknowns! A standard mathematician will say that there is no solution. What will the programmer say? And he will first rewrite the previous system of equations in the following form:

In our case, the vectors i, j, b are three-dimensional, therefore (in the general case) there is no solution to this system. Any vector (alpha*i + beta*j) lies in the plane spanned by the vectors (i, j). If b does not belong to this plane, then there is no solution (equality cannot be achieved in the equation). What to do? Let's look for a compromise. Let's denote by e(alpha, beta) exactly how far we have fallen short of equality: e(alpha, beta) = alpha*i + beta*j − b.

And we will try to minimize this error:

Why square?

We are looking not just for the minimum of the norm, but for the minimum of the square of the norm. Why? The minimum point itself coincides, and the square gives a smooth function (a quadratic function of the arguments (alpha, beta)), while simply the length gives a cone-shaped function, non-differentiable at the minimum point. Brr. A square is more convenient.

Obviously, the error is minimized when the vector e is orthogonal to the plane spanned by the vectors i and j.

Illustration

In other words: we are looking for a straight line such that the sum of the squared lengths of the distances from all points to this straight line is minimal:

UPDATE: I have a problem here, the distance to the straight line should be measured vertically, and not by orthogonal projection. This commentator is right.

Illustration

In completely different words (carefully, poorly formalized, but it should be clear): we take all possible lines between all pairs of points and look for the average line between all:

Illustration

Another explanation is straightforward: we attach a spring between all data points (here we have three) and the straight line that we are looking for, and the straight line of the equilibrium state is exactly what we are looking for.

Minimum quadratic form

So, given the vector b and the plane spanned by the column vectors of the matrix A (in this case (x0,x1,x2) and (1,1,1)), we are looking for a vector e with the minimum square of length. Obviously, the minimum is achievable only for a vector e that is orthogonal to the plane spanned by the column vectors of the matrix A: Aᵀ·e(alpha, beta) = 0.

In other words, we are looking for a vector x=(alpha, beta) such that AᵀA·x = Aᵀ·b.

Let me remind you that this vector x=(alpha, beta) is the minimum of the quadratic function ||e(alpha, beta)||^2:
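A possible C++ sketch of this step: form AᵀA and Aᵀb for the line y = alpha*x + beta through several points and solve the resulting 2x2 system; only the first two points, (1,1) and (3,2), come from the text, the third one is a made-up placeholder.

#include <iostream>
#include <vector>

// Fit y = alpha*x + beta by solving the normal equations
// (A^T A) * (alpha, beta)^T = A^T b, where the columns of A are
// (x0..xm-1) and (1..1), and b = (y0..ym-1).
int main() {
    std::vector<double> xs = {1, 3, 4};   // third point is a placeholder
    std::vector<double> ys = {1, 2, 3};

    double ata00 = 0, ata01 = 0, ata11 = 0, atb0 = 0, atb1 = 0;
    for (std::size_t k = 0; k < xs.size(); ++k) {
        ata00 += xs[k] * xs[k];  // column (x) dot column (x)
        ata01 += xs[k];          // column (x) dot column (1)
        ata11 += 1;              // column (1) dot column (1)
        atb0  += xs[k] * ys[k];  // column (x) dot b
        atb1  += ys[k];          // column (1) dot b
    }
    // Solve the 2x2 symmetric system by Cramer's rule.
    double det   = ata00 * ata11 - ata01 * ata01;
    double alpha = (atb0 * ata11 - ata01 * atb1) / det;
    double beta  = (ata00 * atb1 - atb0 * ata01) / det;
    std::cout << "y = " << alpha << "*x + " << beta << "\n";
}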

Here it would be useful to remember that the matrix can be interpreted also as a quadratic form, for example, the identity matrix ((1,0),(0,1)) can be interpreted as a function x^2 + y^2:
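Written out explicitly (nothing here beyond the definition of a quadratic form):

(x  y) · ((1, 0), (0, 1)) · (x, y)ᵀ = x·1·x + y·1·y = x² + y².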

quadratic form

All this gymnastics is known under the name linear regression.

Laplace's equation with Dirichlet boundary condition

Now the simplest real task: there is a certain triangulated surface, it is necessary to smooth it. For example, let's load a model of my face:

The original commit is available. To minimize external dependencies, I took the code of my software renderer, already on Habré. To solve a linear system, I use OpenNL, this is an excellent solver, which, however, is very difficult to install: you need to copy two files (.h+.c) to the folder with your project. All smoothing is done with the following code:

for (int d=0; d<3; d++) {
    nlNewContext();
    nlSolverParameteri(NL_NB_VARIABLES, verts.size());
    nlSolverParameteri(NL_LEAST_SQUARES, NL_TRUE);
    nlBegin(NL_SYSTEM);
    nlBegin(NL_MATRIX);
    // one row per vertex: the new coordinate should stay close to the old one
    for (int i=0; i<(int)verts.size(); i++) {
        nlBegin(NL_ROW);
        nlCoefficient(i, 1);
        nlRightHandSide(verts[i][d]);
        nlEnd(NL_ROW);
    }
    // one row per triangle edge: the two endpoints are pulled towards each other
    for (unsigned int i=0; i<faces.size(); i++) {
        std::vector<int> &face = faces[i];
        for (int j=0; j<3; j++) {
            nlBegin(NL_ROW);
            nlCoefficient(face[ j ], 1);
            nlCoefficient(face[(j+1)%3], -1);
            nlEnd(NL_ROW);
        }
    }
    nlEnd(NL_MATRIX);
    nlEnd(NL_SYSTEM);
    nlSolve();
    for (int i=0; i<(int)verts.size(); i++) {
        verts[i][d] = nlGetVariable(i);
    }
}

X, Y and Z coordinates are separable, I smooth them separately. That is, I solve three systems of linear equations, each with a number of variables equal to the number of vertices in my model. The first n rows of matrix A have only one 1 per row, and the first n rows of vector b have the original model coordinates. That is, I tie a spring between the new position of the vertex and the old position of the vertex - the new ones should not move too far from the old ones.

All subsequent rows of the matrix A (faces.size()*3 = the number of edges of all triangles in the mesh) have one occurrence of 1 and one occurrence of -1, and the corresponding components of the vector b are zero. This means I put a spring on each edge of our triangular mesh: all edges try to have the same starting and ending vertex.

Once again: all vertices are variables, and they cannot move far from their original position, but at the same time they try to become similar to each other.

Here's the result:

Everything would be fine, the model is really smoothed, but it has moved away from its original edge. Let's change the code a little:

for (int i=0; i<(int)verts.size(); i++) {
    float scale = border[i] ? 1000 : 1;   // border vertices get a much stiffer spring
    nlBegin(NL_ROW);
    nlCoefficient(i, scale);
    nlRightHandSide(scale*verts[i][d]);
    nlEnd(NL_ROW);
}

In our matrix A, for the vertices that are on the border, I add not a row of the kind v_i = verts[i][d], but 1000*v_i = 1000*verts[i][d]. What does it change? It changes our quadratic form of the error. Now a unit deviation of a border vertex will cost not one unit, as before, but 1000*1000 units. That is, we hung a stiffer spring on the border vertices, so the solution will prefer to stretch the other ones more strongly. Here's the result:

Let's double the spring strength between the vertices:
nlCoefficient(face[ j ], 2); nlCoefficient(face[(j+1)%3], -2);

It is logical that the surface has become smoother:

And now even a hundred times stronger:

What is this? Imagine that we have dipped a wire ring in soapy water. The resulting soap film will try to have as little curvature as possible while touching the border - our wire ring. This is exactly what we got by fixing the border and asking for a smooth surface inside. Congratulations, we have just solved Laplace's equation with Dirichlet boundary conditions. Sounds cool? But in reality, you just need to solve one system of linear equations.

Poisson's equation

Let's remember another cool name.

Let's say I have an image like this:

Looks good to everyone, but I don’t like the chair.

I'll cut the picture in half:



And I will select a chair with my hands:

Then I will pull everything that is white in the mask to the left side of the picture, and at the same time throughout the picture I will say that the difference between two neighboring pixels should be equal to the difference between two neighboring pixels of the right picture:

For (int i=0; i
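The original snippet is cut off at this point, so below is only a rough sketch, modeled on the smoothing code above and using the same OpenNL calls, of how such rows might be assembled; the image dimensions, the mask and the pixel arrays (left, right, result) are placeholder names, not the author's code.

// rough sketch only: anchor masked pixels to the left image, and ask
// neighbouring pixels everywhere to keep the same differences as in the right image
for (int d = 0; d < 3; d++) {
    nlNewContext();
    nlSolverParameteri(NL_NB_VARIABLES, width*height);
    nlSolverParameteri(NL_LEAST_SQUARES, NL_TRUE);
    nlBegin(NL_SYSTEM);
    nlBegin(NL_MATRIX);
    for (int p = 0; p < width*height; p++) {      // data term: white mask pixels
        if (!mask[p]) continue;                   // are pulled to the left image
        nlBegin(NL_ROW);
        nlCoefficient(p, 1);
        nlRightHandSide(left[p][d]);
        nlEnd(NL_ROW);
    }
    for (int p = 0; p+1 < width*height; p++) {    // gradient term: the difference of
        nlBegin(NL_ROW);                          // horizontal neighbours should
        nlCoefficient(p,   1);                    // match the right image
        nlCoefficient(p+1, -1);
        nlRightHandSide(right[p][d] - right[p+1][d]);
        nlEnd(NL_ROW);
    }
    nlEnd(NL_MATRIX);
    nlEnd(NL_SYSTEM);
    nlSolve();
    for (int p = 0; p < width*height; p++) result[p][d] = nlGetVariable(p);
}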

Here's the result:

Code and pictures available

The least squares method finds the widest application in various fields of science and practical activity. This could be physics, chemistry, biology, economics, sociology, psychology, and so on and so forth. By the will of fate, I often have to deal with economics, and therefore today I will arrange for you a trip to an amazing country called Econometrics =) ...How can you not want that?! It is very good there - you just need to make up your mind! ...But what you probably definitely want is to learn how to solve problems by the least squares method. And especially diligent readers will learn to solve them not only accurately but also VERY QUICKLY ;-) But first, the general statement of the problem plus an accompanying example:

Let us study indicators in a certain subject area that have a quantitative expression. At the same time, there is every reason to believe that the indicator "y" depends on the indicator "x". This assumption can be either a scientific hypothesis or based on basic common sense. Let's leave science aside, however, and explore more appetizing areas - namely, grocery stores. Let us denote by:

x – the retail area of a grocery store, sq. m.,
y – the annual turnover of a grocery store, million rubles.

It is absolutely clear that the larger the store area, the greater in most cases its turnover will be.

Suppose that after carrying out observations/experiments/calculations/dances with a tambourine we have numerical data at our disposal:

With grocery stores, I think everything is clear: x₁ is the area of the 1st store, y₁ is its annual turnover, x₂ is the area of the 2nd store, y₂ is its annual turnover, etc. By the way, it is not at all necessary to have access to classified materials - a fairly accurate estimate of trade turnover can be obtained by means of mathematical statistics. However, let's not get distracted; the commercial espionage course is a paid one =)

The tabular data can also be written in the form of points (xᵢ, yᵢ) and depicted in the familiar Cartesian coordinate system.

Let's answer an important question: How many points are needed for a qualitative study?

The bigger, the better. The minimum acceptable set consists of 5-6 points. In addition, when the amount of data is small, “anomalous” results cannot be included in the sample. So, for example, a small elite store can earn orders of magnitude more than “its colleagues,” thereby distorting the general pattern that you need to find!

To put it very simply, we need to select a function whose graph passes as close as possible to the points (xᵢ, yᵢ). Such a function is called an approximating (approximation = "bringing closer") or theoretical function. Generally speaking, an obvious "contender" immediately appears here: a high-degree polynomial whose graph passes through ALL the points. But this option is complicated and often simply incorrect (since the graph will "loop" all the time and poorly reflect the main trend).

Thus, the sought function must be quite simple and at the same time reflect the dependence adequately. As you might guess, one of the methods for finding such functions is called the least squares method. First, let's look at its essence in general terms. Let some function y = f(x) approximate the experimental data:


How can we evaluate the accuracy of this approximation? Let us also calculate the differences (deviations) yᵢ − f(xᵢ) between the experimental and functional values (we study the drawing). The first thought that comes to mind is to estimate how large the sum Σ(yᵢ − f(xᵢ)) is, but the problem is that the differences can be negative (for example, for a point lying below the graph), and such deviations will cancel each other out during summation. Therefore, as an estimate of the accuracy of the approximation, the sum of the moduli of the deviations suggests itself:

|y₁ − f(x₁)| + |y₂ − f(x₂)| + … + |yₙ − f(xₙ)|, or, in collapsed form: Σ |yᵢ − f(xᵢ)| (in case anyone does not know: Σ is the summation sign, and i is an auxiliary "counter" variable that takes values from 1 to n).

By approximating the experimental points with different functions, we will obtain different values of this sum, and obviously, the function for which this sum is smaller is more accurate.

Such a method exists, and it is called the least modulus method. However, in practice the least squares method has become much more widespread; in it, possible negative values are eliminated not by the modulus but by squaring the deviations:

Σ (yᵢ − f(xᵢ))², after which efforts are aimed at selecting a function f(x) such that this sum of squared deviations is as small as possible. Actually, this is where the name of the method comes from.

And now we return to another important point: as noted above, the selected function should be quite simple - but there are also many such functions: linear , hyperbolic, exponential, logarithmic, quadratic etc. And, of course, here I would immediately like to “reduce the field of activity.” Which class of functions should I choose for research? A primitive but effective technique:

– The easiest way is to plot the points on the drawing and analyze their location. If they tend to lie along a straight line, then you should look for the equation of a straight line y = ax + b with optimal values of a and b. In other words, the task is to find SUCH coefficients that the sum of squared deviations is the smallest.

If the points are located, for example, along a hyperbola, then it is obviously clear that a linear function will give a poor approximation. In this case, we look for the most "favorable" coefficients of the hyperbola equation - those that give the minimum sum of squared deviations.

Now note that in both cases we are talking about a function of two variables whose arguments are the sought parameters of the dependence; in the linear case, for example, S(a, b) = Σ (yᵢ − (a·xᵢ + b))².

And essentially we need to solve a standard problem: to find the minimum of a function of two variables.

Let us recall our example: suppose that the "store" points tend to lie along a straight line and that there is every reason to believe a linear dependence of turnover on retail space. Let us find SUCH coefficients "a" and "b" that the sum of squared deviations S(a, b) is the smallest. Everything is as usual: first, the first-order partial derivatives. According to the linearity rule, you can differentiate right under the summation sign:

∂S/∂a = −2·Σ xᵢ·(yᵢ − a·xᵢ − b),  ∂S/∂b = −2·Σ (yᵢ − a·xᵢ − b).

If you want to use this information for an essay or term paper, I will be very grateful for the link in the list of sources; you will find such detailed calculations in few places:

Let us create the standard system:

−2·Σ xᵢ·(yᵢ − a·xᵢ − b) = 0
−2·Σ (yᵢ − a·xᵢ − b) = 0

We reduce each equation by "two" and, in addition, "break up" the sums:

Σ xᵢyᵢ − a·Σ xᵢ² − b·Σ xᵢ = 0
Σ yᵢ − a·Σ xᵢ − b·n = 0

Note: independently analyze why "a" and "b" can be taken out of the summation sign. By the way, formally this can also be done with the sum Σ b = n·b.

Let us rewrite the system in "applied" form:

a·Σ xᵢ² + b·Σ xᵢ = Σ xᵢyᵢ
a·Σ xᵢ + b·n = Σ yᵢ

after which the algorithm for solving our problem begins to emerge:

Do we know the coordinates of the points? We do. Can we find the sums? Easily. We compose the simplest system of two linear equations in two unknowns ("a" and "b"). We solve the system, for example, by Cramer's method, and as a result obtain a stationary point. By checking the sufficient condition for an extremum, we can verify that at this point the function reaches exactly a minimum. The check involves additional calculations, and therefore we will leave it behind the scenes (if necessary, the missing frame can be viewed). We draw the final conclusion:
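A brief C++ sketch of the algorithm just described: accumulate the sums, form the system in its "applied" form and solve it by Cramer's method; the data points are placeholders, not the table from the task below.

#include <iostream>
#include <vector>

int main() {
    // Placeholder observations, only to show the procedure.
    std::vector<double> x = {1, 2, 3, 4, 5, 6};
    std::vector<double> y = {5.1, 4.6, 4.1, 3.5, 3.0, 2.6};
    double n = static_cast<double>(x.size());
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i]; sxy += x[i]*y[i]; sxx += x[i]*x[i];
    }
    // System: a*sxx + b*sx = sxy,  a*sx + b*n = sy.
    double det  = sxx * n  - sx * sx;    // main determinant
    double detA = sxy * n  - sx * sy;    // first column replaced by the right-hand side
    double detB = sxx * sy - sxy * sx;   // second column replaced by the right-hand side
    double a = detA / det, b = detB / det;
    std::cout << "y = " << a << "*x + " << b << "\n";
}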

The found function is the best (at least compared to any other linear function) at approximating the experimental points. Roughly speaking, its graph passes as close as possible to these points. In the tradition of econometrics, the resulting approximating function is also called a paired linear regression equation.

The problem under consideration is of great practical importance. In our example situation, the equation allows you to predict what trade turnover ("y") the store will have at one or another value of the sales area (one or another value of "x"). Yes, the resulting forecast will be only a forecast, but in many cases it will turn out to be quite accurate.

I will analyze just one problem with “real” numbers, since there are no difficulties in it - all calculations are at the level of the 7th-8th grade school curriculum. In 95 percent of cases, you will be asked to find just a linear function, but at the very end of the article I will show that it is no more difficult to find the equations of the optimal hyperbola, exponential and some other functions.

In fact, all that remains is to distribute the promised goodies - so that you can learn to solve such examples not only accurately, but also quickly. We carefully study the standard:

Task

As a result of studying the relationship between two indicators, the following pairs of numbers were obtained:

Using the least squares method, find the linear function that best approximates the empirical (experimental) data. Make a drawing on which to plot the experimental points and the graph of the approximating function in a Cartesian rectangular coordinate system. Find the sum of squared deviations between the empirical and theoretical values. Find out whether the proposed function would approximate the experimental points better (from the point of view of the least squares method).

Please note that the "x" values are natural numbers, and this has a characteristic meaningful interpretation, which I will talk about a little later; but they, of course, can also be fractional. In addition, depending on the content of a particular task, both the "x" and the "y" values can be completely or partially negative. Well, we have been given a "faceless" task, and we begin its solution:

We find the coefficients of the optimal function as a solution to the system:

For the purpose of a more compact notation, the "counter" variable can be omitted, since it is already clear that the summation is carried out from 1 to n.

It is more convenient to calculate the required sums in tabular form:


Calculations can be carried out on a microcalculator, but it is much better to use Excel - both faster and without errors; watch a short video:

Thus, we get the following system:

Here you can multiply the second equation by 3 and subtract the 2nd from the 1st equation term by term. But that is luck - in practice, systems are often not a gift, and in such cases Cramer's method comes to the rescue:
Since the main determinant is nonzero, the system has a unique solution.

Let's check. I understand that you don't want to, but why let errors slip through where they can be avoided completely? Let us substitute the found solution into the left-hand side of each equation of the system:

The right-hand sides of the corresponding equations are obtained, which means that the system is solved correctly.

Thus, the desired approximating function: of all linear functions, it is the one that best approximates the experimental data.

Unlike the direct dependence of the store's turnover on its area, the found dependence is inverse (the principle "the more, the less"), and this fact is immediately revealed by the negative slope. The function tells us that when a certain indicator increases by 1 unit, the value of the dependent indicator decreases on average by 0.65 units. As they say, the higher the price of buckwheat, the less of it is sold.

To plot the graph of the approximating function, we find its two values:

and execute the drawing:


The constructed straight line is called a trend line (namely, a linear trend line, i.e., in the general case a trend is not necessarily a straight line). Everyone is familiar with the expression "to be in trend," and I think this term needs no additional comment.

Let's calculate the sum of squared deviations between empirical and theoretical values. Geometrically, this is the sum of the squares of the lengths of the “raspberry” segments (two of which are so small that they are not even visible).

Let's summarize the calculations in a table:


Again, they can be done manually; just in case, I’ll give an example for the 1st point:

but it is much more effective to do it in the already known way:

Let us repeat once again: what is the meaning of the obtained result? Of all linear functions, the found function has the smallest value of this indicator, that is, within its family it is the best approximation. And here, by the way, the final question of the problem is not accidental: what if the proposed exponential function would approximate the experimental points better?

Let's find the corresponding sum of squared deviations - to distinguish, I will denote them by the letter “epsilon”. The technique is exactly the same:


And again, just in case, the calculations for the 1st point:

In Excel we use the standard function EXP (syntax can be found in Excel Help).

Conclusion: the "epsilon" sum turned out to be larger, which means that the exponential function approximates the experimental points worse than the straight line.

But here it should be noted that "worse" does not yet mean "bad". Now I have plotted this exponential function, and it also passes close to the points - so much so that without an analytical study it is difficult to say which function is more accurate.

This concludes the solution, and I return to the question of the natural values ​​of the argument. In various studies, usually economic or sociological, natural “X’s” are used to number months, years or other equal time intervals. Consider, for example, the following problem.