Finding the inverse matrix is a problem most often solved by one of two methods:

  • the method of algebraic complements (the adjoint matrix), which requires computing determinants and transposing matrices;
  • Gaussian elimination of unknowns, which requires performing elementary transformations of matrices (adding rows, multiplying a row by a number, and so on).

For the especially curious there are other methods as well, for example the method of linear transformations. In this lesson we analyze the methods mentioned and the algorithms for finding the inverse matrix with them.

The inverse matrix for a given square matrix A is the matrix, denoted A⁻¹, whose product with the matrix A on either side is the identity matrix:

A·A⁻¹ = A⁻¹·A = E. (1)

An identity matrix is a diagonal matrix in which all diagonal elements are equal to one.

Theorem. For every non-singular (non-degenerate) square matrix one can find an inverse matrix, and only one. For a singular (degenerate) square matrix the inverse matrix does not exist.

A square matrix is called non-singular (or non-degenerate) if its determinant is not zero, and singular (or degenerate) if its determinant is zero.

The inverse of a matrix can only be found for a square matrix. Naturally, the inverse matrix will also be square and of the same order as the given matrix. A matrix for which an inverse matrix can be found is called an invertible matrix.

For the inverse matrix there is a close analogy with the reciprocal of a number. For every number a not equal to zero there exists a number b such that the product of a and b equals one: ab = 1. The number b is called the reciprocal of the number a. For example, for the number 7 the reciprocal is 1/7, since 7·(1/7) = 1.

Finding the inverse matrix using the method of algebraic complements (the adjoint matrix)

For a non-singular square matrix A the inverse is the matrix

A⁻¹ = (1/det A)·Ã, (2)

where det A is the determinant of the matrix A, and Ã is the matrix adjoint to A.

The matrix adjoint to a square matrix A is the matrix of the same order whose elements are the algebraic complements of the corresponding elements of the determinant of the matrix transposed with respect to A. Thus, if A = (aᵢⱼ) and Aᵢⱼ denotes the algebraic complement of the element aᵢⱼ, then à = (Aⱼᵢ).

Algorithm for finding the inverse matrix using the method of algebraic additions

1. Find the determinant of the given matrix A. If the determinant is zero, the search for the inverse matrix stops: the matrix is singular and its inverse does not exist.

2. Find the matrix transposed with respect to A.

3. Compute the elements of the adjoint matrix as the algebraic complements of the matrix found in step 2.

4. Apply formula (2): multiply the adjoint matrix found in step 3 by the reciprocal of the determinant of A.

5. Check the result obtained in step 4 by multiplying the given matrix A by the inverse matrix. If the product of these matrices equals the identity matrix, the inverse matrix was found correctly. Otherwise start the solution process over.
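Before turning to the worked examples, here is a minimal sketch of this algorithm in Python with NumPy. The test matrix at the bottom is my own illustration, not the matrix of Example 1 below.

```python
import numpy as np

def inverse_via_cofactors(a: np.ndarray) -> np.ndarray:
    """Invert a square matrix by the method of algebraic complements."""
    n = a.shape[0]
    det = np.linalg.det(a)                     # step 1: determinant
    if np.isclose(det, 0.0):
        raise ValueError("matrix is singular, no inverse exists")
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # algebraic complement: signed determinant of the minor
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / det                         # steps 2-4: adjoint / det

a = np.array([[2.0, 5.0, 7.0],                 # illustrative matrix (my own)
              [6.0, 3.0, 4.0],
              [5.0, -2.0, -3.0]])
inv = inverse_via_cofactors(a)
print(np.allclose(a @ inv, np.eye(3)))         # step 5, the check: True
```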

Example 1. For matrix

find the inverse matrix.

Solution. To find the inverse matrix we need the determinant of the matrix A. We compute it by the rule of triangles:

Therefore the matrix A is non-singular (non-degenerate), and an inverse exists for it.

Let us find the matrix adjoint to the given matrix A.

We find the matrix transposed with respect to the matrix A:

We compute the elements of the adjoint matrix as the algebraic complements of the matrix transposed with respect to A:

Therefore the matrix adjoint to A has the form

Comment. The order in which the elements are computed and the matrix is transposed may differ: one can first compute the algebraic complements of the matrix A itself and then transpose the matrix of complements. The result should be the same elements of the adjoint matrix.

Applying formula (2), we find the matrix inverse to the matrix A:

Finding the inverse matrix by Gaussian elimination of unknowns

The first step in finding the inverse matrix by Gaussian elimination is to append to the matrix A the identity matrix of the same order, separating them with a vertical bar. The result is the dual matrix (A|E). Multiplying both blocks of this matrix on the left by A⁻¹ gives

A⁻¹·(A|E) = (E|A⁻¹),

so reducing the left block to the identity matrix leaves the inverse matrix in the right block.

Algorithm for finding the inverse matrix by Gaussian elimination of unknowns

1. Append to the matrix A the identity matrix of the same order.

2. Transform the resulting dual matrix so that an identity matrix appears in its left block; the right block, in place of the identity matrix, will then automatically contain the inverse matrix. The matrix A in the left block is converted into the identity matrix by elementary matrix transformations.

3. If during the transformation of A into the identity matrix some row or some column of the left block contains only zeros, the determinant of the matrix is zero; consequently the matrix A is singular and has no inverse. In this case the search for the inverse matrix stops.
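The same procedure is easy to sketch in code. Below is a minimal Gauss-Jordan inversion in Python with NumPy; the pivoting strategy and the test matrix are my own additions, not prescribed by the algorithm above.

```python
import numpy as np

def inverse_gauss_jordan(a: np.ndarray) -> np.ndarray:
    """Row-reduce the dual matrix (A|E) until it has the form (E|A^-1)."""
    n = a.shape[0]
    m = np.hstack([a.astype(float), np.eye(n)])   # step 1: the dual matrix
    for col in range(n):
        pivot = col + int(np.argmax(np.abs(m[col:, col])))
        if np.isclose(m[pivot, col], 0.0):
            raise ValueError("matrix is singular, no inverse exists")
        m[[col, pivot]] = m[[pivot, col]]         # bring a non-zero pivot up
        m[col] /= m[col, col]                     # make the pivot equal to 1
        for row in range(n):
            if row != col:                        # clear the rest of the column
                m[row] -= m[row, col] * m[col]
    return m[:, n:]                               # right block is now A^-1

a = np.array([[2.0, 1.0], [5.0, 3.0]])            # illustrative matrix (my own)
print(inverse_gauss_jordan(a))                    # A^-1 is [[3, -1], [-5, 2]]
```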

Example 2. For matrix

find the inverse matrix.

Solution. We compile the dual matrix (A|E) and transform it so that the identity matrix appears in its left block. We begin the transformations.

Multiply the first row of the left and right matrices by (−3) and add it to the second row, then multiply the first row by (−4) and add it to the third row; we get

To avoid fractional numbers in the subsequent transformations where possible, we first create a one in the second row of the left block of the dual matrix. To do this, multiply the second row by 2 and subtract the third row from it; we get

Add the second row to the first, then multiply the second row by (−9) and add it to the third row; we get

Divide the third row by 8:

Multiply the third row by 2 and add it to the second row; we get

Swap the second and third rows, and we finally obtain

We see that the left block contains the identity matrix; consequently, the right block contains the inverse matrix. Thus

The correctness of the calculation can be checked by multiplying the original matrix by the inverse found:

The result should be the identity matrix.

Example 3. For matrix

find the inverse matrix.

Solution. We compile the dual matrix (A|E)

and transform it.

Multiply the first row by 3 and the second by 2, and subtract the first from the second; then multiply the first row by 5 and the third by 2, and subtract the first from the third. We get

Multiply the first row by 2 and add it to the second, then subtract the second row from the third. We get

We see that in the third row of the left block all elements are zero. Therefore the matrix is singular and has no inverse matrix. We stop the search for the inverse matrix here.

Definition 1: a matrix is called singular if its determinant is zero.

Definition 2: a matrix is called non-singular if its determinant is not equal to zero.

A matrix A⁻¹ is called the inverse of a matrix A if the condition A·A⁻¹ = A⁻¹·A = E (the identity matrix) is satisfied.

A square matrix is invertible only if it is non-singular.

Scheme for calculating the inverse matrix:

1) Calculate the determinant of the matrix A; if det A = 0, the inverse matrix does not exist.

2) Find all algebraic complements of the matrix A.

3) Form the matrix of algebraic complements (Aij).

4) Transpose the matrix of algebraic complements: (Aij)T.

5) Multiply the transposed matrix by the reciprocal of the determinant of the matrix.

6) Perform the check: A·A⁻¹ = E.

At first glance it may seem complicated, but in fact everything is very simple. All solutions are based on simple arithmetic operations, the main thing when solving is not to get confused with the “-” and “+” signs and not to lose them.

Now let’s solve a practical task together by calculating the inverse matrix.

Task: find the inverse matrix "A" shown in the picture below:

We solve everything exactly as indicated in the plan for calculating the inverse matrix.

1. The first thing to do is to find the determinant of matrix "A":

Explanation:

We simplified our determinant using its basic properties. First, we added to the 2nd and 3rd rows the elements of the first row, multiplied by a suitable number.

Second, we swapped the 2nd and 3rd columns of the determinant, which by its properties changes its sign.

Third, we factored out the common factor (−1) from the second row, which changed the sign once more and made it positive. We also simplified row 3 in the same way as at the very beginning of the example.

We obtained a triangular determinant, whose elements below the diagonal are zero, and by property 7 it equals the product of the diagonal elements. In the end we got det A = 26, therefore the inverse matrix exists.

2. Next we find the algebraic complements of the matrix A:

A11 = 1*(3+1) = 4

A12 = -1*(9+2) = -11

A13 = 1*1 = 1

A21 = -1*(-6) = 6

A22 = 1*(3-0) = 3

A23 = -1*(1+4) = -5

A31 = 1*2 = 2

A32 = -1*(-1) = -1

A33 = 1*(1+6) = 7

3. The next step is to compile a matrix from the complements obtained:

4. We transpose this matrix:

5. We multiply the transposed matrix by the reciprocal of the determinant, that is, by 1/26:

6. Now we just need to check:

During the test, we received an identity matrix, therefore, the solution was carried out absolutely correctly.

The second way to calculate the inverse matrix:

1. Elementary matrix transformations.

2. The inverse matrix via elementary transformations.

Elementary matrix transformations include:

1. Multiplying a row by a number that is not equal to zero.

2. Adding to any row another row multiplied by a number.

3. Swapping the rows of the matrix.

4. Applying a chain of elementary transformations, we obtain an equivalent matrix.

A⁻¹ = ?

1. (A|E) ~ (E|A⁻¹)

2. A⁻¹·A = E
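As a quick numeric illustration of the scheme (A|E) ~ (E|A⁻¹) before the exercise (the 2×2 matrix here is my own, not the one from the task below):

$$\left(\begin{array}{cc|cc} 2 & 1 & 1 & 0\\ 1 & 1 & 0 & 1 \end{array}\right) \xrightarrow{R_1-R_2} \left(\begin{array}{cc|cc} 1 & 0 & 1 & -1\\ 1 & 1 & 0 & 1 \end{array}\right) \xrightarrow{R_2-R_1} \left(\begin{array}{cc|cc} 1 & 0 & 1 & -1\\ 0 & 1 & -1 & 2 \end{array}\right),$$

so for A = [[2, 1], [1, 1]] the right block gives A⁻¹ = [[1, −1], [−1, 2]], and indeed A⁻¹·A = E.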

Let's look at this using a practical example with real numbers.

Exercise: Find the inverse matrix.

Solution:

Let's check:

A little clarification on the solution:

First, we swapped rows 1 and 2 of the matrix, then multiplied the first row by (−1).

After that, we multiplied the first row by (−2) and added it to the second row of the matrix. Then we multiplied row 2 by 1/4.

The final stage of the transformation was multiplying the second row by 2 and adding it to the first. As a result we have the identity matrix on the left; therefore the inverse matrix is the matrix on the right.

After checking, we were convinced that the solution was correct.

As you can see, calculating the inverse matrix is ​​very simple.

At the end of this lecture, I would also like to spend a little time on the properties of such a matrix.

Let a square matrix be given. We need to find the inverse matrix.

First way. Theorem 4.1 on the existence and uniqueness of the inverse matrix indicates one of the ways to find it.

1. Calculate the determinant of the given matrix. If det A = 0, the inverse matrix does not exist (the matrix is singular).

2. Construct the matrix of algebraic complements of the matrix elements.

3. Transpose that matrix to obtain the adjoint matrix.

4. Find the inverse matrix by formula (4.1): divide all the elements of the adjoint matrix by the determinant.

Second way. To find the inverse matrix one can use elementary transformations.

1. Construct the block matrix (A|E) by appending to the given matrix the identity matrix of the same order.

2. Using elementary transformations performed on the rows of this matrix, bring its left block to its simplest form (denote it Λ). The block matrix is thereby reduced to the form (Λ|S), where S is a square matrix obtained from the identity matrix as a result of the transformations.

3. If Λ = E, then the block S equals the inverse of the matrix, i.e. A⁻¹ = S. If Λ ≠ E, the matrix has no inverse.

Indeed, with the help of elementary row transformations the left block can always be brought to a simplified form (see Fig. 1.5). The block matrix (A|E) is thereby transformed to (Λ|S), where S is an elementary matrix satisfying the equality SA = Λ. If the matrix is non-degenerate, then according to paragraph 2 of Remark 3.3 its simplified form coincides with the identity matrix, Λ = E; from SA = E it then follows that S = A⁻¹. If the matrix is singular, its simplified form differs from the identity matrix, and the matrix has no inverse.

11. Matrix equations and their solution. Matrix form of writing an SLAE. The matrix method (inverse matrix method) of solving an SLAE and the conditions for its applicability.

Matrix equations are equations of the form A·X = C, X·A = C, or A·X·B = C, where the matrices A, B, C are known and the matrix X is unknown. If the matrices A and B are non-singular, the solutions of these equations are written accordingly as X = A⁻¹·C, X = C·A⁻¹, X = A⁻¹·C·B⁻¹.

Matrix form of writing systems of linear algebraic equations. Several matrices can be associated with each SLAE; moreover, the SLAE itself can be written as a matrix equation. For SLAE (1), consider the following matrices:

The matrix A is called the matrix of the system. The elements of this matrix are the coefficients of the given SLAE.

The matrix A˜ is called the extended matrix of the system. It is obtained by adding to the system matrix a column containing the free terms b1, b2, ..., bm. Usually this column is separated by a vertical line for clarity.

The column matrix B is called the matrix of free terms, and the column matrix X the matrix of unknowns.

Using the notation introduced above, SLAE (1) can be written in the form of a matrix equation: A⋅X=B.

Note

The matrices associated with the system can be written in various ways: everything depends on the order of the variables and equations of the SLAE under consideration. But in any case, the order of the unknowns in each equation of a given SLAE must be the same.

The matrix method is suitable for solving SLAEs in which the number of equations coincides with the number of unknown variables and the determinant of the main matrix of the system is nonzero. If the system contains more than three equations, finding the inverse matrix requires significant computational effort, so in that case it is advisable to use the Gaussian method.
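A minimal sketch of the matrix method in Python with NumPy (the 2×2 system here is my own illustration):

```python
import numpy as np

a = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # system matrix, det A = 5 != 0
b = np.array([5.0, 10.0])       # column of free terms

x = np.linalg.inv(a) @ b        # the matrix method: X = A^-1 * B
print(x)                        # [1. 3.]

# In practice np.linalg.solve is preferred: it performs Gaussian
# elimination instead of forming the inverse, as advised above.
print(np.linalg.solve(a, b))    # [1. 3.]
```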

12. Homogeneous SLAEs, conditions for the existence of their non-zero solutions. Properties of partial solutions of homogeneous SLAEs.

A linear equation is called homogeneous if its free term is equal to zero, and inhomogeneous otherwise. A system consisting of homogeneous equations is called homogeneous and has the general form:

13. The concept of linear independence and dependence of partial solutions of a homogeneous SLAE. Fundamental system of solutions (FSR) and its determination. Representation of the general solution of a homogeneous SLAE through the FSR.

A system of functions y1(x), y2(x), …, yn(x) is called linearly dependent on an interval (a, b) if there exists a set of constant coefficients, not all zero at once, such that the linear combination of these functions is identically equal to zero on (a, b). If such an equality is possible only when all the coefficients are zero, the system of functions y1(x), y2(x), …, yn(x) is called linearly independent on the interval (a, b). In other words, the functions are linearly dependent on (a, b) if there exists a non-trivial linear combination of them identically equal to zero on (a, b), and linearly independent if only their trivial linear combination is identically equal to zero on (a, b).

A fundamental system of solutions (FSR) of a homogeneous SLAE is a basis of its system of solution columns.

The number of elements in the FSR equals the number of unknowns of the system minus the rank of the system matrix. Any solution of the original system is a linear combination of the solutions of the FSR.

Theorem

The general solution of a non-homogeneous SLAE equals the sum of a particular solution of the non-homogeneous SLAE and the general solution of the corresponding homogeneous SLAE.

1. If the columns are solutions of a homogeneous system of equations, then any linear combination of them is also a solution of the homogeneous system.

Indeed, if A·x₁ = 0, …, A·xₖ = 0, then A·(α₁x₁ + … + αₖxₖ) = α₁A·x₁ + … + αₖA·xₖ = 0,

i.e. a linear combination of solutions is a solution of the homogeneous system.

2. If the rank of the matrix of a homogeneous system equals r, then the system has n − r linearly independent solutions.

Indeed, using formulas (5.13) for the general solution of a homogeneous system, we find n − r particular solutions, giving the free variables the following standard sets of values (each time assuming that one of the free variables equals one and the rest equal zero):

These solutions are linearly independent. In fact, if we form a matrix from these columns, its last n − r rows form the identity matrix. Consequently, the minor located in the last rows is nonzero (it equals one), i.e. it is a basis minor, and the rank of the matrix equals n − r. Hence all the columns of this matrix are linearly independent (see Theorem 3.4).

Any collection of n − r linearly independent solutions of a homogeneous system is called a fundamental system (set) of solutions.
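A sketch of how an FSR can be computed in practice with SymPy (the system matrix is my own example):

```python
from sympy import Matrix

# Homogeneous system A·x = 0 with rank 1 and 3 unknowns,
# so the FSR must contain 3 - 1 = 2 solutions.
a = Matrix([[1, 2, 3],
            [2, 4, 6]])

fsr = a.nullspace()          # basis of the solution space, i.e. an FSR
for v in fsr:
    print(list(v))           # two linearly independent solution columns
```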

14. Minor of the k-th order, basis minor, rank of a matrix. Calculating the rank of a matrix.

A minor of order k of a matrix A is the determinant of some square submatrix of A of order k.

In an m×n matrix A, a minor of order r is called basic if it is nonzero and all minors of higher order, if they exist, are equal to zero.

The columns and rows of the matrix A, at the intersection of which there is a basis minor, are called the basis columns and rows of A.

Theorem 1. (On the rank of the matrix). For any matrix, the minor rank is equal to the row rank and equal to the column rank.

Theorem 2. (On the basis minor). Each matrix column is decomposed into a linear combination of its basis columns.

The rank of a matrix (or minor rank) is the order of the basis minor or, in other words, the largest order for which non-zero minors exist. The rank of a zero matrix is considered 0 by definition.

Let us note two obvious properties of minor rank.

1) The rank of a matrix does not change during transposition, since when a matrix is ​​transposed, all its submatrices are transposed and the minors do not change.

2) If A’ is a submatrix of matrix A, then the rank of A’ does not exceed the rank of A, since a non-zero minor included in A’ is also included in A.
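These properties are easy to check numerically; a small sketch with NumPy (the matrix is my own illustration):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [2, 4, 6],    # proportional to the first row
              [1, 0, 1]])

print(np.linalg.matrix_rank(a))     # 2: the largest non-zero minor has order 2
print(np.linalg.matrix_rank(a.T))   # 2: rank is unchanged by transposition
```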

15. The concept of an n-dimensional arithmetic vector. Equality of vectors. Operations on vectors (addition, subtraction, multiplication by a number, multiplication by a matrix). Linear combination of vectors.

An ordered collection of n real or complex numbers is called an n-dimensional vector. The numbers are called the coordinates of the vector.

Two (non-zero) vectors a and b are equal if they are codirectional and have the same modulus. All zero vectors are considered equal. In all other cases the vectors are not equal.

Vector addition. There are two ways to add vectors: 1. The parallelogram rule. To add the vectors a and b, we place the origins of both at the same point, complete the figure to a parallelogram, and from the same point draw the diagonal of the parallelogram. This diagonal is the sum of the vectors.

2. The second method of adding vectors is the triangle rule. Take the same vectors a and b and attach the beginning of the second to the end of the first. Now connect the beginning of the first with the end of the second: this is the sum of the vectors a and b. The same rule lets us add several vectors: we arrange them one after another and then connect the beginning of the first to the end of the last.

Subtraction of vectors. The vector −b is directed opposite to the vector b, and their lengths are equal. This makes vector subtraction clear: the difference a − b is the sum of the vector a and the vector −b.

Multiplying a vector by a number

Multiplying a vector by a number k produces a vector whose length is |k| times the length of the original. It is codirectional with the original vector if k is greater than zero, and oppositely directed if k is less than zero.

The scalar product of vectors is the product of the lengths of the vectors and the cosine of the angle between them. If the vectors are perpendicular, their scalar product is zero. In coordinates the scalar product of the vectors a and b is expressed as follows:

Linear combination of vectors

A linear combination of the vectors a1, a2, …, an is a vector of the form

c1·a1 + c2·a2 + … + cn·an,

where c1, c2, …, cn are the coefficients of the linear combination. If all the coefficients are zero, the combination is called trivial; otherwise it is non-trivial.
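A tiny sketch of these vector operations in Python with NumPy (the vectors and coefficients are my own illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])    # 3-dimensional arithmetic vectors
b = np.array([0.0, 1.0, 3.0])

print(a + b)                     # addition:    [1. 3. 3.]
print(a - b)                     # subtraction: [ 1.  1. -3.]
print(2 * a - 3 * b)             # non-trivial linear combination: [ 2.  1. -9.]
```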

16. Scalar product of arithmetic vectors. Vector length and angle between vectors. The concept of vector orthogonality.

The scalar product of the vectors a and b is the number a·b = a1·b1 + a2·b2 + … + an·bn.

The scalar product is used for: 1) finding the angle between vectors; 2) finding the projection of a vector; 3) calculating the length of a vector; 4) stating the condition of perpendicularity of vectors.

The length of the segment AB is the distance between the points A and B. The angle between the vectors a and b is the angle α = (a, b), 0 ≤ α ≤ π, through which one vector must be rotated so that its direction coincides with the other vector, provided their origins coincide.

The unit vector (ort) of a vector a is the vector that has unit length and the direction of a.
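A short sketch of these computations with NumPy (the two vectors are my own; they happen to be orthogonal):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([4.0, -3.0])

dot = a @ b                               # scalar product: 3*4 + 4*(-3) = 0
length_a = np.linalg.norm(a)              # |a| = sqrt(a . a) = 5
cos_alpha = dot / (np.linalg.norm(a) * np.linalg.norm(b))
alpha = np.degrees(np.arccos(cos_alpha))  # 90 degrees: orthogonal vectors

print(dot, length_a, alpha)               # 0.0 5.0 90.0
```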

17. System of vectors and its linear combination. The concept of linear dependence and independence of a system of vectors. Theorem on necessary and sufficient conditions for the linear dependence of a system of vectors.

A system of vectors a1,a2,...,an is called linearly dependent if there are numbers λ1,λ2,...,λn such that at least one of them is nonzero and λ1a1+λ2a2+...+λnan=0. Otherwise, the system is called linearly independent.

Two vectors a1 and a2 are called collinear if their directions are the same or opposite.

Three vectors a1, a2 and a3 are called coplanar if they are parallel to some plane.

Geometric criteria for linear dependence:

a) system (a1,a2) is linearly dependent if and only if the vectors a1 and a2 are collinear.

b) system (a1,a2,a3) is linearly dependent if and only if the vectors a1,a2 and a3 are coplanar.

Theorem (necessary and sufficient condition for the linear dependence of a system of vectors). A system of vectors of a vector space is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Corollary 1. A system of vectors of a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Corollary 2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.
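In practice linear dependence is easy to test via the rank of the matrix formed from the vectors; a sketch with NumPy (the vectors are my own illustration):

```python
import numpy as np

a1 = np.array([1.0, 2.0, 3.0])
a2 = np.array([2.0, 4.0, 6.0])   # collinear with a1: a2 = 2*a1
a3 = np.array([0.0, 1.0, 0.0])

m = np.column_stack([a1, a2, a3])
# dependent iff the rank is smaller than the number of vectors
print(np.linalg.matrix_rank(m) < 3)   # True: the system is linearly dependent
```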

For any non-singular matrix A there exists a unique matrix A⁻¹ such that

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix of the same order as A. The matrix A⁻¹ is called the inverse of the matrix A.

In case someone forgot: in the identity matrix all positions outside the diagonal of ones are filled with zeros. An example of an identity matrix:

Finding the inverse matrix using the adjoint matrix method

The inverse matrix is defined by the formula

A⁻¹ = (1/det A)·(Aij)ᵀ,

where Aij are the algebraic complements of the elements aij.

That is, to calculate the inverse matrix you need to compute the determinant of the given matrix, then find the algebraic complements of all its elements and compose a new matrix from them. Next you need to transpose this matrix and divide each element of the new matrix by the determinant of the original matrix.

Let's look at a few examples.

Find A -1 for a matrix

Solution. Let's find A -1 using the adjoint matrix method. We have det A = 2. Let us find the algebraic complements of the elements of matrix A. In this case, the algebraic complements of the matrix elements will be the corresponding elements of the matrix itself, taken with a sign in accordance with the formula

We have A 11 = 3, A 12 = -4, A 21 = -1, A 22 = 2. We form the adjoint matrix

We transpose the matrix A*:

We find the inverse matrix using the formula:

We get:

Using the adjoint matrix method, find A -1 if

Solution. First of all we calculate the determinant of the matrix to verify that the inverse matrix exists. We have

Here we added to the elements of the second row the elements of the third row multiplied beforehand by (−1), and then expanded the determinant along the second row. Since the determinant of the matrix is nonzero, its inverse matrix exists. To construct the adjoint matrix we find the algebraic complements of the elements of the matrix. We have

According to the formula

we transpose the matrix A*:

Then according to the formula

Finding the inverse matrix using the method of elementary transformations

In addition to the method of finding the inverse matrix, which follows from the formula (the adjoint matrix method), there is a method for finding the inverse matrix, called the method of elementary transformations.

Elementary matrix transformations

The following transformations are called elementary matrix transformations:

1) rearrangement of rows (columns);

2) multiplying a row (column) by a number other than zero;

3) adding to the elements of a row (column) the corresponding elements of another row (column), previously multiplied by a certain number.

To find the matrix A⁻¹ we construct the rectangular matrix B = (A|E) of size n × 2n, appending to the matrix A on the right the identity matrix E across a dividing line:

Let's look at an example.

Using the method of elementary transformations, find A -1 if

Solution. We form matrix B:

Let us denote the rows of matrix B by α 1, α 2, α 3. Let us perform the following transformations on the rows of matrix B.



Properties of an inverse matrix

  • det A⁻¹ = 1/det A, where det denotes the determinant.
  • (AB)⁻¹ = B⁻¹A⁻¹ for any two square invertible matrices A and B.
  • (Aᵀ)⁻¹ = (A⁻¹)ᵀ, where (…)ᵀ denotes the transposed matrix.
  • (kA)⁻¹ = k⁻¹A⁻¹ for any coefficient k ≠ 0.
  • E⁻¹ = E.
  • If a system of linear equations Ax = b must be solved (b is a non-zero vector), where x is the desired vector, and A⁻¹ exists, then x = A⁻¹b. Otherwise either the dimension of the solution space is greater than zero, or there are no solutions at all.
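These identities are easy to spot-check numerically; a small sketch with NumPy on random invertible matrices (my own test harness, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))    # random matrices are invertible almost surely
b = rng.normal(size=(4, 4))
inv = np.linalg.inv

print(np.isclose(np.linalg.det(inv(a)), 1 / np.linalg.det(a)))  # det A^-1 = 1/det A
print(np.allclose(inv(a @ b), inv(b) @ inv(a)))                 # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(a.T), inv(a).T))                          # (A^T)^-1 = (A^-1)^T
print(np.allclose(inv(3 * a), inv(a) / 3))                      # (kA)^-1 = k^-1 A^-1
```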

Methods for finding the inverse matrix

If the matrix is ​​invertible, then to find the inverse matrix you can use one of the following methods:

Exact (direct) methods

Gauss-Jordan method

Let us take two matrices: the matrix A and the identity matrix E. We reduce the matrix A to the identity matrix by the Gauss-Jordan method, applying transformations along the rows (transformations along the columns may also be used, but they must not be mixed). After applying each operation to the first matrix, we apply the same operation to the second. When the reduction of the first matrix to the identity form is complete, the second matrix will be equal to A⁻¹.

When using the Gaussian method, the first matrix is multiplied on the left by one of the elementary matrices Λᵢ (a transvection or a diagonal matrix with ones on the main diagonal except for one position):

Λ₁ ⋯ Λₙ · A = Λ·A = E, whence Λ = A⁻¹.

$$\Lambda_m=\begin{bmatrix}1&\dots&0&-a_{1m}/a_{mm}&0&\dots&0\\&&&\dots&&&\\0&\dots&1&-a_{m-1,m}/a_{mm}&0&\dots&0\\0&\dots&0&1/a_{mm}&0&\dots&0\\0&\dots&0&-a_{m+1,m}/a_{mm}&1&\dots&0\\&&&\dots&&&\\0&\dots&0&-a_{nm}/a_{mm}&0&\dots&1\end{bmatrix}$$

The second matrix after applying all the operations will be equal to Λ, that is, it will be the desired inverse. The complexity of the algorithm is O(n³).

Using the algebraic complement matrix

The matrix inverse to a matrix A can be represented in the form

A⁻¹ = adj(A) / det(A),

where adj(A) is the adjugate (adjoint) matrix.

The complexity of the algorithm depends on the complexity O_det of the algorithm for computing the determinant and is equal to O(n²)·O_det.

Using LU/LUP Decomposition

The matrix equation AX = Iₙ for the inverse matrix X can be considered as a collection of n systems of the form Ax = b. Denote the i-th column of the matrix X by Xᵢ; then AXᵢ = eᵢ for i = 1, …, n, because the i-th column of Iₙ is the unit vector eᵢ. In other words, finding the inverse matrix reduces to solving n equations with the same matrix and different right-hand sides. After the LUP decomposition is performed (O(n³) time), solving each of the n equations takes O(n²) time, so this part of the work also requires O(n³) time.

If the matrix A is non-singular, the LUP decomposition PA = LU can be computed for it. Let PA = B and B⁻¹ = D. Then from the properties of the inverse matrix we can write D = U⁻¹L⁻¹. If we multiply this equality by U and by L, we obtain two equalities of the form UD = L⁻¹ and DL = U⁻¹. The first of them is a system of n² linear equations in n(n+1)/2 unknowns whose right-hand sides are known (from the properties of triangular matrices). The second is a system of n² linear equations in n(n−1)/2 unknowns whose right-hand sides are also known (again from the properties of triangular matrices). Together they form a system of n² equalities, from which all n² elements of the matrix D can be determined recursively. Then from the equality (PA)⁻¹ = A⁻¹P⁻¹ = B⁻¹ = D we obtain A⁻¹ = DP.

In the case of the LU decomposition no permutation of the columns of the matrix D is required, but the solution may diverge even if the matrix A is non-singular.

The complexity of the algorithm is O(n³).
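A sketch of this idea with SciPy: factor once, then solve AXᵢ = eᵢ for all columns at once (the matrix is my own illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

a = np.array([[4.0, 3.0],
              [6.0, 3.0]])              # non-singular: det A = -6

lu, piv = lu_factor(a)                  # LUP decomposition, done once
inv = lu_solve((lu, piv), np.eye(2))    # n solves, right-hand sides e_1..e_n
print(np.allclose(a @ inv, np.eye(2)))  # True
```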

Iterative methods

Schultz methods

$$\begin{cases}\Psi_k = E - AU_k,\\ U_{k+1} = U_k\sum_{i=0}^{n}\Psi_k^i\end{cases}$$

Error estimate

Selecting an Initial Approximation

The problem of choosing an initial approximation in the iterative matrix inversion processes considered here does not allow us to treat them as independent universal methods competing with direct inversion methods based, for example, on the LU decomposition of matrices. There are some recommendations for choosing U₀ that ensure the condition ρ(Ψ₀) < 1 (the spectral radius of the matrix is less than one), which is necessary and sufficient for the convergence of the process. However, in this case, first, one must know an upper bound for the spectrum of the matrix A being inverted or of the matrix AAᵀ: namely, if A is a symmetric positive definite matrix and ρ(A) ≤ β, one can take U₀ = αE, where α ∈ (0, 2/β); if A is an arbitrary non-singular matrix and ρ(AAᵀ) ≤ β, one takes U₀ = αAᵀ, where again α ∈ (0, 2/β). One can of course simplify the situation and, using the fact that ρ(AAᵀ) ≤ ‖AAᵀ‖, put U₀ = Aᵀ/‖AAᵀ‖. Second, with such a choice of the initial matrix there is no guarantee that ‖Ψ₀‖ will be small (it may even turn out that ‖Ψ₀‖ > 1), and a high order of the convergence rate will not show itself immediately.
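A sketch of the simplest (second-order, n = 1) member of this family, the Newton-Schulz iteration, with the initial approximation U₀ = Aᵀ/‖AAᵀ‖ recommended above (the test matrix is my own):

```python
import numpy as np

def schultz_inverse(a, tol=1e-12, max_iter=100):
    """Newton-Schulz iteration U_{k+1} = U_k(2E - A U_k) for A^-1."""
    n = a.shape[0]
    e = np.eye(n)
    u = a.T / np.linalg.norm(a @ a.T, 2)   # U_0 = A^T / ||A A^T||
    for _ in range(max_iter):
        psi = e - a @ u                    # Psi_k = E - A U_k
        if np.linalg.norm(psi) < tol:      # stop once A U_k is close to E
            break
        u = u @ (e + psi)                  # the n = 1 case of the sum above
    return u

a = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(np.allclose(schultz_inverse(a) @ a, np.eye(2)))  # True
```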

Examples

Matrix 2x2

$$\mathbf{A}^{-1}=\begin{bmatrix}a&b\\c&d\end{bmatrix}^{-1}=\frac{1}{\det\mathbf{A}}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}=\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}.$$

Inversion of a 2×2 matrix is possible only under the condition ad − bc = det A ≠ 0.
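This closed form translates directly into code; a minimal sketch (the function name and test values are my own):

```python
import numpy as np

def inverse_2x2(m: np.ndarray) -> np.ndarray:
    """Inverse of a 2x2 matrix by the closed formula above."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if np.isclose(det, 0.0):
        raise ValueError("ad - bc = 0: the matrix has no inverse")
    return np.array([[d, -b], [-c, a]]) / det

print(inverse_2x2(np.array([[1.0, 2.0],
                            [3.0, 4.0]])))   # [[-2. 1.] [1.5 -0.5]]
```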