## Linear Algebra Exercises

Do you want to engage your students more when teaching linear algebra? Grasple offers a selection of online exercises and openly licensed material to enhance your education.

## Open Exercises

## Linear Systems I

- 9 Linear System Definition and Properties
- 26 Solving Linear Systems (one solution)
- 14 Solving Linear Systems (general)

## Linear Systems II

- 16 Vector Definition and Arithmetic
- 5 Vector Equations
- 15 Linear Combinations and Span
- 17 Matrix Equations
- 18 Solution Set Structure
- 35 Linear Independence

## Linear Transformation

- 21 Linear Transformations Definition and Properties
- 13 Standard Matrix
- 3 Linear Transformations One-to-One and Onto

## Matrix Algebra

- 11 Addition, Scalar Multiplication and Transposition
- 31 Matrix Operations
- 3 Elementary Matrices
- 14 Inverse matrices (Theory)
- 20 Inverse Matrices (Computing the Inverse)
- 4 Partitioned Matrices
- 6 LU-Factorization
- 11 Subspaces Definition and Properties
- 3 Basis (Theory)
- 9 Finding a Basis
- 6 Coordinates
- 8 Dimension and Rank

## Determinants

- 14 Cofactor-Expansion
- 9 Determinants Using Row and Column Operations
- 15 Determinants: Rules of Calculation
- 5 Applications of Determinants - Area and Volume
- 8 Applications of Determinants - Cramer's Rule

## Eigenvectors

- 5 Markov Chains
- 13 Definition Eigenvector and Eigenvalue
- 6 Eigenspaces
- 8 The Characteristic Equation
- 4 Similarity
- 13 Diagonalization and Diagonalizability
- 13 Complex Eigenvalues
- 9 Systems of Linear Differential Equations

## Orthogonality

- 10 Inner Product
- 4 Orthogonal Projections on a Line
- 11 Orthogonal and Orthonormal Sets
- 4 Orthogonal Projections
- 4 QR Factorization
- 5 The Gram-Schmidt Process
- 2 Least-Squares Method
- 5 Regression

## Symmetric Matrices

- 15 Symmetric Matrices: Definitions and Properties
- 6 Orthogonal Diagonalization
- 15 Quadratic Forms
- 6 Constrained Optimization
- 8 Singular Value Decomposition

Legend:

- Indicates whether a lesson/explanation is available per subject
- 10: indicates if and how many exercises are currently available per subject
- Content has an open Creative Commons license
- Content will be released with an open Creative Commons license in the near future

## Looking for a specific field in mathematics?

See content.

## About Grasple

We make Math and Statistics Education more engaging by offering an online practice platform. With this, educators can create interactive exercises, and students can practice these exercises while receiving immediate feedback on their efforts.

Curious to learn more? Create a free Teacher Account and start creating your own interactive exercises today.

## Want to stay up to date on newly released materials and other community activities?

We e-mail once a month. We value your inbox, so no spam.

## Linear Algebra Questions

Linear algebra questions with solutions are provided here for practice and to help you understand what linear algebra is and how it is applied to solving problems. Linear algebra is the branch of mathematics that deals with vectors, vector spaces, and linear functions that operate on vectors and respect vector addition.

Following are the main topics under linear algebra:

- Matrices and determinants
- Vector Spaces
- System of linear equations
- Linear transformations
- Inner product spaces
- Diagonalizations and quadratic forms

We shall practice a few problems based on these topics.

Learn more about linear algebra and its applications.

## Linear Algebra Questions with Solutions

Let us solve a few questions based on linear algebra.

Question 1:

Show that the matrix A is a unitary matrix:

\(\begin{array}{l}A=\frac{1}{5}\begin{bmatrix}-1+2i & -4-2i \\ 2-4i& -2-i \\\end{bmatrix}\end{array} \)

A matrix is said to be unitary if and only if AA* = A*A = I, where A* is the conjugate transpose of A.

Transpose of A

\(\begin{array}{l}A^{T}=\frac{1}{5}\begin{bmatrix}-1+2i & 2-4i \\ -4-2i& -2-i \\\end{bmatrix}\end{array} \)

\(\begin{array}{l}A^{*}=\overline{A^{T}}=\frac{1}{5}\begin{bmatrix}-1-2i & 2+4i \\ -4+2i& -2+i \\\end{bmatrix}\end{array} \)

\(\begin{array}{l}AA^{*}=\frac{1}{25}\begin{bmatrix}-1+2i & -4-2i \\2-4i & -2-i \\\end{bmatrix}\begin{bmatrix}-1-2i & 2+4i \\ -4+2i& -2+i \\\end{bmatrix}\end{array} \)

\(\begin{array}{l}=\frac{1}{25}\begin{bmatrix}1+4+16+4 & 0\\0 & 4+16+4+1 \\\end{bmatrix}\end{array} \)

\(\begin{array}{l}\therefore AA^{*}=\begin{bmatrix}1 & 0 \\0 & 1 \\\end{bmatrix}=I\end{array} \)

Similarly, we can show that A*A = I

Hence, A is a unitary matrix.

Also refer: Types of Matrices
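The unitarity check above can also be reproduced numerically. A quick sketch with NumPy (not part of the original solution):

```python
import numpy as np

# Matrix A from Question 1 (the 1/5 factor included).
A = (1 / 5) * np.array([[-1 + 2j, -4 - 2j],
                        [2 - 4j, -2 - 1j]])

A_star = A.conj().T  # conjugate transpose A*

# A is unitary iff A A* = A* A = I.
print(np.allclose(A @ A_star, np.eye(2)))  # True
print(np.allclose(A_star @ A, np.eye(2)))  # True
```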

Question 2:

Find the rank and the nullity of the following matrix:

\(\begin{array}{l}\begin{bmatrix}1 & -2 & -1 & 4 \\2 & -4 & 3 & 5 \\-1 & 2 & 6 & -7 \\\end{bmatrix}\end{array} \)

To find the rank and nullity of the given matrix, we transform the given matrix into a row-reduced echelon form, by performing elementary transformations.

\(\begin{array}{l}A=\begin{bmatrix}1 & -2 & -1 & 4 \\2 & -4 & 3 & 5 \\-1 & 2 & 6 & -7 \\\end{bmatrix}\end{array} \)

Applying R 2 → R 2 – 2R 1 and R 3 → R 3 + R 1

\(\begin{array}{l}A~\begin{bmatrix}1 & -2 & -1 & 4 \\0 & 0 & 5 & -3 \\0 & 0 & 5 & -3 \\\end{bmatrix}\end{array} \)

Applying C 2 → C 2 + 2C 1 , C 3 → C 3 + C 1 and C 4 → C 4 – 4C 1

\(\begin{array}{l}A~\begin{bmatrix}1 & 0 & 0 & 0 \\0 & 0 & 5 & -3 \\0 & 0 & 5 & -3 \\\end{bmatrix}\end{array} \)

Applying R 3 → R 3 – R 2

\(\begin{array}{l}A~\begin{bmatrix}1 & 0 & 0 & 0 \\0 & 0 & 5 & -3 \\0 & 0 & 0 & 0 \\\end{bmatrix}\end{array} \)

Applying R 2 → (⅕)R 2

\(\begin{array}{l}A~\begin{bmatrix}1 & 0 & 0 & 0 \\0 & 0 & 1 & -3/5 \\0 & 0 & 0 & 0 \\\end{bmatrix}\end{array} \)

Applying C 4 → C 4 + (⅗)C 3

\(\begin{array}{l}A~\begin{bmatrix}1 & 0 & 0 & 0 \\0 & 0 & 1 & 0 \\0 & 0 & 0 & 0 \\\end{bmatrix}\end{array} \)

∴ rank of A = number of non-zero rows of the row-reduced echelon form of A = 2

By the rank-nullity theorem, nullity of A = number of columns of A – rank of A = 4 – 2 = 2

Learn more about rank and nullity .
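The same answer can be checked numerically: `numpy.linalg.matrix_rank` computes the rank, and the rank-nullity theorem gives the nullity. A sketch assuming NumPy is available:

```python
import numpy as np

# Matrix A from Question 2.
A = np.array([[1, -2, -1, 4],
              [2, -4, 3, 5],
              [-1, 2, 6, -7]])

rank = int(np.linalg.matrix_rank(A))
nullity = A.shape[1] - rank  # rank-nullity: rank + nullity = number of columns
print(rank, nullity)  # 2 2
```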

Question 3:

Solve the following system of linear equations:

x + y + z = 6

x + 2y + 3z = 14

x + 4y + 7z = 30

The given linear equations can be written in the form of a matrix equation AX = B, where

\(\begin{array}{l}A=\begin{bmatrix}1 & 1 & 1 \\1 & 2 & 3 \\1 & 4 & 7 \\\end{bmatrix}, X = \begin{bmatrix}x \\y \\z\end{bmatrix}\:\:and\:\:B=\begin{bmatrix}6 \\14 \\30\end{bmatrix}\end{array} \)

The augmented matrix [A|B] is:

\(\begin{array}{l}[A|B]=\begin{bmatrix}1 & 1 & 1 &|6 \\1 & 2 & 3&|14 \\1 & 4 & 7 &|30 \\\end{bmatrix}\end{array} \)

We reduce the given matrix to row echelon form by applying elementary row transformations

Applying R 2 → R 2 – R 1 , R 3 → R 3 – R 1

\(\begin{array}{l}[A|B]\sim \begin{bmatrix}1 & 1 & 1 &|6 \\0 & 1& 2&|8 \\0 & 3 & 6&|24 \\\end{bmatrix}\end{array} \)

Applying R 3 → R 3 – 3R 2

\(\begin{array}{l}[A|B]\sim \begin{bmatrix}1 & 1 & 1 &|6 \\0 & 1& 2&|8 \\0 & 0 & 0&|0 \\\end{bmatrix}\end{array} \)

Since rank of A = rank of [A|B] = 2 < number of unknowns,

∴ the given system of linear equations has an infinite number of solutions.

Thus, from the row-reduced matrix we get

x + y + z = 6 ….(i)

y + 2z = 8 ⇒ y = 8 – 2z

Putting this value of y in (i), we get

x + 8 – 2z + z = 6

⇒ x – z = –2

⇒ x = z – 2

Now, taking different values of z gives different solutions of the given system of equations.
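For an infinite solution set like this, SymPy's `linsolve` returns the whole one-parameter family at once. A sketch assuming SymPy is installed; z is kept as the free parameter:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
eqs = [sp.Eq(x + y + z, 6),
       sp.Eq(x + 2*y + 3*z, 14),
       sp.Eq(x + 4*y + 7*z, 30)]

# The solution set is parameterized by the free variable z,
# matching x = z - 2 and y = 8 - 2z found above.
sol, = sp.linsolve(eqs, x, y, z)
print(sol)  # (z - 2, 8 - 2*z, z)
```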

- Transpose of Matrix
- Determinant of a Matrix
- Matrix Multiplication
- Matrix Operations
- Special Matrices

Question 4:

Show that the set V = {(x, y) ∈ R 2 | xy ≥ 0} is not a subspace of R 2 .

For V to be a vector space (a subspace of R 2 ), it must be closed under addition; that is, for any u and v in V, u + v ∈ V.

Let ( – 1, 0) and (0, 1) ∈ V

Now, ( – 1, 0) + (0, 1) = ( –1 + 0, 0 + 1) = ( –1, 1)

But, –1 × 1 = –1 < 0 ⇒ ( –1, 1) ∉ V.

∴ V is not a subspace of R 2 .

Question 5:

Find the eigenvalues of

\(\begin{array}{l}A= \begin{bmatrix}0 & 1 & 0 \\0 & 0 & 1 \\4 & -17 & 8 \\\end{bmatrix}\end{array} \)

The characteristic polynomial is given by

\(\begin{array}{l}det(A-\lambda I) = det\begin{bmatrix}-\lambda & 1&0 \\ 0& -\lambda & 1 \\4 & -17 & 8-\lambda \\\end{bmatrix}=-(\lambda ^{3}-8\lambda^{2}+17\lambda-4)\end{array} \)

Eigenvalues of A are the roots of the above cubic equation,

\(\begin{array}{l}\lambda ^{3}-8\lambda^{2}+17\lambda-4=0\end{array} \)

\(\begin{array}{l}\Rightarrow (\lambda-4)(\lambda^{2}-4\lambda+1)=0\end{array} \)

Solving this, we get

\(\begin{array}{l}\lambda = 4,\ \lambda = 2\pm\sqrt{3}\end{array} \)

These are the eigenvalues of A.

Also check: Eigenvalues and Eigenvectors
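These eigenvalues can be cross-checked numerically (a sketch with NumPy; note 2 ± √3 ≈ 0.268 and 3.732):

```python
import numpy as np

# Matrix A from Question 5.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [4, -17, 8]])

# Sort the numerical eigenvalues and compare with 2 - sqrt(3), 2 + sqrt(3), 4.
vals = np.sort(np.linalg.eigvals(A).real)
print(vals)  # approximately [0.2679, 3.7321, 4.0]
```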

Determine whether the following vectors are linearly dependent or linearly independent: (1, 2, –3, 1), (3, 7, 1, –2), (1, 3, 7, –4).

Take the given vectors as the column vectors of a matrix A. We shall find the rank of A by reducing it to row echelon form.

\(\begin{array}{l}A=\begin{bmatrix}1 & 3 & 1 \\2 & 7 & 3 \\-3 & 1 & 7 \\1 & -2 & -4 \\\end{bmatrix}\end{array} \)

Applying R 2 → R 2 – 2R 1 , R 3 → R 3 + 3R 1 , and R 4 → R 4 – R 1

\(\begin{array}{l}A\sim \begin{bmatrix}1 & 3 & 1 \\0 & 1 & 1 \\0 & 10 & 10 \\0 & -5 & -5 \\\end{bmatrix}\end{array} \)

Applying R 3 → R 3 – 10R 2 , R 4 → R 4 + 5R 2

\(\begin{array}{l}A\sim \begin{bmatrix}1 & 3 & 1 \\0 & 1 & 1 \\0 & 0 & 0 \\0 & 0 & 0 \\\end{bmatrix}\end{array} \)

Clearly, rank of A = 2 < number of column vectors. So, the given vectors are linearly dependent.

Question 6:

Verify whether the polynomials x 3 – 5x 2 – 2x + 3, x 3 – 1, x 3 + 2x + 4 are linearly independent.

We may construct a matrix whose columns are the coefficients of x 3 , x 2 , x, and the constant terms.

\(\begin{array}{l}A=\begin{bmatrix}1 & 1 & 1 \\-5 & 0 & 0 \\-2 & 0 & 2 \\3 & -1 & 4 \\\end{bmatrix}\end{array} \)

To find the rank of A let us reduce it to row echelon form by applying elementary transformations

Applying R 2 → R 2 + 5R 1 , R 3 → R 3 + 2R 1 , and R 4 → R 4 – 3R 1

\(\begin{array}{l}A\sim \begin{bmatrix}1 & 1 & 1 \\0 & 5 & 5 \\0 & 2 & 4 \\0 & -4 & 1 \\\end{bmatrix}\end{array} \)

Applying R 2 → (⅕) R 2

\(\begin{array}{l}A\sim \begin{bmatrix}1 & 1 & 1 \\0 & 1 & 1 \\0 & 2 & 4 \\0 & -4 & 1 \\\end{bmatrix}\end{array} \)

Applying R 3 → R 3 – 2R 2 , R 4 → R 4 + 4R 2

\(\begin{array}{l}A\sim \begin{bmatrix}1 & 1 & 1 \\0 & 1 & 1 \\0 & 0 & 2 \\0 & 0 & 5 \\\end{bmatrix}\end{array} \)

Applying R 4 → R 4 – (5/2)R 3

\(\begin{array}{l}A\sim \begin{bmatrix}1 & 1 & 1 \\0 & 1 & 1 \\0 & 0 & 2 \\0 & 0 & 0 \\\end{bmatrix}\end{array} \)

∴ rank of A = 3 = number of column vectors. So the given vectors are linearly independent.

Question 7:

Show that the following matrix is diagonalizable:

\(\begin{array}{l}A=\begin{bmatrix}1 & 0 & -1 \\1 & 2 & 1 \\2 & 2 & 3 \\\end{bmatrix}\end{array} \)

First, we shall find the eigenvalues of A. The characteristic equation of A is given by:

\(\begin{array}{l}|A-\lambda I|=\begin{vmatrix}1-\lambda & 0 & -1 \\1 & 2-\lambda & 1 \\2 & 2 & 3-\lambda \\\end{vmatrix}=0\end{array} \)

⇒ (1 – 𝜆)(2 – 𝜆)(3 – 𝜆) = 0

⇒ 𝜆 = 1, 2, 3.

The eigenvector corresponding to 𝜆 1 = 1 is the non-zero solution of the following matrix equation:

(A – 1I)X = 0

\(\begin{array}{l}\Rightarrow \begin{bmatrix}0 & 0 & -1 \\1 & 1 & 1 \\2 & 2 & 2 \\\end{bmatrix}\begin{bmatrix}x \\y \\z\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

Applying elementary transformation R 3 → R 3 – 2R 2 and R 2 → R 2 + R 1 , we get

\(\begin{array}{l}\Rightarrow \begin{bmatrix}0 & 0 & -1 \\1 & 1 & 0 \\0 & 0 & 0 \\\end{bmatrix}\begin{bmatrix}x \\y \\z\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

\(\begin{array}{l}\Rightarrow \begin{bmatrix}-z \\x+y \\0\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

⇒ z = 0, x + y = 0

If we take x = 1 ⇒ y = –1

Hence, the corresponding eigenvector is X 1 = [1 –1 0] T .

Similarly, the eigenvector corresponding to 𝜆 = 2 is given by:

\(\begin{array}{l}\Rightarrow \begin{bmatrix}-1 & 0 & -1 \\1 & 0 & 1 \\2 & 2 & 1 \\\end{bmatrix}\begin{bmatrix}x \\y \\z\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

Applying elementary transformation R 3 → R 3 – R 2 and R 2 → R 2 + R 1 , we get

\(\begin{array}{l}\Rightarrow \begin{bmatrix}-1 & 0 & -1 \\0 & 0 & 0 \\1 & 2 & 0 \\\end{bmatrix}\begin{bmatrix}x \\y \\z\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

\(\begin{array}{l}\Rightarrow \begin{bmatrix}-x-z \\0 \\x+2y\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

⇒ x + z = 0, x + 2y = 0

⇒ x = –2, y = 1 and z = 2 {taking y = 1}

Hence, the corresponding eigenvector is X 2 = [ –2 1 2] T .

Finally, the eigenvector corresponding to 𝜆 = 3 is the non-zero solution of the following matrix equation:

\(\begin{array}{l}\Rightarrow \begin{bmatrix}-2 & 0 & -1 \\1 & -1 & 1 \\2 & 2 & 0 \\\end{bmatrix}\begin{bmatrix}x \\y \\z\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

Applying elementary transformation R 2 → R 2 + R 1 and R 3 → R 3 + 2R 2 , we get

\(\begin{array}{l}\Rightarrow \begin{bmatrix}-2 & 0 & -1 \\-1 & -1 & 0 \\0 & 0 & 0 \\\end{bmatrix}\begin{bmatrix}x \\y \\z\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}\end{array} \)

⇒ 2x + z = 0, x + y = 0

Taking x = 1, we get y = –1 and z = –2

Hence, the corresponding eigenvector is X 3 = [ 1 –1 –2] T .

Constructing a matrix with these eigenvectors as its column vectors, we get

\(\begin{array}{l}P=\begin{bmatrix}1 & -2 & 1 \\-1 & 1 & -1 \\0 & 2 & -2 \\\end{bmatrix}\end{array} \)

Inverse of P is

\(\begin{array}{l}P^{-1}=\frac{1}{2}\begin{bmatrix}0 & -2 & 1 \\-2 & -2 & 0 \\-2 & -2 & -1 \\\end{bmatrix}\end{array} \)

\(\begin{array}{l}P^{-1}AP=\frac{1}{2}\begin{bmatrix}0 & -2 & 1 \\-2 & -2 & 0 \\-2 & -2 & -1 \\\end{bmatrix}\begin{bmatrix}1 & 0 & -1 \\1 & 2 & 1 \\2 & 2 & 3 \\\end{bmatrix}\begin{bmatrix}1 & -2 & 1 \\-1 & 1 & -1 \\0 & 2 & -2 \\\end{bmatrix}\end{array} \)

\(\begin{array}{l}=\begin{bmatrix}1 & 0 & 0 \\0 & 2 & 0 \\0 & 0 & 3 \\\end{bmatrix}\end{array} \)

= diag(1, 2, 3)

Thus, A is diagonalizable.

Refer: Diagonalization
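SymPy can carry out the whole diagonalization in one call (a sketch; `diagonalize` returns P and D with P⁻¹AP = D, though its eigenvector scaling may differ from the hand computation above):

```python
import sympy as sp

# Matrix A from Question 7.
A = sp.Matrix([[1, 0, -1],
               [1, 2, 1],
               [2, 2, 3]])

P, D = A.diagonalize()  # A = P D P^{-1}
print(D)  # diagonal matrix containing the eigenvalues 1, 2, 3
```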

Question 8:

Show that the transformation T: V 2 ( R ) → V 2 ( R ) defined by T(a, b) = (a + b, a) ∀ a, b ∈ R is a linear transformation.

To show that T is a linear transformation, we need to prove that,

For any x, y ∈ V 2 ( R )

T( x + y ) = T( x ) + T( y ) and T(a x ) = aT( x ), where a is a scalar in the field.

Let (x 1 , y 1 ) and (x 2 , y 2 ) be arbitrary elements of V 2 ( R ).

T[(x 1 , y 1 ) + (x 2 , y 2 )] = T[(x 1 + x 2 , y 1 + y 2 )] = (x 1 + x 2 + y 1 + y 2 , x 1 + x 2 ) …..(i)

T(x 1 , y 1 ) + T(x 2 , y 2 ) = (x 1 + y 1 , x 1 ) + (x 2 + y 2 , x 2 ) = (x 1 + x 2 + y 1 + y 2 , x 1 + x 2 ) …..(ii)

From (i) and (ii), we get T[(x 1 , y 1 ) + (x 2 , y 2 )] = T(x 1 , y 1 ) + T(x 2 , y 2 )

Now, T[a(x 1 , y 1 )] = T(ax 1 , ay 1 ) = (ax 1 + ay 1 , ax 1 ) = a(x 1 + y 1 , x 1 ) = aT(x 1 , y 1 ).

∴ T is a linear transformation.
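The two defining properties can also be spot-checked numerically on random vectors (a sketch, not a proof; the argument above is the real proof):

```python
import numpy as np

def T(v):
    # T(a, b) = (a + b, a) from Question 8.
    a, b = v
    return np.array([a + b, a])

rng = np.random.default_rng(0)
x, y = rng.random(2), rng.random(2)
c = 2.5

print(np.allclose(T(x + y), T(x) + T(y)))  # additivity: True
print(np.allclose(T(c * x), c * T(x)))     # homogeneity: True
```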

Question 9:

Show that the given subset of vectors of R 3 forms a basis for R 3 .

{(1, 2, 1), (2, 1, 0), (1, –1, 2)}

S = {(1, 2, 1), (2, 1, 0), (1, –1, 2)}

We know that any set of n linearly independent vectors forms the basis of n-dimensional vector space.

Since dim R 3 = 3, we just need to prove that the vectors in S are linearly independent.

Let \(\begin{array}{l}A=\begin{bmatrix}1 & 2 & 1 \\2 & 1 & -1 \\1 & 0 & 2 \\\end{bmatrix}\end{array} \)

We reduce this matrix to row echelon form to check the rank of A.

Applying R 2 → R 2 + (–2)R 1 and R 3 → R 3 + ( –1)R 1 , we get

\(\begin{array}{l}A\sim\begin{bmatrix}1 & 2 & 1 \\0 & -3 & -3 \\0 & -2 & 1 \\\end{bmatrix}\end{array} \)

Applying R 2 → ( –⅓)R 2 and R 3 → R 3 + 2R 2 , we get

\(\begin{array}{l}A\sim\begin{bmatrix}1 & 2 & 1 \\0 & 1 & 1 \\0 & 0 & 3 \\\end{bmatrix}\end{array} \)

Clearly, rank of A = 3 = number of vectors.

Thus, the given vectors are linearly independent.

⇒ S forms the basis of R 3 .

Question 10:

Given a linear transformation T on V 3 ( R ) defined by T(a, b, c) = (2b + c, a – 4b, 3a) corresponding to the basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Find the matrix representation of T.

Now, T(1, 0, 0) = (2 × 0 + 0, 1 – 4 × 0, 3 × 1) = (0, 1, 3)

= 0(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1)

T(0, 1, 0) = (2 × 1 + 0, 0 – 4 × 1, 3 × 0) = (2, –4, 0)

= 2(1, 0, 0) –4(0, 1, 0) + 0(0, 0, 1)

And T(0, 0, 1) = (2 × 0 + 1, 0 – 4 × 0, 3 × 0) = (1, 0, 0)

= 1(1, 0, 0) + 0(0, 1, 0) + 0(0, 0, 1)

Then, the matrix representation of T with respect to the basis B is

\(\begin{array}{l}[T ; B] = \begin{bmatrix}0 & 2 & 1 \\1 & -4 & 0 \\3 & 0 & 0 \\\end{bmatrix}\end{array} \)
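The computation above follows a general recipe: the j-th column of the matrix representation is T applied to the j-th basis vector. A sketch in NumPy:

```python
import numpy as np

def T(v):
    # T(a, b, c) = (2b + c, a - 4b, 3a) from Question 10.
    a, b, c = v
    return np.array([2*b + c, a - 4*b, 3*a])

# Columns of [T; B] are the images of the standard basis vectors.
B = np.eye(3)
M = np.column_stack([T(B[:, j]) for j in range(3)])
print(M)  # rows: [0, 2, 1], [1, -4, 0], [3, 0, 0]
```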

## Practice Problems on Linear Algebra

1. Show that the following matrix is diagonalizable:

\(\begin{array}{l}A=\begin{bmatrix}8 & -8 & -2 \\4 & -3 & -2 \\3 & -4 & 1 \\\end{bmatrix}\end{array} \)

2. Show that the transformation T: V 3 ( R ) → V 2 ( R ) defined by T(a, b, c) = (b, c) ∀ a, b, c ∈ R is a linear transformation.

3. Show that the given subset of vectors of R 3 forms a basis for V 3 ( R ).

{(1, 0, –1), (1, 2, 1), (0, –3, 2)}.

4. Given a linear transformation T on V 3 ( R ) defined by T(a, b, c) = (2b + c, a – 4b, 3a) corresponding to the basis B = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}. Find the matrix representation of T.

To learn more linear algebra concepts and to practice more questions download BYJU’S – The Learning App and explore many more study resources with video lessons and personalized notes.


## 1.4: Existence and Uniqueness of Solutions

- Gregory Hartman et al.
- Virginia Military Institute

## Learning Objectives

- T/F: It is possible for a linear system to have exactly 5 solutions.
- T/F: A variable that corresponds to a leading 1 is “free.”
- How can one tell what kind of solution a linear system of equations has?
- Give an example (different from those given in the text) of a 2 equation, 2 unknown linear system that is not consistent.
- T/F: A particular solution for a linear system with infinite solutions can be found by arbitrarily picking values for the free variables.

So far, whenever we have solved a system of linear equations, we have always found exactly one solution. This is not always the case; we will find in this section that some systems do not have a solution, and others have more than one.

We start with a very simple example. Consider the following linear system: \[x-y=0. \nonumber \] There are obviously infinite solutions to this system; as long as \(x=y\) , we have a solution. We can picture all of these solutions by thinking of the graph of the equation \(y=x\) on the traditional \(x,y\) coordinate plane.

Let’s continue this visual aspect of considering solutions to linear systems. Consider the system \[\begin{align}\begin{aligned} x+y&=2\\ x-y&=0. \end{aligned}\end{align} \nonumber \] Each of these equations can be viewed as lines in the coordinate plane, and since their slopes are different, we know they will intersect somewhere (see Figure \(\PageIndex{1}\)(a)). In this example, they intersect at the point \((1,1)\) – that is, when \(x=1\) and \(y=1\) , both equations are satisfied and we have a solution to our linear system. Since this is the only place the two lines intersect, this is the only solution.

Now consider the linear system \[\begin{align}\begin{aligned} x+y&=1\\2x+2y&=2.\end{aligned}\end{align} \nonumber \] It is clear that while we have two equations, they are essentially the same equation; the second is just a multiple of the first. Therefore, when we graph the two equations, we are graphing the same line twice (see Figure \(\PageIndex{1}\)(b); the thicker line is used to represent drawing the line twice). In this case, we have an infinite solution set, just as if we only had the one equation \(x+y=1\) . We often write the solution as \(x=1-y\) to demonstrate that \(y\) can be any real number, and \(x\) is determined once we pick a value for \(y\) .

Figure \(\PageIndex{1}\): The three possibilities for two linear equations with two unknowns.

Finally, consider the linear system \[\begin{align}\begin{aligned} x+y&=1\\x+y&=2.\end{aligned}\end{align} \nonumber \] We should immediately spot a problem with this system; if the sum of \(x\) and \(y\) is 1, how can it also be 2? There is no solution to such a problem; this linear system has no solution. We can visualize this situation in Figure \(\PageIndex{1}\) (c); the two lines are parallel and never intersect.

If we were to consider a linear system with three equations and two unknowns, we could visualize the solution by graphing the corresponding three lines. We can picture that perhaps all three lines would meet at one point, giving exactly 1 solution; perhaps all three equations describe the same line, giving an infinite number of solutions; perhaps we have different lines, but they do not all meet at the same point, giving no solution. We could visualize similar situations with, say, 20 equations with two variables.

While it becomes harder to visualize when we add variables, no matter how many equations and variables we have, solutions to linear equations always come in one of three forms: exactly one solution, infinite solutions, or no solution. This is a fact that we will not prove here, but it deserves to be stated.

## Theorem \(\PageIndex{1}\)

Solution Forms of Linear Systems

Every linear system of equations has exactly one solution, infinite solutions, or no solution.

This leads us to a definition. Here we don’t differentiate between having one solution and infinite solutions, but rather just whether or not a solution exists.

## Definition: Consistent and Inconsistent Linear Systems

A system of linear equations is consistent if it has a solution (perhaps more than one). A linear system is inconsistent if it does not have a solution.

How can we tell what kind of solution (if one exists) a given system of linear equations has? The answer to this question lies with properly understanding the reduced row echelon form of a matrix. To discover what the solution is to a linear system, we first put the matrix into reduced row echelon form and then interpret that form properly.

Before we start with a simple example, let us make a note about finding the reduced row echelon form of a matrix.

In the previous section, we learned how to find the reduced row echelon form of a matrix using Gaussian elimination – by hand. We need to know how to do this; understanding the process has benefits. However, actually executing the process by hand for every problem is not usually beneficial. In fact, with large systems, computing the reduced row echelon form by hand is effectively impossible. Our main concern is what “the rref” is, not what exact steps were used to arrive there. Therefore, the reader is encouraged to employ some form of technology to find the reduced row echelon form. Computer programs such as Mathematica , MATLAB, Maple, and Derive can be used; many handheld calculators (such as Texas Instruments calculators) will perform these calculations very quickly.

As a general rule, when we are learning a new technique, it is best to not use technology to aid us. This helps us learn not only the technique but some of its “inner workings.” We can then use technology once we have mastered the technique and are now learning how to use it to solve problems.

From here on out, in our examples, when we need the reduced row echelon form of a matrix, we will not show the steps involved. Rather, we will give the initial matrix, then immediately give the reduced row echelon form of the matrix. We trust that the reader can verify the accuracy of this form by both performing the necessary steps by hand or utilizing some technology to do it for them.
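For instance, with SymPy (one option alongside the systems named above), the `Matrix.rref` method returns both the reduced row echelon form and the pivot columns. A sketch using the system \(x+y=1\), \(2x+2y=2\) from the introduction:

```python
import sympy as sp

# Augmented matrix of the system x + y = 1, 2x + 2y = 2.
M = sp.Matrix([[1, 1, 1],
               [2, 2, 2]])

R, pivots = M.rref()
print(R)       # reduced form: [[1, 1, 1], [0, 0, 0]]
print(pivots)  # (0,) -- a leading 1 only in the first column
```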

Our first example works formally through a quick example used in the introduction of this section.

## Example \(\PageIndex{1}\)

Find the solution to the linear system

\[\begin{array}{ccccc} x_1 & +& x_2 & = & 1\\ 2x_1 & + & 2x_2 & = &2\end{array} . \nonumber \]

Create the corresponding augmented matrix, and then put the matrix into reduced row echelon form.

\[\left[\begin{array}{ccc}{1}&{1}&{1}\\{2}&{2}&{2}\end{array}\right]\qquad\overrightarrow{\text{rref}}\qquad\left[\begin{array}{ccc}{1}&{1}&{1}\\{0}&{0}&{0}\end{array}\right] \nonumber \]

Now convert the reduced matrix back into equations. In this case, we only have one equation, \[x_1+x_2=1 \nonumber \] or, equivalently, \[\begin{align}\begin{aligned} x_1 &=1-x_2\\ x_2&\text{ is free}. \end{aligned}\end{align} \nonumber \]

We have just introduced a new term, the word free . It is used to stress the idea that \(x_2\) can take on any value; we are “free” to choose any value for \(x_2\) . Once this value is chosen, the value of \(x_1\) is determined. We have infinite choices for the value of \(x_2\) , so we have infinite solutions.

For example, if we set \(x_2 = 0\) , then \(x_1 = 1\) ; if we set \(x_2 = 5\) , then \(x_1 = -4\) .

Let’s try another example, one that uses more variables.

## Example \(\PageIndex{2}\)

Find the solution to the linear system \[\begin{array}{ccccccc} & &x_2&-&x_3&=&3\\ x_1& & &+&2x_3&=&2\\ &&-3x_2&+&3x_3&=&-9\\ \end{array}. \nonumber \]

To find the solution, put the corresponding matrix into reduced row echelon form.

\[\left[\begin{array}{cccc}{0}&{1}&{-1}&{3}\\{1}&{0}&{2}&{2}\\{0}&{-3}&{3}&{-9}\end{array}\right]\qquad\overrightarrow{\text{rref}}\qquad\left[\begin{array}{cccc}{1}&{0}&{2}&{2}\\{0}&{1}&{-1}&{3}\\{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]

Now convert this reduced matrix back into equations. We have \[\begin{align}\begin{aligned} x_1 + 2x_3 &= 2 \\ x_2-x_3&=3 \end{aligned}\end{align} \nonumber \] or, equivalently, \[\begin{align}\begin{aligned} x_1 &= 2-2x_3 \\ x_2&=3+x_3\\x_3&\text{ is free.} \end{aligned}\end{align} \nonumber \]

These two equations tell us that the values of \(x_1\) and \(x_2\) depend on what \(x_3\) is. As we saw before, there is no restriction on what \(x_3\) must be; it is “free” to take on the value of any real number. Once \(x_3\) is chosen, we have a solution. Since we have infinite choices for the value of \(x_3\) , we have infinite solutions.

As examples, \(x_1 = 2\) , \(x_2 = 3\) , \(x_3 = 0\) is one solution; \(x_1 = -2\) , \(x_2 = 5\) , \(x_3 = 2\) is another solution. Try plugging these values back into the original equations to verify that these indeed are solutions. (By the way, since infinite solutions exist, this system of equations is consistent.)
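The free-variable description lends itself to a quick SymPy sketch (assuming SymPy is available): `linsolve` keeps \(x_3\) as the parameter, and substituting values for it produces the particular solutions mentioned above.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Augmented matrix of the system in Example 2.
M = sp.Matrix([[0, 1, -1, 3],
               [1, 0, 2, 2],
               [0, -3, 3, -9]])

sol, = sp.linsolve(M, x1, x2, x3)
print(sol)              # (2 - 2*x3, x3 + 3, x3)
print(sol.subs(x3, 0))  # the particular solution (2, 3, 0)
print(sol.subs(x3, 2))  # the particular solution (-2, 5, 2)
```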

In the two previous examples we have used the word “free” to describe certain variables. What exactly is a free variable? How do we recognize which variables are free and which are not?

Look back to the reduced matrix in Example \(\PageIndex{1}\). Notice that there is only one leading 1 in that matrix, and that leading 1 corresponded to the \(x_1\) variable. That told us that \(x_1\) was not a free variable; since \(x_2\) did not correspond to a leading 1, it was a free variable.

Look also at the reduced matrix in Example \(\PageIndex{2}\). There were two leading 1s in that matrix; one corresponded to \(x_1\) and the other to \(x_2\) . This meant that \(x_1\) and \(x_2\) were not free variables; since there was not a leading 1 that corresponded to \(x_3\) , it was a free variable.

We formally define this and a few other terms in this following definition.

## Definition: Dependent and Independent Variables

Consider the reduced row echelon form of an augmented matrix of a linear system of equations. Then:

a variable that corresponds to a leading 1 is a basic , or dependent , variable, and

a variable that does not correspond to a leading 1 is a free , or independent , variable.

One can probably see that “free” and “independent” are relatively synonymous. It follows that if a variable is not independent, it must be dependent; the word “basic” comes from connections to other areas of mathematics that we won’t explore here.

These definitions help us understand when a consistent system of linear equations will have infinite solutions. If there are no free variables, then there is exactly one solution; if there are any free variables, there are infinite solutions.

## Key Idea \(\PageIndex{1}\): Consistent Solution Types

A consistent linear system of equations will have exactly one solution if and only if there is a leading 1 for each variable in the system.

If a consistent linear system of equations has a free variable, it has infinite solutions.

If a consistent linear system has more variables than leading 1s, then the system will have infinite solutions.

A consistent linear system with more variables than equations will always have infinite solutions.

Key Idea \(\PageIndex{1}\) applies only to consistent systems. If a system is inconsistent , then no solution exists and talking about free and basic variables is meaningless.

When a consistent system has only one solution, each equation that comes from the reduced row echelon form of the corresponding augmented matrix will contain exactly one variable. If the consistent system has infinite solutions, then there will be at least one equation coming from the reduced row echelon form that contains more than one variable. The “first” variable will be the basic (or dependent) variable; all others will be free variables.

We have now seen examples of consistent systems with exactly one solution and others with infinite solutions. How will we recognize that a system is inconsistent? Let’s find out through an example.

## Example \(\PageIndex{3}\)

Find the solution to the linear system \[\begin{array}{ccccccc} x_1&+&x_2&+&x_3&=&1\\ x_1&+&2x_2&+&x_3&=&2\\ 2x_1&+&3x_2&+&2x_3&=&0\\ \end{array}. \nonumber \]

We start by putting the corresponding matrix into reduced row echelon form.

\[\left[\begin{array}{cccc}{1}&{1}&{1}&{1}\\{1}&{2}&{1}&{2}\\{2}&{3}&{2}&{0}\end{array}\right]\qquad\overrightarrow{\text{rref}}\qquad\left[\begin{array}{cccc}{1}&{0}&{1}&{0}\\{0}&{1}&{0}&{0}\\{0}&{0}&{0}&{1}\end{array}\right] \nonumber \]

Now let us take the reduced matrix and write out the corresponding equations. The first two rows give us the equations \[\begin{align}\begin{aligned} x_1+x_3&=0\\ x_2 &= 0.\\ \end{aligned}\end{align} \nonumber \] So far, so good. However the last row gives us the equation \[0x_1+0x_2+0x_3 = 1 \nonumber \] or, more concisely, \(0=1\) . Obviously, this is not true; we have reached a contradiction. Therefore, no solution exists; this system is inconsistent.

In previous sections we have only encountered linear systems with unique solutions (exactly one solution). Now we have seen three more examples with different solution types. The first two examples in this section had infinite solutions, and the third had no solution. How can we tell if a system is inconsistent?

A linear system will be inconsistent only when it implies that 0 equals 1. We can tell if a linear system implies this by putting its corresponding augmented matrix into reduced row echelon form. If we have any row where all entries are 0 except for the entry in the last column, then the system implies 0=1. More succinctly, if we have a leading 1 in the last column of an augmented matrix, then the linear system has no solution.

## Key Idea \(\PageIndex{2}\): Inconsistent Systems of Linear Equations

A system of linear equations is inconsistent if the reduced row echelon form of its corresponding augmented matrix has a leading 1 in the last column.

## Example \(\PageIndex{4}\)

Confirm that the linear system \[\begin{array}{ccccc} x&+&y&=&0 \\2x&+&2y&=&4 \end{array} \nonumber \] has no solution.

We can verify that this system has no solution in two ways. First, let’s just think about it. If \(x+y=0\) , then it stands to reason, by multiplying both sides of this equation by 2, that \(2x+2y = 0\) . However, the second equation of our system says that \(2x+2y= 4\) . Since \(0\neq 4\) , we have a contradiction and hence our system has no solution. (We cannot possibly pick values for \(x\) and \(y\) so that \(2x+2y\) equals both 0 and 4.)

Now let us confirm this using the prescribed technique from above. The reduced row echelon form of the corresponding augmented matrix is

\[\left[\begin{array}{ccc}{1}&{1}&{0}\\{0}&{0}&{1}\end{array}\right] \nonumber \]

We have a leading 1 in the last column, so the system is inconsistent.

Let’s summarize what we have learned up to this point. Consider the reduced row echelon form of the augmented matrix of a system of linear equations. \(^{1}\) If there is a leading 1 in the last column, the system has no solution. Otherwise, if there is a leading 1 for each variable, then there is exactly one solution; otherwise (i.e., there are free variables) there are infinite solutions.
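This summary can be sketched as a short decision procedure. The sketch below assumes the SymPy library; the function name `classify` is our own, used only for illustration.

```python
from sympy import Matrix

def classify(aug):
    """Classify a linear system by its augmented matrix.

    Returns 'none', 'one', or 'infinite': a pivot in the last column
    means no solution; otherwise, a pivot for every variable means a
    unique solution; otherwise there are free variables and infinite
    solutions.
    """
    R, pivots = Matrix(aug).rref()
    num_vars = R.cols - 1           # last column holds the constants
    if num_vars in pivots:          # leading 1 in the last column
        return "none"
    return "one" if len(pivots) == num_vars else "infinite"

print(classify([[1, 1, 0], [0, 0, 1]]))                 # inconsistent
print(classify([[1, 0, 2], [0, 1, 3]]))                 # unique solution
print(classify([[1, -1, 0, 2, 4], [0, 0, 1, -3, 7]]))   # free variables
```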

Systems with exactly one solution or no solution are the easiest to deal with; systems with infinite solutions are a bit harder to deal with. Therefore, we’ll do a little more practice. First, a definition: if there are infinite solutions, what do we call one of those infinite solutions?

## Definition: Particular Solution

Consider a linear system of equations with infinite solutions. A particular solution is one solution out of the infinite set of possible solutions.

The easiest way to find a particular solution is to pick values for the free variables which then determines the values of the dependent variables. Again, more practice is called for.

## Example \(\PageIndex{5}\)

Give the solution to a linear system whose augmented matrix in reduced row echelon form is

\[\left[\begin{array}{ccccc}{1}&{-1}&{0}&{2}&{4}\\{0}&{0}&{1}&{-3}&{7}\\{0}&{0}&{0}&{0}&{0}\end{array}\right] \nonumber \]

and give two particular solutions.

We can essentially ignore the third row; it does not divulge any information about the solution. \(^{2}\) The first and second rows can be rewritten as the following equations: \[\begin{align}\begin{aligned} x_1 - x_2 + 2x_4 &=4 \\ x_3 - 3x_4 &= 7. \\ \end{aligned}\end{align} \nonumber \] Notice how the variables \(x_1\) and \(x_3\) correspond to the leading 1s of the given matrix. Therefore \(x_1\) and \(x_3\) are dependent variables; all other variables (in this case, \(x_2\) and \(x_4\) ) are free variables.

We generally write our solution with the dependent variables on the left and independent variables and constants on the right. It is also a good practice to acknowledge the fact that our free variables are, in fact, free. So our final solution would look something like \[\begin{align}\begin{aligned} x_1 &= 4 +x_2 - 2x_4 \\ x_2 & \text{ is free} \\ x_3 &= 7+3x_4 \\ x_4 & \text{ is free}.\end{aligned}\end{align} \nonumber \]

To find particular solutions, choose values for our free variables. There is no “right” way of doing this; we are “free” to choose whatever we wish.

By setting \(x_2 = 0 = x_4\) , we have the solution \(x_1 = 4\) , \(x_2 = 0\) , \(x_3 = 7\) , \(x_4 = 0\) . By setting \(x_2 = 1\) and \(x_4 = -5\) , we have the solution \(x_1 = 15\) , \(x_2 = 1\) , \(x_3 = -8\) , \(x_4 = -5\) . It is easier to read this when the variables are listed vertically, so we repeat these solutions:

One particular solution is:

\[\begin{align}\begin{aligned} x_1 &= 4\\ x_2 &=0 \\ x_3 &= 7 \\ x_4 &= 0. \end{aligned}\end{align} \nonumber \]

Another particular solution is:

\[\begin{align}\begin{aligned} x_1 &= 15\\ x_2 &=1 \\ x_3 &= -8 \\ x_4 &= -5. \end{aligned}\end{align} \nonumber \]
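The general solution from this example can be turned into a small helper that produces a particular solution from any choice of free variables; a minimal sketch, with the function name our own.

```python
def particular_solution(x2, x4):
    """General solution from the example above: x1 and x3 depend on
    the free variables x2 and x4."""
    x1 = 4 + x2 - 2 * x4
    x3 = 7 + 3 * x4
    return (x1, x2, x3, x4)

print(particular_solution(0, 0))    # -> (4, 0, 7, 0)
print(particular_solution(1, -5))   # -> (15, 1, -8, -5)
```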

## Example \(\PageIndex{6}\)

Find the solution to a linear system whose augmented matrix in reduced row echelon form is

\[\left[\begin{array}{ccccc}{1}&{0}&{0}&{2}&{3}\\{0}&{1}&{0}&{4}&{5}\end{array}\right] \nonumber \]

Converting the two rows into equations we have \[\begin{align}\begin{aligned} x_1 + 2x_4 &= 3 \\ x_2 + 4x_4&=5.\\ \end{aligned}\end{align} \nonumber \]

We see that \(x_1\) and \(x_2\) are our dependent variables, for they correspond to the leading 1s. Therefore, \(x_3\) and \(x_4\) are independent variables. This situation feels a little unusual, \(^{3}\) for \(x_3\) doesn’t appear in any of the equations above, but we cannot overlook it; it is still a free variable since there is not a leading 1 that corresponds to it. We write our solution as: \[\begin{align}\begin{aligned} x_1 &= 3-2x_4 \\ x_2 &=5-4x_4 \\ x_3 & \text{ is free} \\ x_4 & \text{ is free}. \\ \end{aligned}\end{align} \nonumber \]

To find two particular solutions, we pick values for our free variables. Again, there is no “right” way of doing this (in fact, there are \(\ldots\) infinite ways of doing this) so we give only an example here.

\[\begin{align}\begin{aligned} x_1 &= 3\\ x_2 &=5 \\ x_3 &= 1000 \\ x_4 &= 0. \end{aligned}\end{align} \nonumber \]

\[\begin{align}\begin{aligned} x_1 &= 3-2\pi\\ x_2 &=5-4\pi \\ x_3 &= e^2 \\ x_4 &= \pi. \end{aligned}\end{align} \nonumber \]

(In the second particular solution we picked “unusual” values for \(x_3\) and \(x_4\) just to highlight the fact that we can.)

## Example \(\PageIndex{7}\)

Find the solution to the linear system \[\begin{array}{ccccccc}x_1&+&x_2&+&x_3&=&5\\x_1&-&x_2&+&x_3&=&3\\ \end{array} \nonumber \] and give two particular solutions.

The corresponding augmented matrix and its reduced row echelon form are given below.

\[\left[\begin{array}{cccc}{1}&{1}&{1}&{5}\\{1}&{-1}&{1}&{3}\end{array}\right]\qquad\overrightarrow{\text{rref}}\qquad\left[\begin{array}{cccc}{1}&{0}&{1}&{4}\\{0}&{1}&{0}&{1}\end{array}\right] \nonumber \]

Converting these two rows into equations, we have \[\begin{align}\begin{aligned} x_1+x_3&=4\\x_2&=1\\ \end{aligned}\end{align} \nonumber \] giving us the solution \[\begin{align}\begin{aligned} x_1&= 4-x_3\\x_2&=1\\x_3 &\text{ is free}.\\ \end{aligned}\end{align} \nonumber \]

Once again, we get a bit of an “unusual” solution; while \(x_2\) is a dependent variable, it does not depend on any free variable; instead, it is always 1. (We can think of it as depending on the value of 1.) By picking two values for \(x_3\) , we get two particular solutions.

\[\begin{align}\begin{aligned} x_1 &= 4\\ x_2 &=1 \\ x_3 &= 0 . \end{aligned}\end{align} \nonumber \]

\[\begin{align}\begin{aligned} x_1 &= 3\\ x_2 &=1 \\ x_3 &= 1 . \end{aligned}\end{align} \nonumber \]
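Both particular solutions above can be verified against the original system by multiplying the coefficient matrix by each solution vector; a quick NumPy check, assuming that library is available.

```python
import numpy as np

# Coefficients and constants of the original system in this example.
A = np.array([[1,  1, 1],
              [1, -1, 1]])
b = np.array([5, 3])

# Each particular solution should satisfy A x = b exactly.
for x in ([4, 1, 0], [3, 1, 1]):
    print(A @ np.array(x))  # -> [5 3] both times
```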

The constants and coefficients of a matrix work together to determine whether a given system of linear equations has one, infinite, or no solution. The concept will be fleshed out more in later chapters, but in short, the coefficients determine whether a system will have exactly one solution or not. In the “or not” case, the constants determine whether there are infinite solutions or no solution. (So if a given linear system has exactly one solution, it will always have exactly one solution even if the constants are changed.) Let’s look at an example to get an idea of how the values of constants and coefficients work together to determine the solution type.

## Example \(\PageIndex{8}\)

For what values of \(k\) will the given system have exactly one solution, infinite solutions, or no solution? \[\begin{array}{ccccc}x_1&+&2x_2&=&3\\ 3x_1&+&kx_2&=&9\end{array} \nonumber \]

We answer this question by forming the augmented matrix and starting the process of putting it into reduced row echelon form. Below we see the augmented matrix and one elementary row operation that starts the Gaussian elimination process.

\[\left[\begin{array}{ccc}{1}&{2}&{3}\\{3}&{k}&{9}\end{array}\right]\qquad\overrightarrow{-3R_{1}+R_{2}\to R_{2}}\qquad\left[\begin{array}{ccc}{1}&{2}&{3}\\{0}&{k-6}&{0}\end{array}\right] \nonumber \]

This is as far as we need to go. In looking at the second row, we see that if \(k=6\) , then that row contains only zeros and \(x_2\) is a free variable; we have infinite solutions. If \(k\neq 6\) , then our next step would be to make that second row, second column entry a leading one. We don’t particularly care about the solution, only that we would have exactly one as both \(x_1\) and \(x_2\) would correspond to a leading one and hence be dependent variables.

Our final analysis is then this. If \(k\neq 6\) , there is exactly one solution; if \(k=6\) , there are infinite solutions. In this example, it is not possible to have no solutions.

As an extension of the previous example, consider the similar augmented matrix where the constant 9 is replaced with a 10. Performing the same elementary row operation gives

\[\left[\begin{array}{ccc}{1}&{2}&{3}\\{3}&{k}&{10}\end{array}\right]\qquad\overrightarrow{-3R_{1}+R_{2}\to R_{2}}\qquad\left[\begin{array}{ccc}{1}&{2}&{3}\\{0}&{k-6}&{1}\end{array}\right] \nonumber \]

As in the previous example, if \(k\neq6\) , we can make the second row, second column entry a leading one and hence we have one solution. However, if \(k=6\) , then our last row is \([0\ 0\ 1]\) , meaning we have no solution.
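The elimination step in both variants can be carried out symbolically, keeping \(k\) as an unknown; a sketch assuming the SymPy library.

```python
from sympy import Matrix, symbols

k = symbols('k')

# Original system: one elimination step, R2 <- R2 - 3 R1.
M = Matrix([[1, 2, 3], [3, k, 9]])
M[1, :] = M[1, :] - 3 * M[0, :]
print(M)  # second row is [0, k - 6, 0]: infinite solutions when k = 6

# With the constant 9 replaced by 10, the same step leaves [0, k - 6, 1],
# so k = 6 now gives the row [0 0 1] and the system has no solution.
N = Matrix([[1, 2, 3], [3, k, 10]])
N[1, :] = N[1, :] - 3 * N[0, :]
print(N)
```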

We have been studying the solutions to linear systems mostly in an “academic” setting; we have been solving systems for the sake of solving systems. In the next section, we’ll look at situations which create linear systems that need solving (i.e., “word problems”).

[1] That sure seems like a mouthful in and of itself. However, it boils down to “look at the reduced form of the usual matrix.”

[2] Then why include it? Rows of zeros sometimes appear “unexpectedly” in matrices after they have been put in reduced row echelon form. When this happens, we do learn something ; it means that at least one equation was a combination of some of the others.

[3] What kind of situation would lead to a column of all zeros? To have such a column, the original matrix needed to have a column of all zeros, meaning that while we acknowledged the existence of a certain variable, we never actually used it in any equation. In practical terms, we could respond by removing the corresponding column from the matrix and just keep in mind that that variable is free. In very large systems, it might be hard to determine whether or not a variable is actually used and one would not worry about it.

In later chapters we will see that under certain circumstances this situation arises. In those cases we leave the variable in the system just to remind ourselves that it is there.

## Linear Algebra

Linear algebra is a branch of mathematics that deals with linear equations and their representations in the vector space using matrices. In other words, linear algebra is the study of linear functions and vectors. It is one of the most central topics of mathematics. Most modern geometrical concepts are based on linear algebra.

Linear algebra facilitates the modeling of many natural phenomena and hence, is an integral part of engineering and physics. Linear equations, matrices, and vector spaces are the most important components of this subject. In this article, we will learn more about linear algebra and the various associated topics.

## What is Linear Algebra?

Linear algebra can be defined as a branch of mathematics that deals with the study of linear functions in vector spaces. When information related to linear functions is presented in an organized form then it results in a matrix. Thus, linear algebra is concerned with vector spaces, vectors, linear functions, the system of linear equations, and matrices. These concepts are a prerequisite for sister topics such as geometry and functional analysis.

## Linear Algebra Definition

The branch of mathematics that deals with vectors, matrices, spaces of finite or infinite dimension, as well as linear mappings between such spaces is defined as linear algebra. It is used in both pure and applied mathematics along with technical fields such as physics, engineering, and the natural sciences.

## Branches of Linear Algebra

Linear algebra can be categorized into three branches depending upon the level of difficulty and the kind of topics that are encompassed within each. These are elementary, advanced, and applied linear algebra. Each branch covers different aspects of matrices, vectors, and linear functions.

## Elementary Linear Algebra

Elementary linear algebra introduces students to the basics of linear algebra. This includes simple matrix operations, various computations that can be done on a system of linear equations, and certain aspects of vectors. Some important terms associated with elementary linear algebra are given below:

Scalars - A scalar is a quantity that only has magnitude and not direction. It is an element that is used to define a vector space. In linear algebra, scalars are usually real numbers.

Vectors - A vector is an element in a vector space. It is a quantity that can describe both the direction and magnitude of an element.

Vector Space - The vector space consists of vectors that may be added together and multiplied by scalars.

Matrix - A matrix is a rectangular array wherein the information is organized in the form of rows and columns. Most linear algebra properties can be expressed in terms of a matrix.

Matrix Operations - These are simple arithmetic operations such as addition , subtraction , and multiplication that can be conducted on matrices.

## Advanced Linear Algebra

Once the basics of linear algebra have been introduced to students the focus shifts on more advanced concepts related to linear equations, vectors, and matrices. Certain important terms that are used in advanced linear algebra are as follows:

Linear Transformations - The transformation of a function from one vector space to another by preserving the linear structure of each vector space.

Inverse of a Matrix - When the inverse of a matrix is multiplied by the given original matrix, the result is the identity matrix. Thus, \(A^{-1}A = I\).

Eigenvector - An eigenvector is a non-zero vector that changes by a scalar factor (eigenvalue) when a linear transformation is applied to it.

Linear Map - It is a type of mapping that preserves vector addition and vector multiplication.
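The defining property of the inverse, \(A^{-1}A = I\), can be checked numerically; a short sketch assuming the NumPy library, with an arbitrary invertible example matrix.

```python
import numpy as np

A = np.array([[5.0, 6.0],
              [2.0, 1.0]])   # det = 5 - 12 = -7, so A is invertible

A_inv = np.linalg.inv(A)

# A^{-1} A and A A^{-1} should both be the identity (up to rounding).
print(np.allclose(A_inv @ A, np.eye(2)))
print(np.allclose(A @ A_inv, np.eye(2)))
```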

## Applied Linear Algebra

Applied linear algebra is usually introduced to students at a graduate level in fields of applied mathematics, engineering, and physics. This branch of algebra is driven towards integrating the concepts of elementary and advanced linear algebra with their practical implications. Topics such as the norm of a vector, QR factorization, Schur's complement of a matrix, etc., fall under this branch of linear algebra.

## Linear Algebra Topics

The topics that come under linear algebra can be classified into three broad categories. These are linear equations, matrices, and vectors. All these three categories are interlinked and need to be understood well in order to master linear algebra. The topics that fall under each category are given below.

## Linear Equations

A linear equation is an equation that has the standard form \(a_{1}x_{1} + a_{2}x_{2} + ... + a_{n}x_{n} = b\). It is the fundamental component of linear algebra. The topics covered under linear equations are as follows:

- Linear Equations in One variable
- Linear Equations in Two Variables
- Simultaneous Linear Equations
- Solving Linear Equations
- Solutions of a Linear Equation
- Graphing Linear Equations
- Applications of Linear equations
- Straight Line

## Vectors

In linear algebra, several operations can be performed on vectors, such as addition, multiplication, etc. Vectors can be used to describe quantities such as the velocity of moving objects. Some crucial topics encompassed under vectors are as follows:

- Types of Vectors
- Dot Product
- Cross Product
- Addition of Vectors

## Matrices

A matrix is used to organize data in the form of a rectangular array. It can be represented as \(A_{m\times n}\). Here, m represents the number of rows and n denotes the number of columns in the matrix. In linear algebra, a matrix can be used to express linear equations in a more compact manner. The topics that are covered under the scope of matrices are as follows:

- Matrix Operations
- Determinant
- Transpose of a Matrix
- Types of a Matrix

## Linear Algebra Formula

Formulas form an important part of linear algebra as they help to simplify computations. The key to solving any problem in linear algebra is to understand the formulas and associated concepts rather than memorize them. The important linear algebra formulas can be broken down into 3 categories, namely, linear equations, vectors, and matrices.

Linear Equations: The important linear equation formulas are listed as follows:

- General form: ax + by = c
- Slope Intercept Form : y = mx + b
- a + b = b + a
- a + 0 = 0 + a = a

Vectors: If there are two vectors \(\overrightarrow{u}\) = (\(u_{1}\), \(u_{2}\), \(u_{3}\)) and \(\overrightarrow{v}\) = (\(v_{1}\), \(v_{2}\), \(v_{3}\)) then the important vector formulas associated with linear algebra are given below.

- \(\overrightarrow{u} + \overrightarrow{v} = (u_{1}+v_{1}, u_{2}+v_{2}, u_{3}+v_{3})\)
- \(\overrightarrow{u} - \overrightarrow{v} = (u_{1}-v_{1}, u_{2}-v_{2}, u_{3}-v_{3})\)
- \(\left \| u \right \| = \sqrt{u_{1}^{2} + u_{2}^{2} + u_{3}^{2}}\)
- \(\overrightarrow{u}.\overrightarrow{v} = u_{1}v_{1} + u_{2}v_{2} + u_{3}v_{3}\)
- \(\overrightarrow{u}\times \overrightarrow{v} = (u_{2}v_{3}-u_{3}v_{2}, u_{3}v_{1}-u_{1}v_{3}, u_{1}v_{2}-u_{2}v_{1})\)

Matrix: If there are two square matrices given by A and B where the elements are \(a_{ij}\) and \(b_{ij}\) respectively, then the following important formulas are used in linear algebra:

- C = A + B, where \(c_{ij}\) = \(a_{ij}\) + \(b_{ij}\)
- C = A - B, where \(c_{ij}\) = \(a_{ij}\) - \(b_{ij}\)
- C = kA, where \(c_{ij}\) = \(k\,a_{ij}\)
- C = AB, where \(c_{ij} = \sum_{k = 1}^{n}a_{ik}b_{kj}\)
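These entrywise and product formulas correspond directly to NumPy's matrix operations; a sketch assuming that library, using the matrices from Example 1 below.

```python
import numpy as np

A = np.array([[5, 6],
              [2, 1]])
B = np.array([[3, 7],
              [5, 4]])

print(A + B)   # entrywise: c_ij = a_ij + b_ij
print(A - B)   # entrywise: c_ij = a_ij - b_ij
print(2 * A)   # scalar multiple: c_ij = k * a_ij
print(A @ B)   # matrix product: c_ij = sum_k a_ik * b_kj
```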

## Linear Algebra and its Applications

Linear algebra is used in almost every field. Simple algorithms also make use of linear algebra topics such as matrices. Some of the applications of linear algebra are given as follows:

- Signal Processing - Linear algebra is used in encoding and manipulating signals such as audio and video signals. Furthermore, it is required in the analysis of such signals.
- Linear Programming - It is an optimizing technique that is used to determine the best outcome of a linear function.
- Computer Science - Data scientists use several linear algebra algorithms to solve complicated problems.
- Prediction Algorithms - Prediction algorithms use linear models that are developed using concepts of linear algebra.

Related Articles:

- Introduction to Graphing
- One Variable Linear Equations and Inequalities
- Resolving a Vector into Components

Important Notes on Linear Algebra

- Linear algebra is concerned with the study of three broad subtopics - linear functions, vectors, and matrices
- Linear algebra can be classified into 3 categories. These are elementary, advanced, and applied linear algebra.
- Elementary linear algebra is concerned with the introduction to linear algebra. Advanced linear algebra builds on these concepts. Applied linear algebra applies these concepts to real-life situations.

## Linear Algebra Examples

- Example 1: Using linear algebra add these two matrices. A = \(\begin{bmatrix} 5 & 6\\ 2& 1 \end{bmatrix}\) and B = \(\begin{bmatrix} 3 & 7\\ 5& 4 \end{bmatrix}\) Solution: C = A + B C = \(\begin{bmatrix} 5 & 6\\ 2& 1 \end{bmatrix}\) + \(\begin{bmatrix} 3 & 7\\ 5& 4 \end{bmatrix}\) C = \(\begin{bmatrix} 8 & 13\\ 7& 5 \end{bmatrix}\) Answer: C = \(\begin{bmatrix} 8 & 13\\ 7& 5 \end{bmatrix}\)
- Example 2: Subtract the two vectors \(\vec{u}\) = (3, 7, 1) and \(\vec{v}\) = (6, 2, 8) using linear algebra Solution: \(\vec{u}\) - \(\vec{v}\) = (-3, 5, -7) Answer: (-3, 5, -7)
- Example 3: Solve the equations: x + 3 = 2(y - 1) and y + 1 = 5x Solution: Solving by substitution: from x + 3 = 2(y - 1) we get x = 2y - 5. Substituting this value into the second equation gives y + 1 = 5(2y - 5), so 9y = 26 and y = 26 / 9. Then y + 1 = 5x gives (26 / 9) + 1 = 5x, so x = 7 / 9. Answer: x = 7 / 9, y = 26 / 9
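The substitution in Example 3 can be checked symbolically; a sketch assuming the SymPy library.

```python
from sympy import symbols, Eq, solve

x, y = symbols('x y')

# The two equations from Example 3.
sol = solve([Eq(x + 3, 2 * (y - 1)), Eq(y + 1, 5 * x)], [x, y])
print(sol)  # {x: 7/9, y: 26/9}
```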


## FAQs on Linear Algebra

## What is the Meaning of Linear Algebra?

Linear algebra is a branch of mathematics that deals with the study of linear functions , vectors, matrices, and other associated aspects.

## Is Linear Algebra Difficult?

Linear algebra is a very vast branch of mathematics. However, with regular practice and a strong conceptual foundation, solving questions becomes much easier.

## What are the Prerequisites for Linear Algebra?

It is necessary to have a strong foundation regarding the properties of numbers and how to perform calculations before starting linear algebra.

## What is a Subspace in Linear Algebra?

A vector space that is entirely contained in another vector space is known as a subspace in linear algebra.

## How to Study Linear Algebra?

The first step is to instill a strong foundation in elementary algebra. Understanding concepts and regular revision of formulas are also crucial before moving on to advanced algebra. It is equally necessary to solve practice questions of various levels to succeed in this subject.

## Is Linear Algebra Harder than Calculus?

Linear algebra serves as a prerequisite for calculus . It is important to develop deep-seated knowledge of this subject before moving on to calculus. Both subjects are easy as long as concepts are clear and sums are practiced regularly.

## What is Linear Algebra Used for?

Linear algebra is used in several industries such as computer science, engineering as well as physics to create linear models using the algorithms outlined in this subject.

Problems in Mathematics

- The Cayley-Hamilton Theorem
- ( The Cayley-Hamilton Theorem ) If $p(t)$ is the characteristic polynomial for an $n\times n$ matrix $A$, then the matrix $p(A)$ is the $n \times n$ zero matrix.

Let $A=\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}$. The characteristic polynomial $p(t)$ of $A$ is \begin{align*} p(t)&=\det(A-tI)=\det\begin{bmatrix} 1-t& 1 \\ 1& 3-t \end{bmatrix} \\ &=(1-t)(3-t)-1=t^2-4t+2. \end{align*}

Then the Cayley-Hamilton theorem says that the matrix $p(A)=A^2-4A+2I$ is the $2\times 2$ zero matrix. In fact, we can directly check this: \begin{align*} p(A)&=A^2-4A+2I=\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}-4\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}+2\begin{bmatrix} 1& 0\\ 0& 1 \end{bmatrix}\\[6pt] &=\begin{bmatrix} 2& 4 \\ 4& 10 \end{bmatrix} +\begin{bmatrix} -4& -4 \\ -4& -12 \end{bmatrix} +\begin{bmatrix} 2& 0 \\ 0& 2 \end{bmatrix} =\begin{bmatrix} 0& 0 \\ 0& 0 \end{bmatrix}. \end{align*}

( The Ohio State University )
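The direct check of $p(A)=A^2-4A+2I$ above can also be done numerically; a sketch assuming the NumPy library.

```python
import numpy as np

A = np.array([[1, 1],
              [1, 3]])

# Characteristic polynomial p(t) = t^2 - 4t + 2, so p(A) = A^2 - 4A + 2I.
pA = A @ A - 4 * A + 2 * np.eye(2, dtype=int)
print(pA)  # the 2x2 zero matrix, as the Cayley-Hamilton theorem predicts
```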

- Find the inverse matrix of the matrix $A=\begin{bmatrix} 1 & 1 & 2 \\ 9 &2 &0 \\ 5 & 0 & 3 \end{bmatrix}$ using the Cayley–Hamilton theorem.
- Find the inverse matrix of the $3\times 3$ matrix $A=\begin{bmatrix} 7 & 2 & -2 \\ -6 &-1 &2 \\ 6 & 2 & -1 \end{bmatrix}$ using the Cayley-Hamilton theorem.
- Let \[A=\begin{bmatrix} 1 & -1\\ 2& 3 \end{bmatrix}.\] Find the eigenvalues and the eigenvectors of the matrix \[B=A^4-3A^3+3A^2-2A+8E.\] ( Nagoya University )
- Let $A, B$ be complex $2\times 2$ matrices satisfying the relation $A=AB-BA$. Prove that $A^2=O$, where $O$ is the $2\times 2$ zero matrix.
- In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not. (a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$. (b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$.
- Suppose that $A$ is $2\times 2$ matrix that has eigenvalues $-1$ and $3$. Then for each positive integer $n$ find $a_n$ and $b_n$ such that $A^{n+1}=a_nA+b_nI$, where $I$ is the $2\times 2$ identity matrix.
- Suppose that the $2 \times 2$ matrix $A$ has eigenvalues $4$ and $-2$. For each integer $n \geq 1$, there are real numbers $b_n , c_n$ which satisfy the relation $A^{n} = b_n A + c_n I$, where $I$ is the identity matrix. Find $b_n$ and $c_n$ for $2 \leq n \leq 5$, and then find a recursive relationship to find $b_n, c_n$ for every $n \geq 1$.

The following conditions on an $n\times n$ complex matrix $A$ are equivalent. (1) There exists a vector $\mathbf{v}\in \C^n$ such that \[\mathbf{v}, A\mathbf{v}, A^2\mathbf{v}, \dots, A^{n-1}\mathbf{v}\] form a basis of $\C^n$. (2) There exists an invertible matrix $S$ such that $S^{-1}AS=C$. (Namely, $A$ is similar to the companion matrix of its characteristic polynomial.)

- Let $n>1$ be a positive integer. Let $V=M_{n\times n}(\C)$ be the vector space over the complex numbers $\C$ consisting of all complex $n\times n$ matrices. The dimension of $V$ is $n^2$. Let $A \in V$ and consider the set \[S_A=\{I=A^0, A, A^2, \dots, A^{n^2-1}\}\] of $n^2$ elements. Prove that the set $S_A$ cannot be a basis of the vector space $V$ for any $A\in V$.
- Let $A$ be a $3\times 3$ real orthogonal matrix with $\det(A)=1$. (a) If $\frac{-1+\sqrt{3}i}{2}$ is one of the eigenvalues of $A$, then find the all the eigenvalues of $A$. (b) Let $A^{100}=aA^2+bA+cI$, where $I$ is the $3\times 3$ identity matrix. Using the Cayley-Hamilton theorem, determine $a, b, c$. ( Kyushu University )
- Let $A$ and $B$ be $2\times 2$ matrices such that $(AB)^2=O$, where $O$ is the $2\times 2$ zero matrix. Determine whether $(BA)^2$ must be $O$ as well. If so, prove it. If not, give a counter example.
- Introduction to Matrices
- Elementary Row Operations
- Gaussian-Jordan Elimination
- Solutions of Systems of Linear Equations
- Linear Combination and Linear Independence
- Nonsingular Matrices
- Inverse Matrices
- Subspaces in $\R^n$
- Bases and Dimension of Subspaces in $\R^n$
- General Vector Spaces
- Subspaces in General Vector Spaces
- Linearly Independency of General Vectors
- Bases and Coordinate Vectors
- Dimensions of General Vector Spaces
- Linear Transformation from $\R^n$ to $\R^m$
- Linear Transformation Between Vector Spaces
- Orthogonal Bases
- Determinants of Matrices
- Computations of Determinants
- Introduction to Eigenvalues and Eigenvectors
- Eigenvectors and Eigenspaces
- Diagonalization of Matrices
- Dot Products and Length of Vectors
- Eigenvalues and Eigenvectors of Linear Transformations
- Jordan Canonical Form

## Subspaces - Examples with Solutions

## Definition of Subspaces

If W is a subset of a vector space V and W is itself a vector space under the operations of addition and scalar multiplication inherited from V, then W is called a subspace. To show that W is a subspace of V, it is enough to show that

- W is a subset of V
- The zero vector of V is in W
- For any vectors u and v in W, u + v is in W (closure under addition)
- For any vector u in W and any scalar r, the product r · u is in W (closure under scalar multiplication)

## Examples of Subspaces

Example 1
The set W of vectors of the form \( (x,0) \), where \( x \in \mathbb{R} \), is a subspace of \( \mathbb{R}^2 \) because:
- W is a subset of \( \mathbb{R}^2 \), whose vectors are of the form \( (x,y) \) with \( x \in \mathbb{R} \) and \( y \in \mathbb{R} \)
- The zero vector \( (0,0) \) is in W
- \( (x_1,0) + (x_2,0) = (x_1 + x_2 , 0) \), closure under addition
- \( r \cdot (x,0) = (r x , 0) \), closure under scalar multiplication

Example 2 The set W of vectors of the form \( (x,y) \) such that \( x \ge 0 \) and \( y \ge 0 \) is not a subspace of \( \mathbb{R}^2 \) because it is not closed under scalar multiplication. Vector \( \textbf{u} = (2,2) \) is in W but its negative \( -1(2,2) = (-2,-2) \) is not in W.

Example 3
The set \( W = \{ (x,y,z) \mid x + y + z = 0 \} \) is a subspace of \( \mathbb{R}^3 \) because:
1) It is a subset of \( \mathbb{R}^3 = \{ (x,y,z) \} \)
2) The vector \( (0,0,0) \) is in W since \( 0 + 0 + 0 = 0 \)
3) Let \( \textbf{u} = (x_1 , y_1 , z_1) \) and \( \textbf{v} = (x_2 , y_2 , z_2) \) be vectors in W, so that \( x_1 + y_1 + z_1 = 0 \) and \( x_2 + y_2 + z_2 = 0 \). Their sum is \( \textbf{u} + \textbf{v} = (x_1+x_2 , y_1+y_2 , z_1+z_2) \), and the sum of its components is \( (x_1+x_2) + (y_1+y_2) + (z_1+z_2) = (x_1+y_1+z_1) + (x_2+y_2+z_2) = 0 + 0 = 0 \); hence W is closed under addition.
4) Let \( r \) be a real number. Then \( r\textbf{u} = (r x_1 , r y_1 , r z_1) \), and \( r x_1 + r y_1 + r z_1 = r( x_1 + y_1 + z_1 ) = r \cdot 0 = 0 \); hence W is closed under scalar multiplication.
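The closure conditions in Example 3 can be spot-checked numerically by sampling random vectors in W; a sketch assuming the NumPy library. (Sampling cannot prove closure, of course; the algebra above does.)

```python
import numpy as np

def in_W(v, tol=1e-9):
    """Membership test for W = {(x, y, z) : x + y + z = 0}."""
    return abs(sum(v)) < tol

rng = np.random.default_rng(0)
for _ in range(100):
    # Build two vectors in W by choosing z = -(x + y).
    u = rng.standard_normal(2); u = np.append(u, -u.sum())
    v = rng.standard_normal(2); v = np.append(v, -v.sum())
    r = rng.standard_normal()
    assert in_W(u) and in_W(v)
    assert in_W(u + v)   # closure under addition
    assert in_W(r * u)   # closure under scalar multiplication
print("all sampled closure checks passed")
```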

## More References and links

- Vector Spaces - Questions with Solutions
- Linear Algebra and its Applications - 5 th Edition - David C. Lay , Steven R. Lay , Judi J. McDonald
- Elementary Linear Algebra - 7 th Edition - Howard Anton and Chris Rorres

## IMAGES

## VIDEO

## COMMENTS

Problems 12 2.4. Answers to Odd-Numbered Exercises14 Chapter 3. ELEMENTARY MATRICES; DETERMINANTS15 3.1. Background 15 3.2. Exercises 17 3.3. Problems 22 ... linear algebra class such as the one I have conducted fairly regularly at Portland State University. There is no assigned text. Students are free to choose their own sources of information.

Linear algebra questions with solutions and detailed explanations on matrices , spaces, subspaces and vectors , determinants , systems of linear equations and online linear algebra calculators are included. Matrices Matrices with Examples and Questions with Solutions . Transpose of a Matrix . Symmetric Matrix . Identity Matrix . Diagonal Matrices .

Linear Algebra Problems Math 504 { 505 Jerry L. Kazdan Topics 1 Basics 2 Linear Equations 3 Linear Maps 4 Rank One Matrices 5 Algebra of Matrices ... The only solution of the homogeneous equations Ax= 0 is x= 0. f) The linear transformation T A: Rn!Rn de ned by Ais 1-1. g) The linear transformation T

2 Solutions to Problem Sets Problem Set 1.1, page 6 1c =ma and d mb lead to ad = amb = bc. With no zeros, ad = bc is the equation for a 2×2 matrix to have rank 1. 2The three edges going around the triangle are u = (5,0),v = (−5,12),w = (0,−12). Their sum is u + v + w = (0,0). Their lengths are ||u||= 5,||v||= 13,||w||= 12.

Linear Algebra: Graduate Level Problems and Solutions Igor Yanovsky 1. Linear Algebra Igor Yanovsky, 2005 2 Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, ... 1.2 Linear Maps as Matrices Example.

Here is a set of practice problems to accompany the Linear Equations section of the Solving Equations and Inequalities chapter of the notes for Paul Dawkins Algebra course at Lamar University.

1 Problems: What is Linear Algebra 3 2 Problems: Gaussian Elimination 7 3 Problems: Elementary Row Operations 12 4 Problems: Solution Sets for Systems of Linear Equations 15 5 Problems: Vectors in Space, n-Vectors 20 6 Problems: Vector Spaces 23 7 Problems: Linear Transformations 28 8 Problems: Matrices 31 9 Problems: Properties of Matrices 37

Unit 2: Matrix transformations. Functions and linear transformations Linear transformation examples Transformations and matrix multiplication. Inverse functions and transformations Finding inverses and determinants More determinant depth Transpose of a matrix.

Linear Algebra is a systematic theory regarding the solutions of systems of linear equations. Example 1.2.1. Let us take the following system of two linear equations in the two unknowns x₁ and x₂: 2x₁ + x₂ = 0 and x₁ − x₂ = 1. This system has a unique solution for x₁, x₂ ∈ R, namely x₁ = 1/3, x₂ = −2/3.
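The unique solution of this 2×2 system can be confirmed with a direct solve; a minimal sketch, assuming NumPy:

```python
import numpy as np

# The system from Example 1.2.1: 2*x1 + x2 = 0, x1 - x2 = 1.
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([0.0, 1.0])

x = np.linalg.solve(A, b)
print(x)  # [ 0.33333333 -0.66666667], i.e. x1 = 1/3, x2 = -2/3
```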

Math > Algebra 1 > Forms of linear equations > Intro to slope-intercept form. Linear equations word problems. Ever since Renata moved to her new home, she's been keeping track of the height of the tree outside her window. H represents the height of the tree (in centimeters) t years after Renata moved in: H = 210 + 33t.

Solution files: MIT18_06SCF11_Ses3.5sol.pdf, MIT18_06SCF11_Ses3.6sol.pdf, MIT18_06SCF11_Ses3.7sol.pdf. MIT OpenCourseWare is a web-based publication of virtually all MIT course content. OCW is open and available to the world and is a permanent MIT activity.


Question 1: Show that the matrix A is a unitary matrix, where \(A=\frac{1}{5}\begin{bmatrix}-1+2i & -4-2i \\ 2-4i & -2-i \end{bmatrix}\). Solution: A matrix is said to be unitary if and only if AA* = A*A = I, where A* is the conjugate transpose of A. Given,
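The unitarity condition AA* = A*A = I is straightforward to verify numerically as well; a sketch, assuming NumPy:

```python
import numpy as np

# The matrix from Question 1; A* is the conjugate transpose.
A = np.array([[-1 + 2j, -4 - 2j],
              [ 2 - 4j, -2 - 1j]]) / 5

A_star = A.conj().T

# Unitary  <=>  A A* = A* A = I
print(np.allclose(A @ A_star, np.eye(2)))  # True
print(np.allclose(A_star @ A, np.eye(2)))  # True
```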

Recipe 1: Compute a Least-Squares Solution. Let A be an m × n matrix and let b be a vector in R^m. Here is a method for computing a least-squares solution of Ax = b: Compute the matrix A^T A and the vector A^T b. Form the augmented matrix for the matrix equation A^T A x = A^T b, and row reduce.
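The recipe above (solve the normal equations A^T A x = A^T b) can be sketched in code, assuming NumPy; the overdetermined system below is a made-up example, not from the text:

```python
import numpy as np

# Hypothetical overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Recipe 1: form the normal equations A^T A x = A^T b and solve them.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against NumPy's built-in least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_normal, x_lstsq))  # True
```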

Algebra (all content) 20 units · 412 skills. Unit 1 Introduction to algebra. Unit 2 Solving basic equations & inequalities (one variable, linear) Unit 3 Linear equations, functions, & graphs. Unit 4 Sequences. Unit 5 System of equations. Unit 6 Two-variable inequalities. Unit 7 Functions. Unit 8 Absolute value equations, functions, & inequalities.

Linear Algebra Problems and Solutions. Popular topics in Linear Algebra are Vector Space, Linear Transformation, Diagonalization, Gauss-Jordan Elimination, Inverse Matrix, Eigenvalue, and the Cayley-Hamilton Theorem. ... Linear Algebra Problems by Topics. The list of linear algebra problems is available here.

Step-by-Step Examples, Linear Algebra: Vectors; Introduction to Matrices; Matrices; Complex Numbers; Systems of Linear Equations; Linear Independence and Combinations; Vector Spaces; Eigenvalues and Eigenvectors; Linear Transformations; Number Sets.

Key Idea 1.4.1: Consistent Solution Types. A consistent linear system of equations will have exactly one solution if and only if there is a leading 1 for each variable in the system. If a consistent linear system of equations has a free variable, it has infinite solutions. If a consistent linear system has more variables than leading 1s, then ...
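Key Idea 1.4.1 can be illustrated with a rank test: the rank of A equals the number of leading 1s in its reduced row echelon form, so a consistent system has a unique solution exactly when rank(A) equals the number of variables. A sketch, assuming NumPy, with made-up example systems:

```python
import numpy as np

def solution_type(A, b):
    """Classify the system Ax = b via ranks (rank = number of leading 1s)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return "inconsistent"          # no solution
    if rank_A == A.shape[1]:
        return "unique"                # a leading 1 for every variable
    return "infinite"                  # a free variable remains

print(solution_type([[1, 0], [0, 1]], [1, 2]))  # unique
print(solution_type([[1, 1], [2, 2]], [3, 6]))  # infinite
print(solution_type([[1, 1], [2, 2]], [3, 7]))  # inconsistent
```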

Linear algebra is a branch of mathematics that deals with linear equations and their representations in the vector space using matrices. In other words, linear algebra is the study of linear functions and vectors. It is one of the most central topics of mathematics. Most modern geometrical concepts are based on linear algebra.

The definition of vector spaces in linear algebra is presented along with examples and their detailed solutions. Vector Spaces - Examples with Solutions ... Classifying sets by their properties helps in solving problems involving different kinds of mathematical objects such as matrices, polynomials, 2-d vectors, 3-d vectors, n-d vectors, planes ...

\(A = \begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}\). Determine whether the matrix A is diagonalizable. If it is diagonalizable, then diagonalize A. Let A be an n × n matrix with the characteristic polynomial p(t) = t³(t − 1)²(t − 2)⁵(t + 2)⁴. Assume that the matrix A is diagonalizable. (a) Find the size of the matrix A.
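For the first matrix, A is real symmetric, so the spectral theorem guarantees it is diagonalizable. A numerical check, assuming NumPy:

```python
import numpy as np

# The symmetric matrix A from the first problem.
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])

# eigh handles symmetric matrices: eigenvalues come back in ascending
# order with an orthonormal matrix of eigenvectors P.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.round(eigenvalues, 6))     # approximately [0, 3, 3]
print(np.allclose(A, P @ D @ P.T))  # True: A = P D P^T
```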

Let V = M_{n×n}(C) be the vector space over the complex numbers C consisting of all complex n × n matrices. The dimension of V is n². Let A ∈ V and consider the set S_A of n² elements. Prove that the set S_A cannot be a basis of the vector space V for any A ∈ V.
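The fact that dim V = n² can be illustrated for n = 2: the four matrix units E_ij (a 1 in position (i, j), zeros elsewhere), flattened to vectors, are linearly independent, so they form a basis of M_{2×2}(C). A sketch, assuming NumPy:

```python
import numpy as np

# Flatten the n^2 matrix units E_ij and check they are independent.
n = 2
units = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1.0
        units.append(E.flatten())

rank = np.linalg.matrix_rank(np.array(units))
print(rank)  # 4 = n**2, so the E_ij form a basis of M_{2x2}(C)
```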

Subspaces - Examples with Solutions. Definition of Subspaces. If W is a subset of a vector space V and if W is itself a vector space under the inherited operations of addition and scalar multiplication from V, then W is called a subspace. To show that W is a subspace of V, it is enough to show that W is a subset of V, the zero vector of V ...
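The usual subspace criteria (contains the zero vector, closed under addition and scalar multiplication) can be spot-checked numerically. This is only an illustration for a hypothetical W, not a proof; a full proof must argue for all vectors and scalars:

```python
import numpy as np

# Illustration: W = { x in R^3 : x1 + x2 + x3 = 0 } is a subspace of R^3.
in_W = lambda x: bool(np.isclose(np.sum(x), 0.0))

u = np.array([1.0, -2.0, 1.0])
v = np.array([0.5, 0.5, -1.0])

print(in_W(np.zeros(3)))  # True: the zero vector is in W
print(in_W(u + v))        # True: closed under addition (spot check)
print(in_W(3.0 * u))      # True: closed under scaling (spot check)
```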