The ‘span’ of two vectors u and v is the set of all of their linear combinations au + bv, where a and b are scalars.
It’s basically the set of all the vectors reachable with u and v as basis vectors.
The span of a single vector v is simply the line through the origin along that vector, i.e., all scalings λv.
The span of 2 vectors in 3D space is the plane through the origin containing them - provided neither is the zero vector and one isn’t just a scaled copy of the other. That redundancy is called linear dependence - one vector is a linear combination of the others. In that case, their span collapses to a 1D line.
A basis of a vector space is a set of linearly independent vectors that span the full space. In 2D, the unit vectors i^ and j^ form a basis; add k^ to get a basis for 3D space.
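A quick numpy sketch of these ideas (the vectors are made up for illustration): stack vectors as columns and check the matrix rank to see whether they are independent and what dimension their span has.

```python
import numpy as np

# Two hypothetical 3D vectors (illustrative values).
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, -1.0])

# Their span is {a*u + b*v}; stack them as columns and check the rank.
M = np.column_stack([u, v])
print(np.linalg.matrix_rank(M))   # 2 -> independent, span is a plane

# A redundant pair: w is just a scaled copy of u, so the span collapses to a line.
w = 3 * u
print(np.linalg.matrix_rank(np.column_stack([u, w])))  # 1 -> linearly dependent

# The standard basis i^, j^, k^ spans all of 3D space.
print(np.linalg.matrix_rank(np.eye(3)))                # 3 -> a basis for R^3
```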
Linear Transformations
Linear if - all lines remain lines - and the origin remains fixed in place (no bias term).
Or more formally, if f(a+b)=f(a)+f(b) and f(ca)=cf(a) then f is a linear transform.
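A small numerical sanity check of the two conditions, using a random matrix as the linear map (a sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Any matrix acts as a linear map f(x) = A x; test both properties on random inputs.
A = rng.normal(size=(2, 2))
a, b = rng.normal(size=2), rng.normal(size=2)
c = 3.7

print(np.allclose(A @ (a + b), A @ a + A @ b))  # f(a + b) = f(a) + f(b)
print(np.allclose(A @ (c * a), c * (A @ a)))    # f(c*a) = c*f(a)
```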
Given some basis vectors, any vector under a linear transform is still the same linear combination, just of the transformed basis vectors. So if v = 5i^ − 2j^, then after the transform v lands on 5·(transformed i^) − 2·(transformed j^).
Matrix × vector is exactly this: the columns of the matrix record where the basis vectors land, and the product is that same linear combination of the transformed basis vectors.
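For example, with a hypothetical 2D matrix (a 90° rotation, chosen just for illustration) and v = 5i^ − 2j^, the product is the same combination of the matrix’s columns:

```python
import numpy as np

# Columns of A are where i^ and j^ land (here: a 90-degree rotation).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([5.0, -2.0])          # v = 5*i^ - 2*j^

# A @ v equals the same linear combination of the transformed basis vectors.
same = 5 * A[:, 0] - 2 * A[:, 1]
print(np.allclose(A @ v, same))    # True
```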
An affine transformation is a linear transformation plus a possible shift of the origin - so it can have a bias term. Linear transforms are therefore a subset of affine transforms.
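A tiny sketch of the difference, with a made-up matrix and bias: the bias term moves the origin, which is exactly what a purely linear map cannot do.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
bias = np.array([1.0, -3.0])       # shift of the origin

affine = lambda x: A @ x + bias

origin = np.zeros(2)
print(affine(origin))              # [ 1. -3.] -> origin moves, so not linear
# With bias = 0 the map reduces to the plain linear transform A @ x.
```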
Eigenvectors and Eigenvalues
An eigenvector is a vector that, under a transformation A, remains on its own span. It only gets scaled by some factor λ, which is called the eigenvalue.
Av = λv
Av − λIv = 0
(A − λI)v = 0
det(A − λI) = 0 (needed for a nonzero solution v to exist)
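In numpy this is np.linalg.eig; a minimal check on an example matrix (values chosen so the eigenvalues come out nicely) that each eigenvector really stays on its span:

```python
import numpy as np

# Illustrative matrix with easy eigenvalues.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)     # columns of eigvecs are the eigenvectors

for lam, v in zip(eigvals, eigvecs.T):
    # Each eigenvector only gets scaled: A v = lambda * v.
    print(np.allclose(A @ v, lam * v))  # True
print(eigvals)                          # [3. 2.]
```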
Eigenbasis - when the basis vectors themselves are eigenvectors under the given transformation.
This means if we change our basis to two eigenvectors, the transformation matrix becomes diagonal, and diagonal matrices are trivial to take powers of.
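A sketch of that, continuing with the same example matrix: diagonalize as A = P D P⁻¹ (P’s columns are the eigenbasis), and then Aⁿ = P Dⁿ P⁻¹, where Dⁿ just raises the diagonal entries to the n-th power.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, P = np.linalg.eig(A)      # P's columns form an eigenbasis
D = np.diag(eigvals)

# In the eigenbasis the transform is diagonal: A = P D P^-1.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))

# Powers become trivial: A^n = P D^n P^-1, with D^n computed element-wise.
n = 10
A_n = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)
print(np.allclose(A_n, np.linalg.matrix_power(A, n)))
```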