Definition of rank and its characteristics by column vectors, with a 3×4 matrix

Rank of a matrix

The column rank of a matrix A is the maximal number of linearly independent columns of A. Likewise, the row rank is the maximal number of linearly independent rows of A.

Since the column rank and the row rank are always equal, they are simply called the rank of A. More abstractly, it is the dimension of the image of A.

The rank of an m × n matrix is at most min(m, n). A matrix that has rank as large as possible is said to have full rank; otherwise, the matrix is rank deficient.
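As a concrete illustration of this bound, here is a minimal sketch assuming NumPy; the 3×4 matrix is a made-up example whose third row is the sum of the first two:

```python
import numpy as np

# A 3x4 matrix: its rank can be at most min(3, 4) = 3.
# The third row equals the sum of the first two, so the rank is only 2.
A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 2],
              [1, 3, 1, 3]])

print(np.linalg.matrix_rank(A))  # 2 -> rank deficient
```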

More generally, if a linear operator on a vector space (possibly infinite-dimensional) has finite-dimensional range (e.g., a finite-rank operator), then the rank of the operator is defined as the dimension of the range.

More definitions of the rank of a matrix

If one considers the matrix A as the linear map

f : F^n → F^m

with the rule

f(x) = Ax,

then the rank of A can also be defined as the dimension of the image of f. This definition has the advantage that it can be applied to any linear map without need for a specific matrix. The rank can also be defined as n minus the dimension of the kernel of f; the rank–nullity theorem states that this is the same as the dimension of the image of f.
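Assuming NumPy, the rank–nullity theorem can be spot-checked numerically on the same kind of 3×4 example; the tolerance 1e-10 is an arbitrary cutoff for deciding which singular values count as zero:

```python
import numpy as np

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 2],
              [1, 3, 1, 3]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)

# The nullity is n minus the number of non-zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - np.sum(s > 1e-10)

print(rank, nullity, rank + nullity == n)  # 2 2 True
```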

Column rank – dimension of column space

The maximal number of linearly independent columns of the m × n matrix A with entries in the field F is equal to the dimension of the column space of A (the column space being the subspace of F^m generated by the columns of A, which is in fact just the image of A).

Row rank – dimension of row space

Since the column rank and the row rank are the same, we can also define the rank of A as the dimension of the row space of A, or the number of rows in a basis of the row space.

Decomposition rank

The rank can also be characterized as the decomposition rank: the minimum k such that A can be factored as A = CR, where C is an m × k matrix and R is a k × n matrix. Like the "dimension of image" characterization, this can be generalized to a definition of the rank of a linear map: the rank of a linear map f from V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W. While this definition does not suggest an efficient way to compute the rank (for which it is better to use one of the alternative definitions), it does make it easy to understand many of the properties of the rank, for instance that the rank of the transpose of A is the same as that of A.
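One way to build such a factorization A = CR is from the reduced row echelon form: take the pivot columns of A as C and the non-zero rows of the RREF as R. A minimal sketch assuming SymPy (the matrix is a made-up example):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 2],
               [1, 3, 1, 3]])

R_full, pivots = A.rref()      # reduced row echelon form and pivot column indices
k = len(pivots)                # k = rank(A)

C = A[:, list(pivots)]         # m x k: the pivot columns of A
R = R_full[:k, :]              # k x n: the non-zero rows of the RREF

assert C * R == A              # A = CR with the minimal k
```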

Determinantal rank – size of largest non-vanishing minor

Another equivalent definition of the rank of a matrix is the greatest order of any non-zero minor in the matrix (the order of a minor being the size of the square sub-matrix of which it is the determinant). Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful to prove that certain operations do not lower the rank of a matrix.
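A brute-force search over all square sub-matrices makes this characterization concrete, though it is exponential and only sensible for tiny matrices. A sketch assuming NumPy (determinantal_rank is a hypothetical helper name):

```python
import numpy as np
from itertools import combinations

def determinantal_rank(A, tol=1e-10):
    """Greatest order of a non-vanishing minor (brute force; tiny matrices only)."""
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                # A non-zero k x k minor witnesses rank >= k.
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 2],
              [1, 3, 1, 3]])
print(determinantal_rank(A), np.linalg.matrix_rank(A))  # 2 2
```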


Tensor rank – minimum number of simple tensors

The rank of a square matrix can also be characterized as the tensor rank: the minimum number of simple tensors (rank-1 tensors) needed to express A as a linear combination. Here a rank-1 tensor (the matrix product of a column vector and a row vector) is the same thing as a rank-1 matrix of the given size. This interpretation can be generalized in the separable models interpretation of the singular value decomposition.
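The singular value decomposition makes this explicit: any matrix of rank r is a sum of r rank-1 outer products. A minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.],
              [1., 3., 1., 3.]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))      # numerical rank, here 2

# Rebuild A as a sum of r rank-1 matrices (outer products of singular vectors).
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
print(np.allclose(A, A_rebuilt))  # True
```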

Properties of the rank of a matrix

* Only a zero matrix has rank zero.

* f is injective if and only if A has rank n (in this case, we say that A has full column rank).

* f is surjective if and only if A has rank m (in this case, we say that A has full row rank).

* In the case of a square matrix A (i.e., m = n), A is invertible if and only if A has rank n (that is, A has full rank).

* If B is any n-by-k matrix, then rank(AB) ≤ min(rank(A), rank(B)).

* As an example of the "<" case, consider the product

$\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$

* Both factors have rank 1, but the product has rank 0.

* If B is an n-by-k matrix with rank n, then rank(AB) = rank(A).

* If C is an l-by-m matrix with rank m, then rank(CA) = rank(A).

* The rank of A is equal to r if and only if there exists an invertible m-by-m matrix X and an invertible n-by-n matrix Y such that

$XAY = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix},$

* where I_r denotes the r-by-r identity matrix.

* Sylvester's rank inequality: if A is an m-by-n matrix and B is n-by-k, then rank(A) + rank(B) − n ≤ rank(AB).

* This is a special case of the next inequality.

* The inequality due to Frobenius: if AB, ABC and BC are defined, then rank(AB) + rank(BC) ≤ rank(B) + rank(ABC).

* Subadditivity: rank(A + B) ≤ rank(A) + rank(B) when A and B have the same dimensions. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer.

* The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix (this is the "rank theorem" or the "rank–nullity theorem").

* The rank of a matrix and the rank of its corresponding Gram matrix are equal.

* This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by the vectors x for which A^T A x = 0. If this condition is fulfilled, then 0 = x^T A^T A x = |Ax|^2 also holds, so Ax = 0.

* If A* denotes the conjugate transpose of A (i.e., the adjoint of A), then rank(A) = rank(A*) = rank(A*A) = rank(AA*). (Several of these properties are spot-checked in the sketch below.)
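A minimal numerical check of several of the listed properties, assuming NumPy; the random matrices are arbitrary test data, not anything from the text above:

```python
import numpy as np
rank = np.linalg.matrix_rank

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((3, 4))

assert rank(A @ B) <= min(rank(A), rank(B))    # product bound
assert rank(A) + rank(B) - 4 <= rank(A @ B)    # Sylvester, shared dimension n = 4
assert rank(A + C) <= rank(A) + rank(C)        # subadditivity
assert rank(A.T @ A) == rank(A)                # Gram matrix has the same rank
print("all checks passed")
```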

Linear dependence

* For a square matrix, linear dependence of the row vectors implies linear dependence of the column vectors.

* The vectors x1, x2, …, xr are said to be linearly dependent if there exist r numbers s1, s2, …, sr, not all zero, such that s1x1 + s2x2 + s3x3 + … + srxr = 0.

* If no such numbers, other than all zeros, exist, the vectors are said to be linearly independent.

* If a given matrix has r independent vectors (rows or columns) and the remaining vectors are linear combinations of these r vectors, then the rank of the matrix is r.

* If a matrix is of rank r, it contains r linearly independent vectors and the remaining vectors can be expressed as linear combinations of these vectors. For a non-square matrix, either the row vectors or the column vectors must always be linearly dependent. (A numerical dependence check is sketched below.)
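To test a set of vectors for linear dependence numerically, stack them as the columns of a matrix and compare the rank with the number of vectors. A sketch assuming NumPy; the vectors are a made-up example with x3 = x1 + x2:

```python
import numpy as np

x1 = np.array([1., 0., 1.])
x2 = np.array([2., 1., 3.])
x3 = np.array([3., 1., 4.])   # x3 = x1 + x2, so the set is dependent

V = np.column_stack([x1, x2, x3])
r = np.linalg.matrix_rank(V)
print(r, "dependent" if r < V.shape[1] else "independent")  # 2 dependent
```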

Proof

Let a1, a2, …, am be a set of column vectors of dimension n × 1, and let λ1, λ2, …, λm be a set of scalar weights. Then the vector c defined by

$c = \sum_{i=1}^{m} \lambda_i a_i$

is called a linear combination of the vectors a1, a2, …, am.

Under what conditions on the weights λi will this linear combination be equal to the n × 1 zero column vector 0n? Clearly this will be the case if all the weights are zero, λi = 0 for all i. If this is the only condition under which c = 0, then the vectors a1, a2, …, am are called linearly independent. However, if there are values for the λi such that $\sum_{i=1}^{m} \lambda_i a_i = 0$ where at least one λi ≠ 0, then the vectors ai are said to be linearly dependent.

If a set of vectors is linearly dependent, then it is possible to write one of the vectors as a linear combination of the others. For example, if $\sum_{i=1}^{m} \lambda_i a_i = 0$ with λj ≠ 0, then

$a_j = -\frac{1}{\lambda_j} \sum_{i=1,\, i \neq j}^{m} \lambda_i a_i.$

Note that if m > n, then the set of m column vectors a1, a2, …, am must be linearly dependent. Similarly, if any vector is equal to 0n, then the set of vectors must be linearly dependent.
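The rewriting step in this proof (solving for aj when λj ≠ 0) can be mirrored numerically: a weight vector in the null space of the stacked matrix supplies the λi. A sketch assuming NumPy, with the same made-up dependent vectors as above:

```python
import numpy as np

a1 = np.array([1., 0., 1.])
a2 = np.array([2., 1., 3.])
a3 = np.array([3., 1., 4.])   # a3 = a1 + a2
A = np.column_stack([a1, a2, a3])

# The right-singular vector for the zero singular value gives weights lam
# with A @ lam = 0 (the lambda_i of the proof).
lam = np.linalg.svd(A)[2][-1]
print(np.allclose(A @ lam, 0))              # True: the columns are dependent

# Solve for a3 as a combination of the others: a3 = -(lam1*a1 + lam2*a2)/lam3.
a3_rebuilt = -(lam[0] * a1 + lam[1] * a2) / lam[2]
print(np.allclose(a3, a3_rebuilt))          # True
```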

* Given a set of vectors vi, look for combinations

* c1v1 + c2v2 + c3v3 + … + cnvn = 0.

* If there exists any solution other than ci = 0 for all coefficients ci, the vectors are linearly dependent – it is possible to express some of the vectors as linear combinations of others.

* When an n × m matrix is reduced to echelon form, the non-zero rows are independent and the columns with pivots are independent.

* A set of n vectors in R^m must be linearly dependent if n > m.

* If a vector space V consists of all linear combinations of particular vectors w1, w2, w3, …, wn, then those vectors span the space. Every vector in V can be expressed as a combination of the wi.

* To decide whether b is a combination of the columns of a matrix A composed of the column vectors wi, solve Ax = b.

* To decide whether the columns of A are independent (i.e. whether the wi are linearly independent), solve Ax = 0. If a nontrivial solution can be found, they are dependent (see the sketch after this list).

* Spanning involves the column space; independence involves the null space.

* A basis for a vector space is a set of vectors with two properties: it is linearly independent and it spans the space.

* Every vector in the space is a combination of the basis vectors because they span the space. Every combination is unique because the basis vectors are linearly independent.

* Any two bases for the same space contain the same number of vectors. This is the dimension of the space.
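The two decision procedures in the list above (solve Ax = b for spanning, Ax = 0 for independence) look like this in NumPy; the vectors are a made-up example:

```python
import numpy as np

w1 = np.array([1., 0., 1.])
w2 = np.array([0., 1., 1.])
A = np.column_stack([w1, w2])

# Spanning: is b a combination of the columns? Solve A x = b by least
# squares and check that the residual vanishes.
b = np.array([2., 3., 5.])    # b = 2*w1 + 3*w2
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ x, b))  # True: b lies in the column space

# Independence: A x = 0 has only the trivial solution exactly when
# rank(A) equals the number of columns.
print(np.linalg.matrix_rank(A) == A.shape[1])  # True: independent
```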

– If x′ is a market equilibrium choice of activity levels for some prices p′, then x′ solves Δ.

We now show the converse: suppose that x′ solves Δ and that the vectors ai with i ∈ C(x′) are linearly independent. Then x′ is a market equilibrium for some p′.

We will use two equivalent definitions of linear independence: the vectors a1, …, at are linearly independent if and only if π1a1 + … + πtat = 0 implies π1 = … = πt = 0; or the vectors a1, …, at are linearly independent if and only if a1·x = b1, …, at·x = bt has a solution for any values of b1, …, bt. A set of vectors is linearly dependent when the vectors are not linearly independent. With linearly dependent vectors a1, …, at the above system of equations almost never has a solution. That is, for almost every value of the right-hand-side variables b1, …, bt the system has no solution. The linear independence assumption is therefore mild; there are almost no levels for the resource endowments such that, if t constraints bind, the t vectors a1, …, at are linearly dependent. Although we will not show this, the theorem actually remains true even without the linear independence assumption.


Suppose x′ solves the linear programming problem Δ. Profit maximization is equivalent to the vector equality $c = \sum_{i=1}^{l} p_i' a_i$. It is therefore sufficient to show that there is a p′ ≥ 0 such that $c = \sum_{i \in C(x')} p_i' a_i$, since we can set pi′ = 0 for i ∉ C(x′). Let us relabel the names of the resources so that C(x′) = {1, …, m}. It is easy to see that there can be no vector z such that c·z = 1 and ai·z = −1 for all i ∈ C(x′). For if there were, then for small enough ε > 0, x′ + εz would be feasible and would lead to a higher value of the objective function than would c·x′. (For small enough ε the slack constraints remain slack at x′ + εz.)

It follows that the vectors c, a1, …, am are linearly dependent. (If they were linearly independent, then for any (m+1)-vector d there would exist a z such that c·z = d0, a1·z = d1, …, am·z = dm.) Hence it is not the case that, for all (π0, π1, …, πm), π0c + π1a1 + … + πmam = 0 implies (π0, π1, …, πm) = 0. In other words, there does exist some π = (π0, π1, …, πm) ≠ 0 such that π0c = π1a1 + … + πmam. Since by assumption a1, …, am are linearly independent (LID), π0 ≠ 0. (If π0 = 0, then π1a1 + … + πmam = 0. But then LID requires that (π1, …, πm) = 0 and hence π = 0.) So we may set each pi′ equal to πi/π0. Hence we have $c = \sum_{i \in C(x')} p_i' a_i$, as desired.

To finish we need only show, for any resource k, that pk′ ≥ 0. By the LID assumption, for each k we may find a z such that ak·z = −1 and aj·z = 0 for all j ∈ C(x′), j ≠ k. Now multiply each side of $c = \sum_{i \in C(x')} p_i' a_i$ by this z:

$c \cdot z = \sum_{i \in C(x')} p_i' (a_i \cdot z) = -p_k'.$

So if pk′ < 0, then c·z > 0. But then for ε > 0 sufficiently small, x′ + εz would be feasible and would deliver a larger value of the objective function than x′, which contradicts the fact that x′ solves Δ.

This completes the proof of the statements given above.
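The price construction in the proof can be mirrored numerically: stack the binding constraint gradients, solve for the weights expressing c in their span, and check non-negativity. A sketch assuming NumPy; all data here is a hypothetical illustration, not taken from the argument above:

```python
import numpy as np

# Hypothetical binding-constraint gradients a1, a2 and an objective c
# chosen to lie in their span (c = 2*a1 + 1*a2).
a1 = np.array([1., 0., 1.])
a2 = np.array([0., 1., 1.])
c = 2 * a1 + 1 * a2

A_bind = np.vstack([a1, a2])           # rows are the binding gradients

# Solve A_bind^T p = c for candidate prices p (the pi_i / pi_0 of the proof).
p, *_ = np.linalg.lstsq(A_bind.T, c, rcond=None)

print(np.allclose(A_bind.T @ p, c))    # True: c is spanned by a1, a2
print(np.all(p >= 0), p)               # True [2. 1.]: prices are non-negative
```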

