You can think of a tensor as a generalization of a vector. Tensors are important in all areas of physics, from quantum field theory (describing physics at very small scales) to general relativity (describing physics at very large scales).

  • A scalar is a quantity that remains invariant under any type of transformation, for instance rotations of the coordinate system. Scalars are rank-0 tensors.
  • A vector is a quantity whose number of real components equals the dimension of the coordinate system. Vectors are rank-1 tensors.

You can identify a tensor of rank $n$ in a $d$-dimensional space as an object with the following properties:

  • Its components are labeled by $n$ indices, with each index taking values from 1 through $d$, so it has $d^{n}$ components
  • Its components transform in a specified manner under coordinate transformations

We will be using Cartesian systems to understand the essential ideas.


Covariant and Contravariant Tensors

  • A 3D vector $\vec{A} = A_{1} \hat{e}_{1} + A_{2} \hat{e}_{2} + A_{3} \hat{e}_{3}$ in a Cartesian system is defined with respect to $\hat{e}_{i}\ (i = 1, 2, 3)$ (called basis vectors, or simply the basis)
  • Under a rotation it transforms into $\vec{A}^{\prime} = A^{\prime}_{1} \hat{e}^{\prime} _{1} + A^{\prime} _{2} \hat{e}^{\prime} _{2} + A^{\prime} _{3}\hat{e}^{\prime} _{3} $
  • The components of $\vec{A}$ and $\vec{A'}$ are related by $A^{\prime}_{i} = \sum_{j} (\hat{e}^{\prime} _{i} \cdot \hat{e}_{j})A_{j}$, or in the Einstein convention $A^{\prime}_{i} = (\hat{e}^{\prime} _{i} \cdot \hat{e}_{j})A_{j}$, where the coefficients $\hat{e}^{\prime} _{i} \cdot \hat{e}_{j}$ are the projections of $\hat{e}^{\prime} _{i}$ onto the $\hat{e}_{j}$ directions

  • In the Einstein convention, we omit the summation sign. This is a more convenient way of writing, especially when more than one summation appears.
  • $\hat{e}^{\prime}_{i}$ and $\hat{e}_{j}$ are linearly related, which means we can also write

$$ \begin{equation} A^{\prime} _{i} = \sum_{j} \frac{\partial x^{\prime} _{i}}{\partial x_{j}} A_{j} \end{equation} $$

  • Quantities transforming according to equation (1) are called contravariant vectors
  • The gradient of a scalar $\phi$ in the unrotated Cartesian coordinates has components $(\nabla \phi)_{j} = \frac{\partial \phi}{\partial x_{j}}$, i.e. $\nabla \phi = \sum_{j} \frac{\partial \phi}{\partial x_{j}} \hat{e}_{j}$

  • In the rotated system, the gradient of a scalar has components

$$ \begin{equation} (\nabla \phi)'_{i} \equiv \frac{\partial \phi}{\partial x'_{i}} = \sum_{j} \frac{\partial x_{j}}{\partial x'_{i}} \frac{\partial \phi}{\partial x_{j}} \end{equation} $$

  • Quantities transforming according to equation (2) are called covariant vectors (also called one-forms in the language of differential forms)
  • Convention: To distinguish the two transformation rules, write the index of a contravariant vector as a superscript (e.g. $\vec{r} = (x^{1}, x^{2}, x^{3})$) and that of a covariant vector as a subscript (e.g. $\vec{s} = (x_{1}, x_{2}, x_{3})$)
  • Summarizing:

    $$ \begin{align*} (A')^{i} &= \sum_{j} \frac{\partial (x')^{i}}{\partial x^{j}} A^{j}, \quad \vec{A}, \text{a contravariant vector},\\ A'_{i} &= \sum_{j} \frac{\partial x^{j}}{\partial(x')^{i}} A_{j}, \quad \vec{A}, \text{a covariant vector} \end{align*} $$

  • Notice the placement of the indices $i$ and $j$ in the above rules.
  • The unsummed index $i$ is called a free index
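The two rules can be checked numerically. Below is a minimal numpy sketch (numpy and the variable names `S`, `A_contra`, `B_co` are my own choices, not part of these notes): for a deliberately non-orthogonal linear coordinate change $x' = Sx$, contravariant components transform with $S$, covariant components with $(S^{-1})^{T}$, and the contraction $A^{i}B_{i}$ comes out invariant.

```python
import numpy as np

# Hypothetical linear coordinate change x' = S x, chosen non-orthogonal
# so that the contravariant and covariant rules give different results.
S = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])   # Jacobian: d(x')^i / dx^j
S_inv = np.linalg.inv(S)          # inverse Jacobian: dx^j / d(x')^i

A_contra = np.array([1.0, 2.0, 3.0])  # contravariant components A^j
B_co = np.array([4.0, 5.0, 6.0])      # covariant components B_j

A_contra_new = S @ A_contra       # (A')^i = d(x')^i/dx^j  A^j
B_co_new = S_inv.T @ B_co         # B'_i   = dx^j/d(x')^i  B_j

# The contraction A^i B_i is a scalar, so it must be frame-independent.
print(np.allclose(A_contra_new @ B_co_new, A_contra @ B_co))  # True
```

With an orthogonal $S$ (a rotation) the two rules coincide, which is why the distinction is invisible in Cartesian tensor analysis.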

Rank 2 Tensor: an extension of vector

  • Coordinate transformation rules for contravariant, mixed, and covariant tensors of rank 2:

    $$ \begin{align*} (A')^{ij} &= \sum_{kl} \frac{\partial (x')^{i}}{\partial x^{k}} \frac{\partial (x')^{j}}{\partial x^{l}} A^{kl}, \quad \text{contravariant tensor}\\ (B')^{i}_{\hspace{0.1cm}j} &= \sum_{kl} \frac{\partial (x')^{i}}{\partial x^{k}} \frac{\partial x^{l}}{\partial (x')^{j}} B^{k}_{\hspace{0.1cm}l}, \quad \text{mixed tensor}\\ (C')_{ij} &= \sum_{kl} \frac{\partial x^{k}}{\partial (x')^{i}} \frac{\partial x^{l}}{\partial (x')^{j}} C_{kl}, \quad \text{covariant tensor} \end{align*} $$

  • A rank-2 tensor can be represented as a matrix.

  • The rank corresponds to the number of partial derivatives (or direction cosines) in the transformation rule: 0 for a scalar, 1 for a vector, 2 for a second-rank tensor, and so on.

  • Each index ranges over the number of dimensions of the space

  • $A^{kl}$ is contravariant with respect to both indices

  • $(B)^{k}_{\hspace{0.1cm}l}$ is contravariant with respect to the upper index $k$ but covariant with respect to the lower index $l$

  • Notice the space before the lower index (it can also appear before an upper index) of $(B)^{k}_{\hspace{0.1cm}l}$. It helps us distinguish the row and column of a mixed tensor, which is important during raising, lowering, and contraction of indices, as you will see in an upcoming section.

  • $C_{kl}$ is covariant in both of the indices

  • But if we are using Cartesian coordinates, all three forms of a second-rank tensor are the same

  • Physics should be independent of the choice of coordinate system (also called reference frame). This is what makes physical laws universal.

  • A second-rank tensor $A$ with components $A^{kl}$ can be represented as a two-dimensional array; for instance, in 3-dimensional space:

    $$ A = \begin{pmatrix} A^{11} & A^{12} & A^{13} \\ A^{21} & A^{22} & A^{23} \\ A^{31} & A^{32} & A^{33} \end{pmatrix} $$

  • This does not mean that any square array of numbers or functions forms a tensor. It has to follow the transformation rules mentioned above.

  • The above transformation rules can be viewed as matrix equations.

  • For $A$, it takes the form:

    $$ \begin{align*} (A')^{ij} = \sum_{kl} S_{ik} A^{kl} (S^{T})_{lj} \end{align*} $$

    where $S_{ik} := \frac{\partial (x')^{i}}{\partial x^{k}}$ and $(S^{T})_{lj} := \frac{\partial (x')^{j}}{\partial x^{l}}$. In matrix form,

    $$ \begin{align*} A' = SAS^{T} \end{align*} $$

    where $S^{T}$ is the transpose of $S$. Since rotations are orthogonal ($S^{T} = S^{-1}$), this is a similarity transformation.
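As a numerical sanity check, the component-by-component rule and the matrix form $A' = SAS^{T}$ agree. This is an illustrative numpy sketch (numpy, the random Jacobian, and the variable names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))   # hypothetical Jacobian S_ik = d(x')^i/dx^k
A = rng.standard_normal((3, 3))   # contravariant rank-2 components A^{kl}

# Component-by-component rule: (A')^{ij} = sum_{kl} S_ik S_jl A^{kl}
A_new_components = np.einsum('ik,jl,kl->ij', S, S, A)

# Matrix form: A' = S A S^T
A_new_matrix = S @ A @ S.T

print(np.allclose(A_new_components, A_new_matrix))  # True
```

`np.einsum` spells out the double sum over $k$ and $l$ literally, while the matrix form packages it as two matrix multiplications.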


  • Tensors are systems of components organized by one or more indices that transform according to specific rules under a set of transformations.
  • The number of indices is called the rank of the tensor


  • In a tensor, the order of the indices is important, i.e. in general, $A^{mn}$ is independent of $A^{nm}$
  • A special case is the symmetric tensor: if $A^{mn} = A^{nm}$ for all $m, n$, then $A$ is called symmetric.
  • Another type is the anti-symmetric tensor: if $A^{mn} = - A^{nm}$, then $A$ is called anti-symmetric.
  • Every second-rank tensor $B$ can be decomposed into symmetric and anti-symmetric parts by the identity:

    $$ \begin{align*} B^{mn} &= \frac{1}{2} (B^{mn} + B^{nm}) + \frac{1}{2} (B^{mn} - B^{nm}) \\ &= B_{\text{symmetric}} + B_{\text{anti-symmetric}} \end{align*} $$

  • The double contraction of the symmetric and anti-symmetric parts is zero (summation over both $m$ and $n$ implied). i.e.

    $$ \begin{align*} (B_{\text{symmetric}})^{mn} (B_{\text{anti-symmetric}})^{mn} &= \frac{1}{2} (B^{mn} + B^{nm}) \cdot \frac{1}{2} (B^{mn} - B^{nm}) \\ &= \frac{1}{4} \left((B^{mn})^{2} - (B^{nm})^{2}\right) \quad \text{using } a^{2} - b^{2} = (a + b)(a - b)\\ &= 0, \quad \text{since } \sum_{mn} (B^{mn})^{2} = \sum_{mn} (B^{nm})^{2} \quad \text{Q.E.D.} \end{align*} $$
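A quick numerical illustration of the decomposition and the vanishing double contraction (a numpy sketch; numpy and the variable names are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))  # arbitrary rank-2 tensor components B^{mn}

B_sym = 0.5 * (B + B.T)    # (1/2)(B^{mn} + B^{nm})
B_anti = 0.5 * (B - B.T)   # (1/2)(B^{mn} - B^{nm})

# The two parts reassemble B, and their double contraction vanishes.
print(np.allclose(B_sym + B_anti, B))           # True
print(np.isclose(np.sum(B_sym * B_anti), 0.0))  # True
```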

Isotropic tensor

  • The Kronecker delta $\delta_{kl}$ is really a mixed tensor of rank 2, i.e. $\delta^{k}_{\hspace{0.1cm}l}$
  • If this is a tensor, it should transform according to our transformation rule. And it does! i.e.

    $$ \begin{align*} (\delta')^{i}_{\hspace{0.1cm}j} &= \frac{\partial (x')^{i}}{\partial x^{k}} \frac{\partial x^{l}}{\partial (x')^{j}} \delta^{k}_{\hspace{0.1cm}l} \\ &= \frac{\partial (x')^{i}}{\partial x^{k}} \frac{\partial x^{k}}{\partial (x')^{j}} \quad \text{using the definition of } \delta^{k}_{\hspace{0.1cm}l}\\ \therefore (\delta')^{i}_{\hspace{0.1cm}j} &= \frac{\partial (x')^{i}}{\partial (x')^{j}} \quad \text{Q.E.D.} \end{align*} $$

    Since $(x')^{i}$ and $(x')^{j}$ are independent coordinates, $\frac{\partial (x')^{i}}{\partial (x')^{j}} = \delta^{i}_{\hspace{0.1cm}j}$, so we recover the definition of the Kronecker delta in the transformed system.
  • The above result is independent of the number of dimensions of our space.
  • The Kronecker delta has the same components in every rotated coordinate system, so it is called isotropic.
  • There also exist a third-rank isotropic tensor and three fourth-rank isotropic tensors.
  • But no isotropic first-rank tensor (vector) exists.
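The isotropy of the Kronecker delta can also be seen numerically: under a rotation, the mixed-tensor rule amounts to $S\,\delta\,S^{-1}$, which returns the identity unchanged. A small numpy sketch (numpy and the names are my own assumptions):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z-axis
S = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

delta = np.eye(3)  # Kronecker delta as a mixed rank-2 tensor

# Mixed-tensor rule: (delta')^i_j = S_ik (S^-1)_lj delta^k_l  ->  S delta S^-1
delta_new = S @ delta @ np.linalg.inv(S)

print(np.allclose(delta_new, delta))  # True: same components after rotation
```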

Spinor: an extension of tensor

  • In elementary particle physics, particles are distinguished by their spin, an intrinsic angular momentum. Physicists use spinors to describe it.
  • A spinor is an extension of a tensor.
  • This topic is for the next note. Yet to be finished :)


Addition and Subtraction

  • Addition is defined as $A + B = C$
  • For example, for contravariant tensors of rank 2: $A^{ij} + B^{ij} = C^{ij}$
  • Subtraction is defined as $D - E = F$
  • All tensors in these operations must have the same rank (in both contravariance and covariance) and be defined in the same space


Contraction

  • In vector analysis, we form the scalar product by summing the products of corresponding components:

    $$ \begin{align*} \vec{A} \cdot \vec{B} = \sum_{i} A_{i} B_{i} \end{align*} $$

  • The generalization of the above expression in tensor analysis is called contraction.

  • IDEA: Two indices (one covariant and the other contravariant) are set equal to each other, and then we sum over this repeated index.

  • Let us contract the second-rank mixed tensor $A^{i}_{\hspace{0.1cm}j}$ by setting $j$ to $i$, then summing over $i$. Let's see this via its transformation rule:

    $$ \begin{align*} (A')^{i}_{\hspace{0.1cm}j} &= \sum_{kl} \frac{\partial (x')^{i}}{\partial x^{k}} \frac{\partial x^{l}}{\partial (x')^{j}} A^{k}_{\hspace{0.1cm}l}\\ \text{Setting } j = i \text{ and summing over } i: \quad (A')^{i}_{\hspace{0.1cm}i} &= \sum_{ikl} \frac{\partial (x')^{i}}{\partial x^{k}} \frac{\partial x^{l}}{\partial (x')^{i}} A^{k}_{\hspace{0.1cm}l}\\ &= \sum_{kl} \frac{\partial x^{l}}{\partial x^{k}} A^{k}_{\hspace{0.1cm}l} \quad \text{by the chain rule}\\ &= \sum_{kl} \delta^{l}_{\hspace{0.1cm}k} A^{k}_{\hspace{0.1cm}l} \quad \text{since } x^{l} \text{ and } x^{k} \text{ are independent coordinates}\\ \therefore (A')^{i}_{\hspace{0.1cm}i} &= A^{k}_{\hspace{0.1cm}k} \end{align*} $$

    Note that the repeated index ($i$ or $k$) is summed over.

  • The contracted $\bold{A}$ is invariant under the transformation, so it is a scalar. In matrix analysis, this scalar is the trace of the matrix whose elements are the $A^{i}_{\hspace{0.1cm}j}$, i.e. $\text{Tr}(\bold{A}) = A^{k}_{\hspace{0.1cm}k}$.

  • Note that when I write $\bold{A}$, it means a tensor of arbitrary rank.

  • Contraction reduces the rank of a tensor by 2.
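Numerically, the contraction of a mixed rank-2 tensor is the trace, and it survives an arbitrary invertible coordinate change unchanged. A small numpy sketch (the random Jacobian and names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))   # hypothetical (invertible) Jacobian
A = rng.standard_normal((3, 3))   # mixed rank-2 tensor components A^k_l

A_new = S @ A @ np.linalg.inv(S)  # mixed-tensor transformation rule

# The contraction (sum over the repeated index) is the trace; it is invariant.
print(np.isclose(np.trace(A_new), np.trace(A)))  # True
```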

Direct Product

  • IDEA: The components of two tensors (of any ranks and any covariant/contravariant character) can be multiplied, component by component, to make a quantity with all the indices of both factors.
  • The new quantity is a direct product of the two tensors.
  • It’s a tensor whose rank is the sum of the ranks of the factors. For example:

    $$ \begin{align*} C^{ij}_{\hspace{0.2cm}klm} &= A^{i}_{\hspace{0.1cm}k} B^{j}_{\hspace{0.1cm}lm}\\ F^{ij}_{\hspace{0.2cm}kl} &= A^{j} B^{i}_{\hspace{0.1cm}lk} \end{align*} $$

  • The index order in the direct product can be defined as desired.
  • The contravariant/covariant character of each factor must be maintained in the direct product.
  • It gives us a way to construct higher-rank tensors.
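The direct product is an outer product of the component arrays, and `np.einsum` makes the index bookkeeping explicit. A sketch (numpy and the names are my own assumptions), building the rank-5 example $C^{ij}_{\hspace{0.2cm}klm} = A^{i}_{\hspace{0.1cm}k} B^{j}_{\hspace{0.1cm}lm}$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))      # rank-2 components A^i_k
B = rng.standard_normal((3, 3, 3))   # rank-3 components B^j_{lm}

# Direct product C^{ij}_{klm} = A^i_k B^j_{lm}: a rank-5 tensor
C = np.einsum('ik,jlm->ijklm', A, B)

print(C.ndim)  # 5: the ranks add (2 + 3)
print(np.isclose(C[0, 1, 2, 0, 1], A[0, 2] * B[1, 0, 1]))  # True
```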

Inverse Transformation

  • Given a contravariant vector $A^{i}$ with transformation rule:

    $$ \begin{align*} (A')^{j} = \frac{\partial (x')^{j}}{\partial x^{i}} A^{i} \end{align*} $$

    Then its inverse transformation is obtained simply by interchanging the roles of the primed and unprimed quantities, i.e.

    $$ \begin{align*} A^{i} = \frac{\partial x^{i}}{\partial (x')^{j}} (A')^{j} \end{align*} $$

  • We can verify it by applying $\frac{\partial (x')^{k}}{\partial x^{i}}$ to $A^{i}$ and summing over $i$, i.e.

    $$ \begin{align*} \frac{\partial (x')^{k}}{\partial x^{i}} A^{i} &= \frac{\partial (x')^{k}}{\partial x^{i}} \frac{\partial x^{i}}{\partial (x')^{j}} (A')^{j}\\ &= \delta^{k}_{\hspace{0.1cm}j} (A')^{j}\\ \therefore \frac{\partial (x')^{k}}{\partial x^{i}} A^{i} &= (A')^{k} \quad \text{Q.E.D.} \end{align*} $$

  • Note that, in general, a single partial derivative is not the reciprocal of its counterpart:

    $$ \begin{align*} \frac{\partial x^{i}}{\partial (x')^{j}} \neq \left[ \frac{\partial (x')^{j}}{\partial x^{i}} \right]^{-1} \end{align*} $$

  • In a Cartesian system,

    $$ \begin{align*} \frac{\partial x^{i}}{\partial (x')^{j}} = \frac{\partial (x')^{j}}{\partial x^{i}} \end{align*} $$

    because both equal the direction cosine connecting the $x^{i}$ and $(x')^{j}$ axes. This equality does not extend to non-Cartesian systems.
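The two cautions above can be seen numerically (a numpy sketch; the particular matrices are my own examples): the forward and inverse Jacobians are matrix inverses of each other, yet a single entry is not the reciprocal of the corresponding entry, and for a rotation the inverse Jacobian is simply the transpose.

```python
import numpy as np

# Hypothetical non-orthogonal linear map x' = S x
S = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # forward Jacobian as a matrix
S_inv = np.linalg.inv(S)            # inverse Jacobian dx^i/d(x')^j

# As matrices they are inverses of each other...
print(np.allclose(S_inv @ S, np.eye(2)))       # True
# ...but one entry is NOT the reciprocal of the corresponding entry:
print(np.isclose(S_inv[0, 0], 1.0 / S[0, 0]))  # False

# For a rotation (Cartesian frames), the inverse is just the transpose.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(np.linalg.inv(R), R.T))      # True
```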

Quotient Rule

  • We now know that if $A_{ij}$ and $B_{kl}$ are tensors (meaning they follow a specific coordinate transformation rule), their direct product is also a tensor.
  • Now, how do we invert such direct products? We need the quotient rule. Consider, for example, the equations below:

    $$ \begin{align*} K_{i} A^{i} &= B\\ K^{j}_{\hspace{0.1cm}i} A_{j} &= B_{i}\\ K^{j}_{\hspace{0.1cm}i} A_{jk} &= B_{ik}\\ K_{ijkl} A^{ij} &= B_{kl}\\ K^{ij} A^{k} &= B^{ijk} \end{align*} $$

  • Suppose $\bold{A}$ and $\bold{B}$ are known tensors, but $\bold{K}$ is an unknown quantity. We need to establish the transformation properties of $\bold{K}$. Let's see how far we can go.
  • The quotient rule asserts: if the equation of interest holds in all transformed coordinate systems, then $\bold{K}$ is a tensor of the indicated rank and contravariant/covariant character.
  • Let's prove the quotient rule for $K^{j}_{\hspace{0.1cm}i} A_{j} = B_{i}$. Assuming the equation holds in both frames, we have:

    $$ \begin{equation} K^{j}_{\hspace{0.1cm}i} A_{j} = B_{i} \quad\to\quad (K')^{j}_{\hspace{0.1cm}i} (A')_{j} = (B')_{i}. \end{equation} $$ We now evaluate $(B')_{i}$ as:

    $$ \begin{align*} (B')_{i} &= \frac{\partial x^{m}}{\partial (x')^{i}} B_{m}\\ &= \frac{\partial x^{m}}{\partial (x')^{i}} K^{j}_{\hspace{0.1cm}m} A_{j} \\ &= \frac{\partial x^{m}}{\partial (x')^{i}} K^{j}_{\hspace{0.1cm}m} \frac{\partial (x')^{n}}{\partial x^{j}} (A')_{n} \end{align*} $$

    We have two dummy indices $n$ and $j$, which we interchange. i.e. $$ \begin{equation} (B')_{i} = \frac{\partial x^{m}}{\partial (x')^{i}} \frac{\partial (x')^{j}}{\partial x^{n}} K^{n}_{\hspace{0.1cm}m} (A')_{j} \end{equation} $$ Now, we subtract equation (4) from equation (3). i.e.

    $$ \begin{align*} \left[ (K')^{j}_{\hspace{0.1cm}i} - \frac{\partial x^{m}}{\partial (x')^{i}} \frac{\partial (x')^{j}}{\partial x^{n}} K^{n}_{\hspace{0.1cm}m} \right] (A')_{j} = 0. \end{align*} $$

    Since $(A')_{j}$ is arbitrary, the bracketed quantity must vanish, which gives the transformation rule for $\bold{K}$. i.e.

    $$ \begin{align*} (K')^{j}_{\hspace{0.1cm}i} = \frac{\partial x^{m}}{\partial (x')^{i}} \frac{\partial (x')^{j}}{\partial x^{n}} K^{n}_{\hspace{0.1cm}m} \end{align*} $$

  • Similarly, we can find the transformation rules for other equations above.
  • Note that the quotient rule does not apply if $\bold{B}$ is zero, because the transformation properties of zero are indeterminate.
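The derived rule can be spot-checked numerically for $K^{j}_{\hspace{0.1cm}i} A_{j} = B_{i}$ (a numpy sketch; the random Jacobian and variable names are my assumptions): transforming $A$ and $B$ covariantly and $K$ by the derived rule keeps the defining equation true in the primed frame.

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.standard_normal((3, 3))   # hypothetical Jacobian S_jn = d(x')^j/dx^n
S_inv = np.linalg.inv(S)          # (S^-1)_mi = dx^m/d(x')^i

K = rng.standard_normal((3, 3))   # K[j, i] holds the components K^j_i
A = rng.standard_normal(3)        # covariant vector components A_j
B = np.einsum('ji,j->i', K, A)    # B_i = K^j_i A_j

# Covariant transformation of A and B: V'_i = (dx^j/d(x')^i) V_j
A_new = S_inv.T @ A
B_new = S_inv.T @ B

# Transformation rule derived above:
# (K')^j_i = (dx^m/d(x')^i)(d(x')^j/dx^n) K^n_m
K_new = np.einsum('mi,jn,nm->ji', S_inv, S, K)

# In the primed frame the defining equation still holds.
print(np.allclose(np.einsum('ji,j->i', K_new, A_new), B_new))  # True
```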

Permalink at https://www.physicslog.com/maths-notes/tensor

Published on Jul 1, 2021

Last revised on Jul 8, 2021