Matrices, Arrays and Tensors

How do we go from tensor expressions in Tensorial to matrix or array operations in Mathematica?

David Park, djmp@earthlink.net

Initialization

In[4]:=

Needs["TensorCalculus4`Tensorial`"]

In[5]:=

DefineTensorShortcuts[{{e, f, u, v}, 1}, {{R, S, T, Λ, X}, 2}, {{R, T, ε, X}, 3}]

DeclareIndexFlavor[{red, Red}]

Introduction

There are two ways of calculating with tensor expressions and equations. One is to use the indexed tensors and carry out Einstein summations and array expansions. A second way is to convert the equations to array expressions and then evaluate with Mathematica's efficient array manipulation routines. We will refer to the first method as index mode and the second as dot mode. This notebook shows how Tensorial can be used to bridge the gap between these two forms of calculation.

Index mode is standard tensor algebra. It is most convenient for derivations and will be more than adequate for most calculations. An indexed mode expression always gives us context. It tells us how the various tensors interact and work with each other. Some users may prefer to think in terms of matrices and vectors and would like to work in dot mode, or use it for didactic purposes. If there are many contractions in a calculation it will often be more efficient to use dot mode calculations.

Working entirely with arrays can be confusing because they lose their context. There are many ways to do the calculations and also many opportunities for missteps. This is especially so when we are dealing with 3rd or 4th order arrays and when there are contractions within a single array. It is a matter of keeping the indices and levels and the order of multiplication straight. By starting with a contextual tensor expression, converting to a dot mode expression and then generating the arrays we can keep things straight and we always have the indexed expression to guide us. If we try to skip the tensor notation and work directly with arrays, we are always liable to confusion whether we are to use a matrix, its transpose or its inverse, and on which side we are to place it.

The routines for going from index mode expressions to dot mode expressions are in the Arrays section of the Tensorial Help. They are DotTensorFactors, ExpandDotArrays, DotOperate and ContractArray.

Some Mathematica Pointers

The Prime Rule for Products of 'Tensor' Arrays in Mathematica:
    
S.T dots the lowest level of S with the highest level of T,
or equivalently
    S.T dots the last level of S with the first level of T.

The Mathematica Transpose[T,{n1,n2,n3,...}] moves levels {1,2,3,...} to levels {n1,n2,n3,...}. We will always want to move the contracted level to the first or last level when doing Dot products and to the first two levels when doing single array contractions.

If R, S, T,... are Mathematica tensor arrays, then their direct product is given by Outer[Times,R,S,T,...]. This will produce a single Mathematica array. The levels are in the same order as the levels in the successive arrays.
When expanding tensors to be put into Outer it is best to keep the indices in strictly ascending sort order matching the slots, or at least within each tensor.

The basic Mathematica command for contraction of the top two levels in a single array T is Tr[T,Plus,2]. We will have to use Transpose on T to put the contraction slots in the first two levels. We will have to repeat the operation if we want to do multiple contractions.
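For readers who want to check these three facts concretely, here is a NumPy sketch of the same behavior (an analogy only, not Tensorial's API: NumPy's dot, transpose and einsum stand in for Mathematica's Dot, Transpose and Tr[..., Plus, 2], and note that np.transpose lists the source axis of each destination, the inverse of Mathematica's convention):

```python
import numpy as np

# np.dot, like Mathematica's Dot, contracts the LAST axis of S
# with the FIRST axis of T.
S = np.arange(24).reshape(2, 3, 4)
T = np.arange(20).reshape(4, 5)
P = np.dot(S, T)                 # shape (2, 3, 5): the length-4 axes contract

# Moving levels, analogous to Transpose[T, {n1, n2, ...}].  Mathematica
# names the DESTINATION of each level; np.transpose names the SOURCE
# of each destination axis (the inverse permutation).
A = np.arange(24).reshape(2, 3, 4)
B = np.transpose(A, (2, 0, 1))   # result axes = (old 2, old 0, old 1)

# Contracting the first two levels of a single array,
# the analogue of Tr[T, Plus, 2] on a rank-3 array.
C = np.arange(27).reshape(3, 3, 3)
tr = np.einsum('iik->k', C)      # sum over the repeated first two axes
```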

Some Tensorial Pointers

The prime rule for expansion of indexed expressions is that it is done according to the sort order of the raw free indices. It is not done according to the slot order, which would even be ambiguous if the expression involved several tensors. The lowest sort order index goes at the highest level.

This can be a source of error in Tensorial calculations: if we do not pay attention to the free indices we may inadvertently end up with the transpose of the array we wanted.

When setting tensor values you should always use a pattern tensor in which the index sort order matches the slot order - unless you deliberately want to transpose the data array.

1) Mathematica arrays contain no information as to which indices are up and which are down. We must interpret this 'on the side'.
2) Mathematica array transpositions must sometimes be interpreted as switching the slots along with their up/down configurations and sometimes as simply switching the indices between slots.
3) Which interpretation is used depends on what makes sense in context. There does not appear to be a fixed rule.

Dot Mode Calculations

1. Introduction

In dot mode calculations we replace tensor expressions with Dot products of arrays, matrices and vectors. Regular tensor expressions are indifferent to the order of the factors in the terms. That's why we can represent the terms with an unordered Times expression. Of course, the fact that Times is Orderless means that Mathematica is free to impose its own order, namely the canonical sort order of the factors. This can be disconcerting because if we think in terms of array operations the factors may not be in our preferred order.

A solution to this is to convert a tensor term to a Dot product using the routine DotTensorFactors. Dot is not Orderless, so Mathematica will preserve the order we choose. That way we can make factors line up in a manner that corresponds to normal matrix equations. Doesn't this imply a contradiction? In dot mode we impose a specific order, but normal tensor expressions do not require it. It is not a contradiction because, in fact, we can use any order we wish in the dot product. We simply have to perform the correct transpositions of the arrays before carrying out the dot products. The index mode expressions automatically do this for us. This is one of the great advantages of index mode.

In dot mode we have to correlate the array levels with the indices used in the index mode expressions and our chosen dot mode order of factors. This is not too difficult once you see the scheme, but on the other hand it is just difficult enough that one will seldom see anything but the simplest dot mode expressions in textbooks or papers.

There are many ways that a dot mode expression can be set up and evaluated. This might be another source of confusion. It is probably not worth trying to define a 'canonical form', and any such canonical form might not be the shortest path from a given starting expression. The important thing is to always associate the array levels with the proper indices. We always have the original equation with indexed tensors to guide us.

2. Change of Basis Example

Let's define a transformation from unflavored basis vectors e to red flavored basis vectors e by a Λ matrix. It is customary to write such matrices in the ud form and, of course, they will have mixed flavor indices. To write the red basis vectors in terms of the unflavored basis vectors we only have to line up the indices and flavors.

In[7]:=

ed[red @ j] == Λud[i, red @ j] ed[i]

Out[7]=

e_j == e_i Λ^i_j

Here is a particular transformation.

In[8]:=

mat = ({{1, 3, -1}, {0, 2, 1}, {2, 0, 1}}) ;

SetTensorValueRules[Λud[i, red @ j], mat]

SetTensorValueRules[Λud[red @ i, j], Inverse[mat]]

We can easily expand the equations in regular Tensorial index mode.

In[11]:=

ed[red @ j] == Λud[i, red @ j] ed[i]

%//ToArrayValues[]//TableForm

Out[11]=

e_j == e_i Λ^i_j

Out[12]//TableForm=

e_1 == e_1 + 2 e_3
e_2 == 3 e_1 + 2 e_2
e_3 == -e_1 + e_2 + e_3

The following evaluates the equation in dot mode.

In[13]:=

Print["Red basis vectors as linear combinations of the black basis vectors"]

ed[red @ j] == Λud[i, red @ j] ed[i]

Print["Converting to dot mode"]

MapAt[DotTensorFactors[{1, 2}], %%, 2]

Print["Substituting array values"]

%%//ExpandDotArray[Tensor[___]]

Print["Evaluating"]

MapAt[DotOperate[1, Dot], %%, 2]

Red basis vectors as linear combinations of the black basis vectors

Out[14]=

e_j == e_i Λ^i_j

Converting to dot mode

Out[16]=

e_j == e_i . Λ^i_j

Substituting array values

Out[18]=

( {{e_1}, {e_2}, {e_3}} ) == ( {{e_1}, {e_2}, {e_3}} ) . ( {{1, 3, -1}, {0, 2, 1}, {2, 0, 1}} )

Evaluating

Out[20]=

( {{e_1}, {e_2}, {e_3}} ) == ( {{e_1 + 2 e_3}, {3 e_1 + 2 e_2}, {-e_1 + e_2 + e_3}} )

Remember that Mathematica contracts the lowest level of e (there is only one level) with the highest level of Λ. Since i is the lowest sort order raw index, it does correspond to the highest level in the Λ array. Mathematica knows how to do this array multiplication even though e looks like a 'column vector'. We might say that this dot mode expression is in 'canonical form' since the dummy indices, i, are adjacent in the down/up orientation and i is the lowest sort order index in Λ.
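The same basis change can be sketched in NumPy (a hedged analogue, not Tensorial's API; the symbolic basis vectors are replaced by the rows of an identity matrix so the result can be read off directly):

```python
import numpy as np

# The Λ matrix from the example above.
Lam = np.array([[1, 3, -1],
                [0, 2, 1],
                [2, 0, 1]])

# Represent the black basis vectors e_i as the rows of the identity,
# so e[i] is the i-th basis vector.
e = np.eye(3, dtype=int)

# e'_j = Λ^i_j e_i : contract i with the FIRST level of Λ,
# i.e. e_red[j, k] = sum_i Lam[i, j] * e[i, k].
e_red = np.einsum('ij,ik->jk', Lam, e)

# e_red[0] is e'_1 = e_1 + 2 e_3, matching the Tensorial expansion.
```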

On the other hand, the transformation of components is given by the following index mode equation that involves the inverse of the original Λ matrix. Again, all we have to do is line up the positions and flavors of the indices.

In[21]:=

vu[red @ j] == Λud[red @ j, i] vu[i]

%//ToArrayValues[]//TableForm

Out[21]=

v^j == v^i Λ^j_i

Out[22]//TableForm=

v^1 == v^1/6 - v^2/4 + (5 v^3)/12
v^2 == v^1/6 + v^2/4 - v^3/12
v^3 == -v^1/3 + v^2/2 + v^3/6

The following evaluates the above equation in the dot mode. But now, even though we line up the dummy index, i, in the down/up configuration, i will not be at the lowest level, but at the highest level of the expanded array because i comes before j in sort order. Therefore when the dot operation is performed Λ must be transposed to put i at the lowest level.

In[23]:=

Print["Red vector components as linear combinations of the black components"]

vu[red @ j] == Λud[red @ j, i] vu[i]

Print["Converting to dot mode"]

MapAt[DotTensorFactors[{2, 1}], %%, 2]

Print["Substituting array values"]

%%//ExpandDotArray[Tensor[___]]

Print["But i is not at the lowest level of the Λ array. We must dot with the transpose."]

MapAt[DotOperate[1, Function[{A, B}, Transpose[A] . B]], %%, 2]

Red vector components as linear combinations of the black components

Out[24]=

v^j == v^i Λ^j_i

Converting to dot mode

Out[26]=

v^j == Λ^j_i . v^i

Substituting array values

Out[28]=

( {{v^1}, {v^2}, {v^3}} ) == ( {{1/6, 1/6, -1/3}, {-1/4, 1/4, 1/2}, {5/12, -1/12, 1/6}} ) . ( {{v^1}, {v^2}, {v^3}} )

But i is not at the lowest level of the Λ array. We must dot with the transpose.

Out[30]=

( {{v^1}, {v^2}, {v^3}} ) == ( {{v^1/6 - v^2/4 + (5 v^3)/12}, {v^1/6 + v^2/4 - v^3/12}, {-v^1/3 + v^2/2 + v^3/6}} )

This means that the way I am using Dot expressions is a little ambiguous because transpositions of levels are allowed in the actual operation. If we want a canonical form of the dot mode equation we can transpose the Λ matrix at the time we expand it.
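A small NumPy sketch of this pitfall (an analogue, not Tensorial's API; Lam_stored mimics Tensorial's storage convention of putting the lower sort order index i at the first level):

```python
import numpy as np

mat = np.array([[1, 3, -1], [0, 2, 1], [2, 0, 1]])
LamInv = np.linalg.inv(mat)      # Λ^j_i with rows indexed by j, columns by i

# Tensorial expands Λ^j_i with i (first in sort order) at the HIGHEST
# level, i.e. the stored array is the transpose of LamInv.
Lam_stored = LamInv.T

v = np.array([1.0, 2.0, 3.0])

# Wrong: this contracts the j level of the stored array.
wrong = Lam_stored @ v
# Right: transpose so i is at the last level, then dot -- v'^j = Λ^j_i v^i.
right = Lam_stored.T @ v
```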

In[31]:=

Print["Red vector components as linear combinations of the black components"]

vu[red @ j] == Λud[red @ j, i] vu[i]

Print["Converting to dot mode"]

MapAt[DotTensorFactors[{2, 1}], %%, 2]

Print["Substituting array values and transposing Λ to obtain a canonical dot product"]

%%//ExpandDotArray[vu[_]]//ExpandDotArray[Tensor[Λ, __], {2, 1}]

Print["Now we do a regular dot product without any additional transpositions of levels"]

MapAt[DotOperate[1, Dot], %%, 2]

Red vector components as linear combinations of the black components

Out[32]=

v^j == v^i Λ^j_i

Converting to dot mode

Out[34]=

v^j == Λ^j_i . v^i

Substituting array values and transposing Λ to obtain a canonical dot product

Out[36]=

( {{v^1}, {v^2}, {v^3}} ) == ( {{1/6, -1/4, 5/12}, {1/6, 1/4, -1/12}, {-1/3, 1/2, 1/6}} ) . ( {{v^1}, {v^2}, {v^3}} )

Now we do a regular dot product without any additional transpositions of levels

Out[38]=

( {{v^1}, {v^2}, {v^3}} ) == ( {{v^1/6 - v^2/4 + (5 v^3)/12}, {v^1/6 + v^2/4 - v^3/12}, {-v^1/3 + v^2/2 + v^3/6}} )

If we had been more felicitous in our choice of indexing we wouldn't have had to use any level transpositions at all.

In[39]:=

Print["Red vector components as linear combinations of the black components with better indexing"]

vu[red @ i] == Λud[red @ i, j] vu[j]

Print["Converting to dot mode. The dummy indices are adjacent and at the correct levels."]

MapAt[DotTensorFactors[{2, 1}], %%, 2]

Print["Straight substitution of array values"]

%%//ExpandDotArray[Tensor[___]]

Print["Evaluation without transposition of levels"]

MapAt[DotOperate[1, Dot], %%, 2]

Red vector components as linear combinations of the black components with better indexing

Out[40]=

v^i == v^j Λ^i_j

Converting to dot mode. The dummy indices are adjacent and at the correct levels.

Out[42]=

v^i == Λ^i_j . v^j

Straight substitution of array values

Out[44]=

( {{v^1}, {v^2}, {v^3}} ) == ( {{1/6, -1/4, 5/12}, {1/6, 1/4, -1/12}, {-1/3, 1/2, 1/6}} ) . ( {{v^1}, {v^2}, {v^3}} )

Evaluation without transposition of levels

Out[46]=

( {{v^1}, {v^2}, {v^3}} ) == ( {{v^1/6 - v^2/4 + (5 v^3)/12}, {v^1/6 + v^2/4 - v^3/12}, {-v^1/3 + v^2/2 + v^3/6}} )

It is clear that there are many routes to evaluating a tensor equation in the dot mode. We only have to make certain that the indices and levels correspond to the levels in the arrays. We always have the indexed form of the dot mode equation to guide us in the evaluation.

We might think that if we properly choose our indexing we will always be able to line up the arrays and never have to do any transpositions of levels. This, however, is not always the case as some of the next examples will show. We must keep our wits about us and remain level headed.

3. Cross Product in Cartesian 3D

The following is the cross product of two vectors, u and v, in 3D Cartesian space.

In[47]:=

u×v

%/.{u→uu[i], v→vu[j]}

ToArrayValues[]/@%//MatrixForm

Out[47]=

u×v

Out[48]=

u^i × v^j

Out[49]//MatrixForm=

( {{-u^3 v^2 + u^2 v^3}, {u^3 v^1 - u^1 v^3}, {-u^2 v^1 + u^1 v^2}} )

We can calculate this with tensor calculus using the Levi-Civita completely antisymmetric tensor ε.

In[50]:=

SetTensorValueRules[εddd[i, j, k], PermutationPseudotensor[3]]

In[51]:=

uu[i] vu[j] εddd[i, j, k]

%//ToArrayValues[]//MatrixForm

Out[51]=

u^i v^j ε_ijk

Out[52]//MatrixForm=

( {{-u^3 v^2 + u^2 v^3}, {u^3 v^1 - u^1 v^3}, {-u^2 v^1 + u^1 v^2}} )

As usual, there are a number of ways to calculate this in dot mode. The following is one method.

In[53]:=

Print["Components of cross product in Cartesian 3D"]

uu[i] vu[j] εddd[i, j, k]

Print["Converting to dot mode. We must group u and v"]

%%//DotTensorFactors[{{1, 2}, 3}]

Print["Expanding the arrays we must put i at the second level in the first array."]

step1 = %%//ExpandDotArray[uu[_] vu[_], {2, 1}]//ExpandDotArray[Tensor[ε, __]]

Print["Evaluating, we must contract the intermediate array on the 1st and 2nd levels"]

%%//DotOperate[1, Function[{A, B}, ContractArray[A . B, {1, 2}]]]

Components of cross product in Cartesian 3D

Out[54]=

u^i v^j ε_ijk

Converting to dot mode. We must group u and v

Out[56]=

(u^i v^j) . ε_ijk

Expanding the arrays we must put i at the second level in the first array.

Out[58]=

Evaluating, we must contract the intermediate array on the 1st and 2nd levels

Out[60]//MatrixForm=

( {{-u^3 v^2 + u^2 v^3}, {u^3 v^1 - u^1 v^3}, {-u^2 v^1 + u^1 v^2}} )

If we wish to see the extra contraction as a separate step we can pick up from the penultimate line above.

In[61]:=

Print["Picking up from the 3rd line of the previous calculation"]

step1

Print["Multiplying the two arrays without the extra contraction gives us an array ", Xudd[j, j, k]]

%%//DotOperate[1, Dot]

Print["Contracting levels 1 and 2"]

ContractArray[%%, {1, 2}]//MatrixForm

Picking up from the 3rd line of the previous calculation

Out[62]=

Multiplying the two arrays without the extra contraction gives us an array X^j_jk

Out[64]//MatrixForm=

Contracting levels 1 and 2

Out[66]//MatrixForm=

( {{-u^3 v^2 + u^2 v^3}, {u^3 v^1 - u^1 v^3}, {-u^2 v^1 + u^1 v^2}} )

(When MatrixForm is the head of an expression it is transparent to operations.) The following shows an alternative dot mode expansion of the same expression.

In[67]:=

Print["Components of cross product in Cartesian 3D"]

uu[i] vu[j] εddd[i, j, k]

Print["Converting to dot mode using 3 arguments"]

%%//DotTensorFactors[{1, 2, 3}]

Print["Expanding the arrays we put j at the first level in the ε array."]

%%//ExpandDotArray[uu[_] | vu[_]]//ExpandDotArray[Tensor[ε, __], {2, 1, 3}]

Print["We then evaluate the right most dot product giving an expression of the form ", uu[i] . Xdd[i, k]]

%%//DotOperate[2, Dot]

Print["Everything is lined up to evaluate the remaining dot"]

%%//DotOperate[1, Dot]

Components of cross product in Cartesian 3D

Out[68]=

u^i v^j ε_ijk

Converting to dot mode using 3 arguments

Out[70]=

u^i . v^j . ε_ijk

Expanding the arrays we put j at the first level in the ε array.

Out[72]=

We then evaluate the right most dot product giving an expression of the form u^i . X_ik

Out[74]=

( {{u^1}, {u^2}, {u^3}} ) . ( {{0, -v^3, v^2}, {v^3, 0, -v^1}, {-v^2, v^1, 0}} )

Everything is lined up to evaluate the remaining dot

Out[76]//MatrixForm=

( {{-u^3 v^2 + u^2 v^3}, {u^3 v^1 - u^1 v^3}, {-u^2 v^1 + u^1 v^2}} )
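In NumPy the entire chain above collapses to one einsum call with a hand-built Levi-Civita array (a sketch, not Tensorial's API; einsum performs both contractions at once):

```python
import numpy as np

# Levi-Civita pseudotensor in 3D, the analogue of PermutationPseudotensor[3].
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1    # even permutations
    eps[i, k, j] = -1   # odd permutations

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# (u x v)_k = u^i v^j eps_ijk : two contractions in a single call.
cross = np.einsum('i,j,ijk->k', u, v, eps)
```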

4. Order of the Final Free Indices

We have to keep an eye on the desired order of the final free indices. Consider the following expression.

In[77]:=

Rddu[a, b, c] == Sud[d, a] Tudd[c, b, d]

step1 = MatrixForm[ToArrayValues[][#]] &/@%

Out[77]=

R_ab^c == S^d_a T^c_bd

Out[78]=

When we do this in the dot mode we must be certain that the levels of the final array correspond to the levels of the left hand side.

In[79]:=

Print["Index mode equation"]

Rddu[a, b, c] == Sud[d, a] Tudd[c, b, d]

Print["Converting to dot mode"]

MapAt[DotTensorFactors[{2, 1}], %%, 2]

Print["Since a comes before d we must transpose the S array. The T index levels are b c d."]

%%//ExpandDotArray[Tensor[R | T, __]]//ExpandDotArray[Tensor[S, __], {2, 1}]

Print["Multiplying the arrays gives us an array of the form ", Xudd[b, c, a]]

MapAt[DotOperate[1, Dot], %%, 2]

Print["We must transpose the levels to obtain a b c"]

step2 = MapAt[Transpose[#, {2, 3, 1}] &, %%, {2, 1}]

step2 === step1

Index mode equation

Out[80]=

R_ab^c == S^d_a T^c_bd

Converting to dot mode

Out[82]=

R_ab^c == T^c_bd . S^d_a

Since a comes before d we must transpose the S array. The T index levels are b c d.

Out[84]=

Multiplying the arrays gives us an array of the form X^b_ca

Out[86]=

We must transpose the levels to obtain a b c

Out[88]=

Out[89]=

True

The final transposition tests our knowledge of the Transpose command. We want {b,c,a} to go into positions {2,3,1} to give {a,b,c}.
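A note of caution if you reproduce this step in NumPy: np.transpose takes, for each destination axis, its source axis, which is the inverse of Mathematica's convention of sending level k to position n_k. A sketch with distinguishable axis lengths:

```python
import numpy as np

# Axes currently ordered (b, c, a), with lengths chosen so each axis
# is recognizable: b -> 2, c -> 3, a -> 4.
X = np.zeros((2, 3, 4))

# Mathematica: Transpose[X, {2, 3, 1}] sends level k to position {2,3,1}[[k]],
# yielding order (a, b, c).  NumPy wants the source axis of each destination,
# i.e. the inverse permutation:
Y = np.transpose(X, (2, 0, 1))

# Y now has shape (4, 2, 3), corresponding to (a, b, c).
```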

More Practice with Array Calculations

The following provides some more practice in the array manipulation of tensors.

1. Contracting two vectors

The following is the Tensorial contraction of two vectors.

In[90]:=

e1 = uu[i] vd[i]

%//EinsteinSum[]

Out[90]=

u^i v_i

Out[91]=

u^1 v_1 + u^2 v_2 + u^3 v_3

This can be done as a dot mode calculation as follows...

In[92]:=

uu[i] vd[i]

%//DotTensorFactors[{1, 2}]

%//ExpandDotArray[Tensor[__]]

%//DotOperate[1, Dot]

Out[92]=

u^i v_i

Out[93]=

u^i . v_i

Out[94]=

( {{u^1}, {u^2}, {u^3}} ) . ( {{v_1}, {v_2}, {v_3}} )

Out[95]//MatrixForm=

u^1 v_1 + u^2 v_2 + u^3 v_3

We could also generate the outer product of u and v and then contract on the two levels. However, this is inefficient because it generates extra terms that are then thrown away.

In[96]:=

Outer[Times, uu[i]//ToArrayValues[], vd[j]//ToArrayValues[]]//MatrixForm

ContractArray[%, {1, 2}]

Out[96]//MatrixForm=

( {{u^1 v_1, u^1 v_2, u^1 v_3}, {u^2 v_1, u^2 v_2, u^2 v_3}, {u^3 v_1, u^3 v_2, u^3 v_3}} )

Out[97]=

u^1 v_1 + u^2 v_2 + u^3 v_3

In[98]:=

Outer[Times, uu[i]//ToArrayValues[], vd[j]//ToArrayValues[]]//MatrixForm

Tr[%]

Out[98]//MatrixForm=

( {{u^1 v_1, u^1 v_2, u^1 v_3}, {u^2 v_1, u^2 v_2, u^2 v_3}, {u^3 v_1, u^3 v_2, u^3 v_3}} )

Out[99]=

u^1 v_1 + u^2 v_2 + u^3 v_3

2. Contracting a vector on a 2nd order tensor

We can contract a vector on either slot of a 2nd order tensor. In tensor indicial expressions the factors can appear in any order. Here we contract u on the second index of T. Contracting a second order tensor on a vector gives us a vector.

In[100]:=

Tdd[i, j] uu[j]

%//ToArrayValues[]//MatrixForm

Out[100]=

T_ij u^j

Out[101]//MatrixForm=

The following uses a dot mode calculation.

In[102]:=

Tdd[i, j] uu[j]

%//DotTensorFactors[{1, 2}]

%//ExpandDotArray[Tensor[__]]

%//DotOperate[1, Dot]

Out[102]=

T_ij u^j

Out[103]=

T_ij . u^j

Out[104]=

( {{T_11, T_12, T_13}, {T_21, T_22, T_23}, {T_31, T_32, T_33}} ) . ( {{u^1}, {u^2}, {u^3}} )

Out[105]//MatrixForm=

We can also form the tensor product of T and u with Outer and then contract using ContractArray on the 2nd and 3rd slots or levels. Notice again that many extra unused terms are generated.

In[106]:=

Outer[Times, Tdd[i, j]//ToArrayValues[], uu[k]//ToArrayValues[]]//MatrixForm

ContractArray[%, {2, 3}]//MatrixForm

Out[106]//MatrixForm=

Out[107]//MatrixForm=

If we contract the vector on the first index we obtain...

In[108]:=

Tdd[j, i] uu[j]

%//ToArrayValues[]//MatrixForm

Out[108]=

T_ji u^j

Out[109]//MatrixForm=

which is definitely different from contracting on the second slot. In dot mode this becomes...

In[110]:=

Tdd[j, i] uu[j]

%//DotTensorFactors[{1, 2}]

%//ExpandDotArray[Tensor[__]]

%//DotOperate[1, Dot]

Out[110]=

T_ji u^j

Out[111]=

T_ji . u^j

Out[112]=

( {{T_11, T_21, T_31}, {T_12, T_22, T_32}, {T_13, T_23, T_33}} ) . ( {{u^1}, {u^2}, {u^3}} )

Out[113]//MatrixForm=

Notice that we didn't have to do any transpositions of levels. Tensorial automatically made j the lower level in the T array because it comes after i in sort order.
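In NumPy terms the two slot choices are just a matter of which side the vector sits on (a sketch with a hypothetical numeric T, deliberately non-symmetric so the two results differ):

```python
import numpy as np

# A hypothetical non-symmetric second order tensor and a vector.
T = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 10]])
u = np.array([1, 0, 2])

second_slot = T @ u   # T_ij u^j : contract on the second level of T
first_slot = u @ T    # T_ji u^j : contract on the first level of T
```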

In the following we take the outer product T⊗u and keep a strict slot order by using an ascending set of indices. Then we want to contract the resulting array on the first and third slots.

In[114]:=

Outer[Times, Tdd[i, j]//ToArrayValues[], uu[k]//ToArrayValues[]]//MatrixForm

ContractArray[%, {1, 3}]//MatrixForm

Out[114]//MatrixForm=

Out[115]//MatrixForm=

3. Contractions of two 2nd order tensors

We have four choices for how two second order tensors can be contracted. In addition, there are two ways to perform a full contraction to obtain a scalar.

Single contractions

The following are the four possible single contractions of S⊗T performed in index mode.

In[116]:=

Rdu[b, c] == Sdd[a, b] Tuu[a, c]

MatrixForm[ToArrayValues[][#]] &/@%

contract13 = Last[%] ;

Out[116]=

R_b^c == S_ab T^ac

Out[117]=

In[119]:=

Rdu[b, c] == Sdd[a, b] Tuu[c, a]

MatrixForm[ToArrayValues[][#]] &/@%

contract14 = Last[%] ;

Out[119]=

R_b^c == S_ab T^ca

Out[120]=

In[122]:=

Rdu[a, c] == Sdd[a, b] Tuu[b, c]

MatrixForm[ToArrayValues[][#]] &/@%

contract23 = Last[%] ;

Out[122]=

R_a^c == S_ab T^bc

Out[123]=

In[125]:=

Rdu[a, c] == Sdd[a, b] Tuu[c, b]

MatrixForm[ToArrayValues[][#]] &/@%

contract24 = Last[%] ;

Out[125]=

R_a^c == S_ab T^cb

Out[126]=

Calculating S⊗T...

In[128]:=

(ST = Outer[Times, Sdd[a, b]//ToArrayValues[], Tuu[c, d]//ToArrayValues[]])//MatrixForm

Out[128]//MatrixForm=

and performing the contractions on the various slots.

In[129]:=

ContractArray[ST, {1, 3}]//MatrixForm

MatrixForm[%] == contract13

Out[129]//MatrixForm=

Out[130]=

True

In[131]:=

ContractArray[ST, {1, 4}]//MatrixForm

MatrixForm[%] == contract14

Out[131]//MatrixForm=

Out[132]=

True

In[133]:=

ContractArray[ST, {2, 3}]//MatrixForm

MatrixForm[%] == contract23

Out[133]//MatrixForm=

Out[134]=

True

In[135]:=

ContractArray[ST, {2, 4}]//MatrixForm

MatrixForm[%] == contract24

Out[135]//MatrixForm=

Out[136]=

True

Calculating the same contractions in dot mode.

In[137]:=

Rdu[b, c] == Sdd[a, b] Tuu[a, c]

MapAt[DotTensorFactors[{1, 2}], %, 2]

%//ExpandDotArray[Tensor[T | R, __]]//ExpandDotArray[Tensor[S, __], {2, 1}]

MapAt[DotOperate[1, Dot], %, 2]

Last[%] == contract13

Out[137]=

R_b^c == S_ab T^ac

Out[138]=

R_b^c == S_ab . T^ac

Out[139]=

Out[140]=

Out[141]=

True

In[142]:=

Rdu[b, c] == Sdd[a, b] Tuu[c, a]

MapAt[DotTensorFactors[{1, 2}], %, 2]

%//ExpandDotArray[Tensor[R | T, __]]//ExpandDotArray[Tensor[S, __], {2, 1}]

MapAt[DotOperate[1, Dot], %, 2]

Last[%] == contract14

Out[142]=

R_b^c == S_ab T^ca

Out[143]=

R_b^c == S_ab . T^ca

Out[144]=

Out[145]=

Out[146]=

True

In[147]:=

Rdu[a, c] == Sdd[a, b] Tuu[b, c]

MapAt[DotTensorFactors[{1, 2}], %, 2]

%//ExpandDotArray[Tensor[__]]

MapAt[DotOperate[1, Dot], %, 2]

Last[%] == contract23

Out[147]=

R_a^c == S_ab T^bc

Out[148]=

R_a^c == S_ab . T^bc

Out[149]=

Out[150]=

Out[151]=

True

In[152]:=

Rdu[a, c] == Sdd[a, b] Tuu[c, b]

MapAt[DotTensorFactors[{1, 2}], %, 2]

%//ExpandDotArray[Tensor[__]]

MapAt[DotOperate[1, Dot], %, 2]

Last[%] == contract24

Out[152]=

R_a^c == S_ab T^cb

Out[153]=

R_a^c == S_ab . T^cb

Out[154]=

Out[155]=

Out[156]=

True
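All four single contractions can be written as einsum one-liners in NumPy (a sketch with numeric stand-ins for S and T; each subscript string mirrors the corresponding index equation above):

```python
import numpy as np

# Hypothetical numeric stand-ins for S_ab and T^cd.
S = np.arange(9).reshape(3, 3)
T = np.arange(9, 18).reshape(3, 3)

c13 = np.einsum('ab,ac->bc', S, T)   # R_b^c = S_ab T^ac
c14 = np.einsum('ab,ca->bc', S, T)   # R_b^c = S_ab T^ca
c23 = np.einsum('ab,bc->ac', S, T)   # R_a^c = S_ab T^bc
c24 = np.einsum('ab,cb->ac', S, T)   # R_a^c = S_ab T^cb
```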

Double contractions

The following are the two double contractions of S⊗T.

In[157]:=

Sdd[a, b] Tuu[a, b]

contracta = %//ToArrayValues[]

Out[157]=

S_ab T^ab

Out[158]=

In[159]:=

Sdd[a, b] Tuu[b, a]

contractb = %//ToArrayValues[]

Out[159]=

S_ab T^ba

Out[160]=

Doing the same calculation by contracting the outer product array calculated in the previous section.

In[161]:=

ContractArray[ST, {1, 3}, {2, 4}]

% == contracta

Out[161]=

Out[162]=

True

In[163]:=

ContractArray[ST, {1, 4}, {2, 3}]

% == contractb

Out[163]=

Out[164]=

True

Doing the same calculations in dot mode.

In[165]:=

Sdd[a, b] Tuu[a, b]

%//DotTensorFactors[{1, 2}]

%//ExpandDotArray[Tensor[S, __]]//ExpandDotArray[Tensor[T, __], {2, 1}]

%//DotOperate[1, Function[{A, B}, ContractArray[A . B, {1, 2}]]]

% == contracta

Out[165]=

S_ab T^ab

Out[166]=

S_ab . T^ab

Out[167]=

Out[168]//MatrixForm=

Out[169]=

True

In[170]:=

Sdd[a, b] Tuu[b, a]

%//DotTensorFactors[{1, 2}]

%//ExpandDotArray[Tensor[S, __]]//ExpandDotArray[Tensor[T, __], {2, 1}]

%//DotOperate[1, Function[{A, B}, ContractArray[A . B, {1, 2}]]]

% == contractb

Out[170]=

S_ab T^ba

Out[171]=

S_ab . T^ba

Out[172]=

Out[173]//MatrixForm=

Out[174]=

True
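The two double contractions likewise reduce to einsum scalars in NumPy (same numeric stand-in idea; 'ab,ab' is the elementwise product summed, 'ab,ba' is the trace of the matrix product):

```python
import numpy as np

# Hypothetical numeric stand-ins for S_ab and T^ab.
S = np.arange(9).reshape(3, 3)
T = np.arange(9, 18).reshape(3, 3)

full_a = np.einsum('ab,ab->', S, T)   # S_ab T^ab
full_b = np.einsum('ab,ba->', S, T)   # S_ab T^ba
```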

4. Contracting a 3rd order tensor with a 2nd order tensor

With a 2nd and a 3rd order tensor we have many possibilities for contraction. Let's just do one case of a double contraction.

In[175]:=

ud[i] == Tduu[i, j, k] Sdd[j, k]

(step1 = %//ToArrayValues[])//TableForm

Out[175]=

u_i == S_jk T_i^jk

Out[176]//TableForm=

Performing the same calculation in dot mode...

In[177]:=

ud[i] == Tduu[i, j, k] Sdd[j, k]

MapAt[DotTensorFactors[{2, 1}], %, 2]

%//ExpandDotArray[Tensor[u | T, __]]//ExpandDotArray[Tensor[S, __], {2, 1}]

step2 = MapAt[DotOperate[1, ContractArray[#1 . #2, {2, 3}] &], %, 2]

Thread[%/.MatrixForm→Identity] === step1

Out[177]=

u_i == S_jk T_i^jk

Out[178]=

u_i == T_i^jk . S_jk

Out[179]=

Out[180]=

Out[181]=

True

Constructing T⊗S and contracting...

In[182]:=

(TS = Outer[Times, Tduu[a, b, c]//ToArrayValues[], Sdd[d, e]//ToArrayValues[]])//MatrixForm

ContractArray[TS, {2, 4}, {3, 5}]//MatrixForm

MatrixForm[%] === Last[step2]

Out[182]//MatrixForm=

Out[183]//MatrixForm=

Out[184]=

True

5. Sometimes Mathematica Array Methods Are Better

If we wanted to calculate the determinant of a tensor with the following values...

In[185]:=

SetTensorValues[Tuu[i, j], ({{1, 0, 1}, {2, 3, 5}, {4, -1, -2}})]

We could expand the tensor and use the Mathematica Det function.

In[186]:=

Det[Tuu[i, j]//ToArrayValues[]]

Out[186]=

-15

Or we could calculate using the Levi-Civita symbol and Tensorial methods.

In[187]:=

SetTensorValues[εddd[i, j, k], PermutationPseudotensor[NDim]]

and then evaluate the determinant by tensor methods. We obtain the same answer.

In[188]:=

εddd[i, j, k] Tuu[1, i] Tuu[2, j] Tuu[3, k]

%//EinsteinSum[]

Out[188]=

T^1i T^2j T^3k ε_ijk

Out[189]=

-15

The array method is approximately 3 times faster.

In[190]:=

Do[Det[Tuu[i, j]//ArrayExpansion[i, j]], {100}]//Timing

Do[εddd[i, j, k] Tuu[1, i] Tuu[2, j] Tuu[3, k]//SumExpansion[i, j, k], {100}]//Timing

Out[190]=

{1.203 Second, Null}

Out[191]=

{3.516 Second, Null}
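The same comparison can be sketched in NumPy (an analogue only): the dedicated determinant routine against the Levi-Civita expansion. As in the Timing test above, the dedicated routine is the one to prefer in practice.

```python
import numpy as np

T = np.array([[1, 0, 1],
              [2, 3, 5],
              [4, -1, -2]])

# Levi-Civita pseudotensor in 3D.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1
    eps[i, k, j] = -1

# det T = eps_ijk T^1i T^2j T^3k
det_eps = np.einsum('ijk,i,j,k->', eps, T[0], T[1], T[2])
det_direct = np.linalg.det(T)
```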

In[192]:=

ClearTensorValues/@{Tuu[i, j], εddd[i, j, k]} ;

ClearTensorShortcuts[{{e, f, u, v}, 1}, {{R, S, T, Λ, X}, 2}, {{R, T, ε, X}, 3}]


Created by Mathematica  (November 22, 2007)