my musings on bits, life, and universe

What's the fuss about .ndim in PyTorch?

One of my long-standing doubts finally got cleared. When I create a tensor in PyTorch like torch.randn(m, n, k), its dimension is reported as 3 when I check the .ndim attribute, not (m x n x k). This confused me for years, because in the context of mathematics the vector space to which the tensor belongs is clearly the space of (m x n x k) arrays, whose dimension is (m x n x k).
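A quick sketch of the mismatch, using concrete values m = 2, n = 3, k = 4 chosen just for illustration:

```python
import torch

# A tensor with shape (2, 3, 4): three axes, so .ndim reports 3.
t = torch.randn(2, 3, 4)
print(t.ndim)     # 3 -- the number of axes, not the product of the sizes
print(t.shape)    # torch.Size([2, 3, 4])

# The vector space of all (2 x 3 x 4) tensors has dimension 2 * 3 * 4 = 24,
# which PyTorch exposes separately as the total element count:
print(t.numel())  # 24
```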

The dimension of a vector space is, roughly, the number of fundamental elements of that space from which you can generate every element living in it using two simple operations: addition and scalar multiplication. Multiply each fundamental element by some scalar and add the results, and you can obtain any element of the vector space. In more technical language, these fundamental elements are called a 'basis'. In our context, the vector space is the space containing all possible tensors of shape (m x n x k).
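To make 'basis' concrete, here is a small sketch with a (2 x 3) shape (kept small only for readability): the standard basis consists of tensors with a single 1 and zeros elsewhere, and any tensor is a linear combination of them, with the tensor's own entries as the scalar coefficients.

```python
import torch

t = torch.randn(2, 3)

# Standard basis: e_ij has a 1 at position (i, j) and 0 everywhere else.
# There are 2 * 3 = 6 of them -- that count is the dimension of the space.
basis = []
for i in range(2):
    for j in range(3):
        e = torch.zeros(2, 3)
        e[i, j] = 1.0
        basis.append(e)

# Any (2 x 3) tensor is a linear combination of the basis elements,
# scaled by the tensor's own entries.
reconstructed = sum(
    t[i, j] * basis[i * 3 + j] for i in range(2) for j in range(3)
)
print(torch.allclose(t, reconstructed))  # True
```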

The dilemma arises from a language issue, not anything fundamental: PyTorch's .ndim does not refer to the dimension of the vector space the tensor lives in; rather, loosely speaking, it refers to the number of independent 'axes' of that tensor. Put another way, it is the number of indices required to pick out a single scalar from the tensor, which here is 3. In mathematics, this same quantity is usually called the order of the tensor (or, in physics parlance, its rank).
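The index-counting view is easy to check in code (.dim() is the method form of the .ndim attribute, and both agree with the length of the shape):

```python
import torch

t = torch.randn(4, 5, 6)

# Three indices are needed to pull out a single scalar...
scalar = t[0, 2, 1]
print(scalar.ndim)      # 0 -- a fully indexed tensor has no axes left

# ...and that count of indices is exactly what .ndim / .dim() report.
print(t.ndim, t.dim())  # 3 3
print(len(t.shape))     # also 3
```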