How do Tensors/Matrices Work?

Multi-dimensional data is needed to articulate the relationships and depth of information that can come from non-numerical data when it is given to a machine learning model.

Since machine learning models can’t “see”, you need to convert the information into a numerical structure that neural networks work with, so the numbers can be “consumed” or “felt”.
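As a minimal sketch of that conversion (my own toy example, not how real models encode text): the word “cat” means nothing to a model, but mapping each character to its Unicode code point turns it into numbers an array can hold.

```python
import numpy as np

# Toy example: a model can't "see" the word "cat", so map each
# character to a number (its Unicode code point).
word = "cat"
encoded = np.array([ord(ch) for ch in word])
print(encoded)        # [ 99  97 116]
print(encoded.shape)  # (3,) — a 1-D array of three numbers
```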

When you run something like np.shape(), you get a return value of the form (y, x, z)

This is not the literal output (the actual value looks like ‘(2, 3, 3)’), but that is how the structure works in my mind.

  • y is the “height”, or number of rows, of the matrix
  • x is the “width”, or number of columns, of the matrix
  • z is the “depth”, or how many layers “deep” the matrix goes

So the value (2, 3, 5) can be translated into this statement:

CAUTION

a matrix two rows high, three columns wide, and five “pages” deep

I say “pages” because I like to think of the tabular shape of matrices as a book: every increase along the z-axis adds pages to the book.
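The book metaphor can be sketched in code: stacking 2×3 matrices with np.stack creates the “pages” axis, and concatenating along that axis adds more pages to the book.

```python
import numpy as np

page = np.arange(6).reshape(2, 3)       # one 2x3 "page"
book = np.stack([page, page + 6])       # stack two pages into a book
print(book.shape)                       # (2, 2, 3) — 2 pages of 2x3

thicker = np.concatenate([book, book])  # add more pages along axis 0
print(thicker.shape)                    # (4, 2, 3)
```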