NumPy is the foundation of the Python machine learning stack. NumPy allows for efficient operations on the data structures often used in machine learning: vectors, matrices, and tensors. While NumPy is not the focus of this book, it will show up frequently throughout the following chapters. This chapter covers the most common NumPy operations we are likely to run into while working on machine learning workflows.
NumPy’s main data structure is the multidimensional array. To create a vector, we simply create a one-dimensional array. Just like vectors, these arrays can be represented horizontally (i.e., rows) or vertically (i.e., columns).
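For example (a minimal sketch of the kind of solution used throughout this chapter; the variable names are just illustrative):

# Load library
import numpy as np

# Create a vector as a row
vector_row = np.array([1, 2, 3])

# Create a vector as a column
vector_column = np.array([[1],
                          [2],
                          [3]])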
To create a matrix we can use a NumPy two-dimensional array. In our solution, the matrix contains three rows and two columns (a column of 1s and a column of 2s).
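A matrix matching that description can be created like so:

# Load library
import numpy as np

# Create a matrix with three rows and two columns
matrix = np.array([[1, 2],
                   [1, 2],
                   [1, 2]])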
NumPy actually has a dedicated matrix data structure:
matrix_object = np.mat([[1, 2],
                        [1, 2],
                        [1, 2]])
matrix([[1, 2],
[1, 2],
[1, 2]])
However, the matrix data structure is not recommended for two reasons. First, arrays are the de facto standard data structure of NumPy. Second, the vast majority of NumPy operations return arrays, not matrix objects.
Given data with very few nonzero values, you want to efficiently represent it.
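The solution, which the discussion below relies on, is SciPy's sparse module; here is a minimal sketch that creates the matrix_sparse object referenced in the examples that follow:

# Load libraries
import numpy as np
from scipy import sparse

# Create a matrix with two nonzero values
matrix = np.array([[0, 0],
                   [0, 1],
                   [3, 0]])

# Create compressed sparse row (CSR) matrix
matrix_sparse = sparse.csr_matrix(matrix)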
A frequent situation in machine learning is having a huge amount of data; however, most of the elements in the data are zeros. For example, imagine a matrix where the columns are every movie on Netflix, the rows are every Netflix user, and the values are how many times a user has watched that particular movie. This matrix would have tens of thousands of columns and millions of rows! However, since most users do not watch most movies, the vast majority of elements would be zero.
Sparse matrices only store nonzero elements and assume all other values will be zero, leading to significant computational savings. In our solution, we created a NumPy array with two nonzero values, then converted it into a sparse matrix. If we view the sparse matrix we can see that only the nonzero values are stored:
# View sparse matrix
print(matrix_sparse)

  (1, 1)    1
  (2, 0)    3
There are a number of types of sparse matrices. However, in compressed sparse row (CSR) matrices, (1, 1) and (2, 0) represent the (zero-indexed) indices of the non-zero values 1 and 3, respectively. For example, the element 1 is in the second row and second column. We can see the advantage of sparse matrices if we create a much larger matrix with many more zero elements and then compare this larger matrix with our original sparse matrix:
# Create larger matrix
matrix_large = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                         [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
                         [3, 0, 0, 0, 0, 0, 0, 0, 0, 0]])

# Create compressed sparse row (CSR) matrix
matrix_large_sparse = sparse.csr_matrix(matrix_large)
# View original sparse matrix
print(matrix_sparse)

  (1, 1)    1
  (2, 0)    3
# View larger sparse matrix
print(matrix_large_sparse)

  (1, 1)    1
  (2, 0)    3
As we can see, despite the fact that we added many more zero elements in the larger matrix, its sparse representation is exactly the same as our original sparse matrix. That is, the addition of zero elements did not change the size of the sparse matrix.
As mentioned, there are many different types of sparse matrices, such as compressed sparse column, list of lists, and dictionary of keys. While an explanation of the different types and their implications is outside the scope of this book, it is worth noting that while there is no “best” sparse matrix type, there are meaningful differences between them and we should be conscious about why we are choosing one type over another.
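For illustration, the same dense data can be stored in several scipy.sparse formats, each optimized for different operations (the variable names here are just illustrative):

# Same data, different sparse formats
matrix_csc = sparse.csc_matrix(matrix_large)  # fast column slicing
matrix_lil = sparse.lil_matrix(matrix_large)  # efficient incremental construction
matrix_dok = sparse.dok_matrix(matrix_large)  # fast access to individual elements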
NumPy’s arrays make selecting elements easy:
# Load library
import numpy as np

# Create row vector
vector = np.array([1, 2, 3, 4, 5, 6])

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Select third element of vector
vector[2]
3
# Select second row, second column
matrix[1, 1]
5
Like most things in Python, NumPy arrays are zero-indexed, meaning that the index of the first element is 0, not 1. With that caveat, NumPy offers a wide variety of methods for selecting (i.e., indexing and slicing) elements or groups of elements in arrays:
# Select all elements of a vector
vector[:]
array([1, 2, 3, 4, 5, 6])
# Select everything up to and including the third element
vector[:3]
array([1, 2, 3])
# Select everything after the third element
vector[3:]
array([4, 5, 6])
# Select the last element
vector[-1]
6
# Select the first two rows and all columns of a matrix
matrix[:2, :]
array([[1, 2, 3],
[4, 5, 6]])
# Select all rows and the second column
matrix[:, 1:2]
array([[2],
[5],
[8]])
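Slices also accept a step value, which can be handy:

# Select every other element of the vector
vector[::2]

array([1, 3, 5])

# Reverse the vector
vector[::-1]

array([6, 5, 4, 3, 2, 1])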
Use shape, size, and ndim:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3, 4],
                   [5, 6, 7, 8],
                   [9, 10, 11, 12]])

# View number of rows and columns
matrix.shape
(3, 4)
# View number of elements (rows * columns)
matrix.size
12
# View number of dimensions
matrix.ndim
2
This might seem basic (and it is); however, time and again it will be valuable to check the shape and size of an array both for further calculations and simply as a gut check after some operation.
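For example, a quick gut check after a transpose:

# Transposing swaps the dimensions, so our 3x4 matrix becomes 4x3
matrix.T.shape

(4, 3)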
Use NumPy’s vectorize:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Create function that adds 100 to something
add_100 = lambda i: i + 100

# Create vectorized function
vectorized_add_100 = np.vectorize(add_100)

# Apply function to all elements in matrix
vectorized_add_100(matrix)
array([[101, 102, 103],
[104, 105, 106],
[107, 108, 109]])
NumPy’s vectorize class converts a function into a function that can be applied to all elements in an array or slice of an array. It’s worth noting that vectorize is essentially a for loop over the elements and does not increase performance. Furthermore, NumPy arrays allow us to perform operations between arrays even if their dimensions are not the same (a process called broadcasting). For example, we can create a much simpler version of our solution using broadcasting:
# Add 100 to all elements
matrix + 100
array([[101, 102, 103],
[104, 105, 106],
[107, 108, 109]])
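Broadcasting also works between arrays of different shapes; for example, we can add a one-dimensional vector to every row of the matrix:

# Add a different value to each column of every row
matrix + np.array([100, 200, 300])

array([[101, 202, 303],
       [104, 205, 306],
       [107, 208, 309]])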
Use NumPy’s max and min:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Return maximum element
np.max(matrix)
9
# Return minimum element
np.min(matrix)
1
Often we want to know the maximum and minimum value in an array or subset of an array. This can be accomplished with the max and min functions. Using the axis parameter we can also apply the operation along a certain axis:
# Find maximum element in each column
np.max(matrix, axis=0)
array([7, 8, 9])
# Find maximum element in each row
np.max(matrix, axis=1)
array([3, 6, 9])
Just like with max and min, we can easily get descriptive statistics about the whole matrix or do calculations along a single axis:
# Find the mean value in each column
np.mean(matrix, axis=0)
array([ 4., 5., 6.])
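The same pattern applies to other descriptive statistics, such as variance and standard deviation:

# Return variance
np.var(matrix)

6.666666666666667

# Return standard deviation
np.std(matrix)

2.581988897471611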
Use NumPy’s reshape:
# Load library
import numpy as np

# Create 4x3 matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9],
                   [10, 11, 12]])

# Reshape matrix into 2x6 matrix
matrix.reshape(2, 6)
array([[ 1, 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11, 12]])
reshape allows us to restructure an array so that we maintain
the same data but it is organized as a different number of rows and
columns. The only requirement is that the shape of the original and new
matrix contain the same number of elements (i.e., the same size). We can
see the size of a matrix using size:
matrix.size
12
One useful argument in reshape is -1, which effectively means “as many as needed,” so reshape(1, -1) means one row and as many columns as needed:
matrix.reshape(1, -1)
array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]])
Finally, if we provide one integer, reshape will return a 1D array of
that length:
matrix.reshape(12)
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
Use the T attribute:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Transpose matrix
matrix.T
array([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
Transposing is a common operation in linear algebra where the column and row indices of each element are swapped. One nuanced point that is typically overlooked outside of a linear algebra class is that, technically, a vector cannot be transposed because it is just a collection of values:
# Transpose vector
np.array([1, 2, 3, 4, 5, 6]).T
array([1, 2, 3, 4, 5, 6])
However, it is common to refer to transposing a vector as converting a row vector to a column vector (notice the second pair of brackets) or vice versa:
# Transpose row vector
np.array([[1, 2, 3, 4, 5, 6]]).T
array([[1],
[2],
[3],
[4],
[5],
[6]])
Use flatten:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Flatten matrix
matrix.flatten()
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
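One nuance worth noting: flatten always returns a copy of the data, while ravel returns a view of the original array when it can, which avoids copying:

# Flatten matrix without copying (when possible)
matrix.ravel()

array([1, 2, 3, 4, 5, 6, 7, 8, 9])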
Use NumPy’s linear algebra method matrix_rank:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 1, 1],
                   [1, 1, 10],
                   [1, 1, 15]])

# Return matrix rank
np.linalg.matrix_rank(matrix)
2
The rank of a matrix is the dimension of the vector space spanned by its columns or rows. In our solution the first two columns are identical, so the matrix spans only two dimensions and has rank 2 rather than 3. Finding the rank of a matrix is easy in NumPy thanks to matrix_rank.
Use NumPy’s linear algebra method det:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [2, 4, 6],
                   [3, 8, 9]])

# Return determinant of matrix
np.linalg.det(matrix)
0.0
It can sometimes be useful to calculate the determinant of a matrix.
NumPy makes this easy with det.
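In our solution the determinant is 0 because the second row is twice the first, making the matrix singular; we can confirm the linear dependence by checking the rank:

# Confirm the rows are linearly dependent (rank 2 rather than 3)
np.linalg.matrix_rank(matrix)

2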
Use diagonal:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [2, 4, 6],
                   [3, 8, 9]])

# Return diagonal elements
matrix.diagonal()
array([1, 4, 9])
NumPy makes getting the diagonal elements of a matrix easy with
diagonal. It is also possible to get a diagonal off from the main
diagonal by using the offset parameter:
# Return diagonal one above the main diagonal
matrix.diagonal(offset=1)
array([2, 6])
# Return diagonal one below the main diagonal
matrix.diagonal(offset=-1)
array([2, 8])
Use trace:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [2, 4, 6],
                   [3, 8, 9]])

# Return trace
matrix.trace()
14
The trace of a matrix is the sum of the diagonal elements and is often
used under the hood in machine learning methods. Given a NumPy multidimensional array, we can calculate the trace using trace. We can also return the diagonal of a matrix and calculate its sum:
# Return diagonal and sum elements
sum(matrix.diagonal())
14
Use NumPy’s linalg.eig:

# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, -1, 3],
                   [1, 1, 6],
                   [3, 8, 9]])

# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(matrix)
# View eigenvalues
eigenvalues
array([ 13.55075847, 0.74003145, -3.29078992])
# View eigenvectors
eigenvectors
array([[-0.17622017, -0.96677403, -0.53373322],
[-0.435951 , 0.2053623 , -0.64324848],
[-0.88254925, 0.15223105, 0.54896288]])
Eigenvectors are widely used in machine learning libraries. Intuitively, given a linear transformation represented by a matrix, A, eigenvectors are vectors that, when that transformation is applied, change only in scale (not direction). More formally:

Av = λv

where A is a square matrix, λ contains the eigenvalues, and v contains the eigenvectors. In NumPy’s linear algebra toolset, eig lets us calculate the eigenvalues and eigenvectors of any square matrix.
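We can verify the relationship numerically for the first eigenpair (np.linalg.eig returns the eigenvectors as columns):

# Confirm that Av = λv for the first eigenvalue/eigenvector pair
np.allclose(matrix @ eigenvectors[:, 0],
            eigenvalues[0] * eigenvectors[:, 0])

True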
Use NumPy’s dot:
# Load library
import numpy as np

# Create two vectors
vector_a = np.array([1, 2, 3])
vector_b = np.array([4, 5, 6])

# Calculate dot product
np.dot(vector_a, vector_b)
32
The dot product of two vectors, a and b, is defined as:

a · b = Σᵢ aᵢbᵢ

where aᵢ is the ith element of vector a and bᵢ is the ith element of vector b. We can use NumPy’s dot function to calculate the dot product. Alternatively, in Python 3.5+ we can use the @ operator:
# Calculate dot product
vector_a @ vector_b
32
Use NumPy’s add and subtract:
# Load library
import numpy as np

# Create matrix
matrix_a = np.array([[1, 1, 1],
                     [1, 1, 1],
                     [1, 1, 2]])

# Create matrix
matrix_b = np.array([[1, 3, 1],
                     [1, 3, 1],
                     [1, 3, 8]])

# Add two matrices
np.add(matrix_a, matrix_b)
array([[ 2, 4, 2],
[ 2, 4, 2],
[ 2, 4, 10]])
# Subtract two matrices
np.subtract(matrix_a, matrix_b)
array([[ 0, -2, 0],
[ 0, -2, 0],
[ 0, -2, -6]])
Alternatively, we can simply use the + and -
operators:
# Add two matrices
matrix_a + matrix_b
array([[ 2, 4, 2],
[ 2, 4, 2],
[ 2, 4, 10]])
Use NumPy’s dot:
# Load library
import numpy as np

# Create matrix
matrix_a = np.array([[1, 1],
                     [1, 2]])

# Create matrix
matrix_b = np.array([[1, 3],
                     [1, 2]])

# Multiply two matrices
np.dot(matrix_a, matrix_b)
array([[2, 5],
[3, 7]])
Alternatively, in Python 3.5+ we can use the @
operator:
# Multiply two matrices
matrix_a @ matrix_b
array([[2, 5],
[3, 7]])
If we want to do element-wise multiplication, we can use the *
operator:
# Multiply two matrices element-wise
matrix_a * matrix_b
array([[1, 3],
[1, 4]])
Use NumPy’s linear algebra method inv:
# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 4],
                   [2, 5]])

# Calculate inverse of matrix
np.linalg.inv(matrix)
array([[-1.66666667, 1.33333333],
[ 0.66666667, -0.33333333]])
The inverse of a square matrix, A, is a second matrix, A⁻¹, such that:

AA⁻¹ = I

where I is the identity matrix. In NumPy we can use linalg.inv to calculate A⁻¹ if it exists. To see this in action, we can multiply a matrix by its inverse, and the result is the identity matrix:
# Multiply matrix and its inverse
matrix @ np.linalg.inv(matrix)
array([[ 1., 0.],
[ 0., 1.]])
Use NumPy’s random:
# Load library
import numpy as np

# Set seed
np.random.seed(0)

# Generate three random floats between 0.0 and 1.0
np.random.random(3)
array([ 0.5488135 , 0.71518937, 0.60276338])
NumPy offers a wide variety of means to generate random numbers, many more than can be covered here. In our solution we generated floats; however, it is also common to generate integers:
# Generate three random integers between 0 and 10
np.random.randint(0, 11, 3)
array([3, 7, 9])
Alternatively, we can generate numbers by drawing them from a distribution:
# Draw three numbers from a normal distribution with mean 0.0
# and standard deviation of 1.0
np.random.normal(0.0, 1.0, 3)
array([-1.42232584, 1.52006949, -0.29139398])
# Draw three numbers from a logistic distribution with mean 0.0 and scale of 1.0
np.random.logistic(0.0, 1.0, 3)
array([-0.98118713, -0.08939902, 1.46416405])
# Draw three numbers greater than or equal to 1.0 and less than 2.0
np.random.uniform(1.0, 2.0, 3)
array([ 1.47997717, 1.3927848 , 1.83607876])
Finally, it can sometimes be useful to return the same random numbers multiple times to get predictable, repeatable results. We can do this by setting the “seed” (an integer) of the pseudorandom generator. Random processes with the same seed will always produce the same output. We will use seeds throughout this book so that the code you see in the book and the code you run on your computer produces the same results.
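For example, reusing the seed from our solution reproduces exactly the same floats:

# Set seed
np.random.seed(0)

# Generate three random floats (the same values as in our solution)
np.random.random(3)

array([ 0.5488135 , 0.71518937, 0.60276338])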