
9. Classification of Numbers Using Neural Networks


Pattern recognition in images is a classic application of neural nets. This chapter builds upon the previous one by exploring multi-layer networks, which fall into the Machine Learning branch of our Autonomous Learning taxonomy. In this case, we will look at images of computer-generated digits, and the problem of identifying the digits correctly. These images will represent numbers from scanned documents. Attempting to capture the variation in digits with algorithmic rules, considering fonts and other factors, quickly becomes impossibly complex, but with a large number of examples, a neural net can readily perform the task. We allow the weights in the net to perform the job of inferring rules about how each digit may be shaped, rather than codifying them explicitly.


For the purposes of this chapter, we will limit ourselves to images of a single digit. The process of segmenting a series of digits into individual images is one that may be solved by many techniques, not just neural nets.

9.1 Generate Test Images with Defects

9.1.1 Problem

The first step in creating our classification system is to generate sample data. In this case, we want to create images of the numbers 0 to 9 and generate test images with defects. For our purposes, defects will be introduced with simple Poisson or shot noise, that is, a random perturbation whose standard deviation equals the square root of each pixel value.
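As a quick sketch of this noise model, assuming a uint8 pixel array named pixels (the full generation function is developed in the How It Works section below):

 p     = double(pixels);             % work in floating point
 noise = sqrt(p).*randn(size(p));    % standard deviation = sqrt(pixel value)
 noisy = uint8(p - noise);           % uint8 arithmetic saturates at 0 and 255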

9.1.2 Solution

We will generate the images in MATLAB by writing a digit to an axis using text, then creating an image using print. There is an option to capture the pixel data directly from print without creating an interim file, which we will utilize. We will extract the 16x16 pixel area with our digit, and then apply the noise. We will also allow the font to be an input. See Figure 9.1 for examples.
Figure 9.1 Sample images of the digits 0 and 1 with noise added.

9.1.3 How It Works

The code listing for the CreateDigitImage function is below. The inputs are the digit and the desired font. It creates a 16x16 pixel image of a single digit. The intermediate figure used to display the digit text is invisible. We use the '-RGBImage' option of print to get the pixel values without creating an image file. The function includes a built-in demo: if no inputs are given, it creates pixels for the digit 0, and if no outputs are requested, it displays the image in a figure. The default font, if none is given, is Courier.

function pixels = CreateDigitImage( num, fontname )

if nargin < 1
  num = 0;
  CreateDigitImage( num );
  return
end
if nargin < 2
  fontname = 'courier';
end

fonts = listfonts;
avail = strcmpi(fontname,fonts);
if ~any(avail)
  error('MachineLearning:CreateDigitImage',...
    'Sorry, the font ''%s'' is not available.',fontname);
end

f  = figure('Name','Digit','visible','off');
a1 = axes( 'Parent', f, 'box', 'off', 'units', 'pixels', 'position', [0 0 16 16] );

% 20 point font digits are 15 pixels tall (on Mac OS)
% text(axes,x,y,string)
text(a1,4,10,num2str(num),'fontsize',19,'fontunits','pixels','unit','pixels',...
  'fontname',fontname)

% Obtain image data using print and convert to grayscale
cData = print('-RGBImage','-r0');
iGray = rgb2gray(cData);

% The print image coordinate system starts from the upper left of the figure, NOT
% the bottom, so our digit is in the LAST 16 rows and the FIRST 16 columns
pixels = iGray(end-15:end,1:16);

% Apply Poisson (shot) noise; convert the pixel values to double for the
% operation and then back to uint8. The uint8 type saturates at 0 and 255,
% so there is no need to apply a limit.
noise  = uint8( sqrt(double(pixels)).*randn(16,16) );
pixels = pixels - noise;

close(f);

if nargout == 0
  h = figure('name','Digit Image');
  imagesc(pixels);
  colormap(h,'gray');
  grid on
  set(gca,'xtick',1:16)
  set(gca,'ytick',1:16)
  colorbar
end

TIP

Note that we check that the font exists using listfonts before trying to use it, and throw an error if it’s not found.

Now, we can create the training data using images generated with our new function. In the recipes below we will use data for both a single-digit identification and a multiple-digit identification net. We use a for loop to create a set of images and save them to a MAT-file using the helper function SaveTS. This saves the training sets with their input and output, and indices for training and testing, in a special structure format. Note that we scale the pixel values, which are nominally integers with a value from 0 to 255, to have values between 0 and 1.
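SaveTS itself is provided in the companion code. A minimal sketch of what it might look like, assuming the field names shown later in this recipe (inputs, desOutputs, trainSets, testSets) and a uiputfile dialog for the filename, is:

function SaveTS( inputs, desOutputs, trainSets, testSets )
% Sketch of a training-set save helper (the real SaveTS is in the companion code)
[fileName,pathName] = uiputfile('*.mat','Save Training Set');
if isequal(fileName,0)
  return   % user cancelled
end
varName = strrep(fileName,'.mat','');
ts = struct('inputs',inputs,'desOutputs',desOutputs,...
            'trainSets',trainSets,'testSets',testSets);
s.(varName) = ts;   % store the struct under the file's name, as in the loaded example below
save(fullfile(pathName,fileName),'-struct','s');
end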

Our data generating script DigitTrainingData uses a for loop to create a set of noisy images for each desired digit (between 0 and 9). It saves the data along with indices for data to use for training. The pixel output of the images is scaled from 0 (black) to 1 (white), so it is suitable for neuron activation in the neural net. It has two flags at the top, one for a one-digit mode and a second to automatically change fonts.

%% Generate the training data
% Control switches
oneDigitMode = true;   % the first digit is the desired output
changeFonts  = true;   % randomly select a font

% Number of training data sets
digits     = 0:5;
nImagesPer = 20;

% Prepare data
nDigits   = length(digits);
nImages   = nDigits*nImagesPer;
input     = zeros(256,nImages);
output    = zeros(1,nImages);
trainSets = [];
testSets  = [];

if (changeFonts)
  fonts = {'times','helvetica','courier'};
else
  fonts = {'times'};
  kFont = 1;
end

% Loop through digits
kImage = 1;
for j = 1:nDigits
  fprintf('Digit %d\n', digits(j));
  for k = 1:nImagesPer
    if (changeFonts)
      % choose a font randomly
      kFont = ceil(rand*3);
    end
    pixels = CreateDigitImage( digits(j), fonts{kFont} );
    % scale the pixels to a range of 0 to 1
    pixels = double(pixels);
    pixels = pixels/255;
    input(:,kImage) = pixels(:);
    if (oneDigitMode)
      if (j == 1)
        output(j,kImage) = 1;
      end
    else
      output(j,kImage) = 1;
    end
    kImage = kImage + 1;
  end
  % Per-digit split (superseded by the 75% split below)
  sets = randperm(10);
  trainSets = [trainSets (j-1)*nImagesPer+sets(1:5)];   %#ok<AGROW>
  testSets  = [testSets  (j-1)*nImagesPer+sets(6:10)];  %#ok<AGROW>
end

% Use 75% of the images for training and save the rest for testing
trainSets = sort( randperm(nImages,floor(0.75*nImages)) );
testSets  = setdiff(1:nImages,trainSets);

% Save the training set to a MAT-file (dialog window will open)
SaveTS( input, output, trainSets, testSets );

The helper function will ask for a filename and save the training set. You can load it at the command line to verify the fields. Here’s an example with the training and testing sets truncated:

>> trainingData = load('Digit0TrainingTS')
trainingData =
  struct with fields:
    Digit0TrainingTS: [1x1 struct]
>> trainingData.Digit0TrainingTS
ans =
  struct with fields:
        inputs: [256x120 double]
    desOutputs: [1x120 double]
     trainSets: [1 3 4 5 6 8 9 ...  115 117 118 120]
      testSets: [2 7 16 20 28 33 37 ... 112 114 116 119]
Note that the output field is a Boolean with a value of 1 when the image is of the desired digit and 0 when it is not. In the single-digit data sets, selected by using the Boolean flag oneDigitMode, the output is a single row. In a multi-digit set, it has as many rows as there are digits in the set. The images use a randomly selected font from among Times, Helvetica, and Courier if the changeFonts Boolean is true. Table 9.1 shows the three training sets created using this script.
Table 9.1 Digit Training Sets

Digit0TrainingTS   Single-digit set with 120 images of the digits 0 through 5, all in the same font
Digit0FontsTS      Single-digit set of 0 through 5 with random fonts
DigitTrainingTS    Multi-digit set with 200 images of the digits 0 through 9, same font

These sets will be used throughout the recipes in this chapter.

Figure 9.2 shows example images of the digit 2 in the three different fonts, from Digit0FontsTS.
Figure 9.2 Images of the digit 2 in different fonts.

9.2 Create the Neural Net Functions

9.2.1 Problem

We want to create a neural net tool that can be trained to identify the digits. In this recipe we will discuss the functions underlying the NeuralNetDeveloper tool, shown in the next recipe. This interface does not use the latest graphical user interface (GUI) building features of MATLAB, so we will not go into detail about the GUI code itself, although the full GUI is available in the companion code.

9.2.2 Solution

The GUI uses a multi-layer feed-forward (MLFF) neural network function to classify digits. In this type of network, each neuron depends only on the inputs it receives from the previous layer. We will discuss the function that implements the neuron.

9.2.3 How It Works

The basis of the neural net is the Neuron function. Our neuron function provides six different activation types: sign, sigmoid mag, step, logistic, tanh, and sum [22]. This can be seen in Figure 9.3.

The default activation function is $\tanh$. Two other functions useful in multi-layer networks are the exponential (sigmoid logistic) function:
$$\displaystyle \frac{1}{1 + e^{-x}} $$
(9.1)
and the sigmoid magnitude function:
$$\displaystyle \frac{x}{1 + |x|} $$
(9.2)
where "sigmoid" refers to a function with an S-shape.

It is a good idea to try different activation functions for any new problem. The activation function is what distinguishes a neural network, and machine learning, from curve fitting. The input x is the weighted sum of the neuron's inputs plus a bias term.

TIP

The sum activation function is linear and the output is just the sum of the inputs.

The following code shows Neuron, which implements a single neuron in the neural net. It has as an input the type, or activation function, and the outputs include the derivative of this function. A default type of log is enabled (for the sigmoid logistic function).
Figure 9.3 Available neuron activation functions: sign, sigmoid mag, step, logistic (log), tanh, and sum.

function [y, dYDX] = Neuron( x, type, t )

% Input processing
if( nargin < 1 )
  x = [];
end
if( nargin < 2 )
  type = [];
end
if( nargin < 3 )
  t = 0;
end
if( isempty(type) )
  type = 'log';
end
if( isempty(x) )
  x = sort( [linspace(-5,5) 0] );
end

% Compute the function value and the derivative
switch lower( deblank(type) )
  case 'tanh'
    yX   = tanh(x);
    dYDX = sech(x).^2;
  case 'log'
    % sigmoid logistic function
    yX   = 1./(1 + exp(-x));
    dYDX = yX.*(1 - yX);
  case 'mag'
    % sigmoid magnitude function
    d    = 1 + abs(x);
    yX   = x./d;
    dYDX = 1./d.^2;
  case 'sign'
    yX           = ones(size(x));
    yX(x < 0)    = -1;
    dYDX         = zeros(size(yX));
    dYDX(x == 0) = inf;
  case 'step'
    yX           = ones(size(x));
    yX(x < t)    = 0;
    dYDX         = zeros(size(yX));
    dYDX(x == t) = inf;
  case 'sum'
    yX   = x;
    dYDX = ones(size(yX));
  otherwise
    error([type ' is not recognized'])
end

% Output processing
if( nargout == 0 )
  PlotSet( x, yX, 'x label', 'Input', 'y label', 'Output',...
    'plot title', [type ' Neuron'] );
  PlotSet( x, dYDX, 'x label', 'Input', 'y label', 'dOutput/dX',...
    'plot title', ['Derivative of ' type ' Function'] );
else
  y = yX;
end
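For example, to compare two activation functions on the same inputs (these calls just exercise the function above; PlotSet is the companion plotting utility):

x       = linspace(-5,5);
[yT,dT] = Neuron( x, 'tanh' );   % hyperbolic tangent and its derivative
[yL,dL] = Neuron( x, 'log' );    % logistic sigmoid and its derivative
plot(x,yT,x,yL); legend('tanh','log'); grid on
Neuron( x, 'mag' );              % with no outputs requested, Neuron plots the sigmoid magnitude function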

Neurons are combined into the feed-forward neural network using a simple data structure of layers and weights. The input to each neuron is a combination of the signal y, the weight w, and the bias w0, as in this line:

y = Neuron( w*y - w0, type );

The output of the network is calculated by the function NeuralNetMLFF. This computes the output of a MLFF neural net. Note that this also outputs the derivatives as obtained from the neuron activation functions, for use in training. The function is described below:

  %% NEURALNETMLFF Computes the output of a multilayer feed-forward neural net.
  % The input layer is a data structure that contains the network data.
  % This data structure must contain the weights and activation functions
  % for each layer. Calls the Neuron function.
  %
  % The output layer is the input data structure augmented to include
  % the inputs, outputs, and derivatives of each layer for each run.
  %% Form
  %   [y, dY, layer] = NeuralNetMLFF( x, network )  
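Putting the pieces together, a minimal sketch of the feed-forward pass might look like the following. The layer fields w and w0 appear in the training code later in this recipe; the type field name is an assumption, and the real NeuralNetMLFF also stores the inputs, outputs, and derivatives of each layer.

% Sketch: propagate an input column vector x through the layers
y = x;
for k = 1:length(network.layer)
  w  = network.layer(k).w;    % weights into layer k
  w0 = network.layer(k).w0;   % biases for layer k
  [y,dY] = Neuron( w*y - w0, network.layer(k).type );
end
% y is now the network output; dY holds the last layer's derivatives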

The network is passed in and returned as a data structure containing the weights and activation functions for each layer, with the returned structure augmented by each layer's inputs, outputs, and derivatives. Our network will use back propagation as the training method [19]. This is a gradient descent method, and it uses the derivatives output by the network directly. Because of this use of derivatives, any threshold activation, such as the step function, is substituted with a sigmoid during training so that the network remains continuous and differentiable. The main parameter is the learning rate α, which scales the gradient changes applied to the weights on each iteration. This is implemented in NeuralNetTraining.

The NeuralNetTraining function performs training, that is, it computes the weights in the neurons, using back propagation. If no inputs are given, it will do a demo for the network where node 1 and node 2 use exp functions for the activation functions. The function form is given below.

  %% NEURALNETTRAINING Training using back propagation.
  % Computes the weights for a neural net using back propagation. If no inputs are
  % given it will do a demo for the network where node 1 and node 2 use exp
  % functions. Calls NeuralNetMLFF which implements the network.
  %
  %   sin(    x) -- node 1
  %              \ /      \
  %               \        ---> Output
  %              / \      /
  %   sin(0.2*x) -- node 2
  %
  %% Form
  %  [w, e, layer] = NeuralNetTraining( x, y, layer )  

The back propagation is performed by calling NeuralNetMLFF in a loop for the number of runs requested. A wait bar is displayed, since training can take some time. Note that this can handle any number of intermediate layers. The field alpha contains the learning rate for the method.
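In equation form (our notation, read off the loop below), with e the output error, f' the activation derivative returned by Neuron, x_k the input to layer k, and α the learning rate, the updates are
$$\displaystyle \begin{aligned} \delta_{L} &= e \circ f_{L}'(x_{L}), \qquad \delta_{k} = f_{k}'(x_{k}) \circ \left( W_{k+1}^{T}\,\delta_{k+1} \right) \\ W_{k} &\leftarrow W_{k} + \alpha\,\delta_{k}\,x_{k}^{T}, \qquad w_{0,k} \leftarrow w_{0,k} - \alpha\,\delta_{k} \end{aligned} $$
where ∘ denotes element-wise multiplication and L is the output layer.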

% Perform back propagation
h = waitbar(0, 'Neural Net Training in Progress');
for j = 1:nRuns
  % Work backward from the output layer
  [yN, dYN, layerT] = NeuralNetMLFF( x(:,j), temp );
  e(:,j)            = y(:,j) - yN(:,1);   % error
  for k = 1:nLayers
    layer(k,j).w  = temp.layer(k,1).w;
    layer(k,j).w0 = temp.layer(k,1).w0;
    layer(k,j).x  = layerT(k,1).x;
    layer(k,j).y  = layerT(k,1).y;
    layer(k,j).dY = layerT(k,1).dY;
  end
  % Last layer delta is calculated first
  layer(nLayers,j).delta = e(:,j).*dYN(:,1);
  % Intermediate layers use the subsequent layer's delta
  for k = (nLayers-1):-1:1
    layer(k,j).delta = layer(k,j).dY.*(temp.layer(k+1,1).w'*layer(k+1,j).delta);
  end
  % Now that we have all the deltas, update the weights (w) and biases (w0)
  for k = 1:nLayers
    temp.layer(k,1).w  = temp.layer(k,1).w  + layer(k,1).alpha*layer(k,j).delta*layer(k,j).x';
    temp.layer(k,1).w0 = temp.layer(k,1).w0 - layer(k,1).alpha*layer(k,j).delta;
  end
  waitbar(j/nRuns);
end
w = temp.layer;
close(h);

9.3 Train a Network with One Output Node

9.3.1 Problem

We want to train the neural network to classify numbers. A good first step is identifying a single number. In this case, we will have a single output node, and our training data will include our desired digit, starting with 0, plus a few other digits (1–5).

9.3.2 Solution

We can create this neural network with our GUI, shown in Figure 9.4. The network flows from left to right in the graphic. We can try training the net with the output node having different types, such as sign and logistic. In our case, we start with a sigmoid function for the hidden layer and a step function for the output node.

The box on the upper left of the GUI lets you set up the network: the number of inputs, in this case one per pixel; the number of outputs, one because we want to identify a single digit; and the number of hidden layers. The box to the right lets us design each layer. All neurons in a layer are identical. The box on the far right lets us set the weight for each input to the node and the bias for the node. The path field gives the path to the training data. The display shows the resulting network. The graphic is useful, but the number of nodes in the hidden layer makes it hard to read.

Our GUI has a separate training window, shown in Figure 9.5. It has buttons for loading and saving training sets, training, and testing the trained neural net. It will plot results automatically based on the selected preferences. In this case, we have loaded the multiple-font training set from Recipe 9.1, Digit0FontsTS, which is displayed at the top of the figure window.
Figure 9.4 A neural net with 256 inputs, one per pixel, an intermediate layer with 30 nodes, and one output.

9.3.3 How It Works

We build the network using the GUI with 256 inputs, one for each pixel; 30 nodes in one hidden layer; and 1 output node. We load the training data from the first recipe into the Trainer GUI and must select the number of training runs. A total of 2000 runs should be sufficient if our neuron functions are selected properly. We have an additional parameter to select, the learning rate for the back propagation; it is reasonable to start with a value of 1.0. Note that our training data script assigned 75% of the images for training and reserved the remainder for testing, using randperm to extract a random set of images. The training records the weights and biases for each run and generates plots on completion. We can easily plot these for the output node, which has just 30 weights and one bias. See Figure 9.6.
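For readers who prefer to skip the GUI, a rough sketch of what the developer and trainer do under the hood might look like the following. The w, w0, and alpha fields and the NeuralNetTraining form come from Recipe 9.2; the type field name, the initialization scale, and the saved file name Digit0FontsTS are assumptions.

% Load the multi-font single-digit training set saved by DigitTrainingData
d  = load('Digit0FontsTS');
ts = d.Digit0FontsTS;

% Hypothetical 256-30-1 layer array
layer(1,1).w     = 0.1*randn(30,256);   % hidden layer weights
layer(1,1).w0    = 0.1*randn(30,1);     % hidden layer biases
layer(1,1).type  = 'log';               % sigmoid activation
layer(1,1).alpha = 1;                   % learning rate
layer(2,1).w     = 0.1*randn(1,30);     % output node weights
layer(2,1).w0    = 0.1*randn(1,1);
layer(2,1).type  = 'log';               % step is swapped for a sigmoid during training
layer(2,1).alpha = 1;

% Train on the columns flagged for training
x = ts.inputs(:,ts.trainSets);
y = ts.desOutputs(:,ts.trainSets);
[w,e] = NeuralNetTraining( x, y, layer );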
Figure 9.5 The neural net training GUI opens when the train button is clicked in the developer.

The training function also outputs the training error as the net evolves and the root mean square error (RMSE), shown in Figure 9.7; the RMSE drops to near 1e-2 by about run 1000.

Since we have a large number of input neurons, a line plot is not very useful for visualizing the evolution of the weights for the hidden layer. However, we can view the weights at any given iteration as an image. Figure 9.8 shows the weights for the network with 30 nodes after training visualized using imagesc. We may wonder if we really need all 30 nodes in the hidden layer, or if we could extract the necessary number of features identifying our chosen digit with fewer. In the image on the right, the weights are shown sorted along the dimension of the input pixels for each node; we can clearly see that only a few nodes seem to have much variation from the random values they are initialized with, especially nodes 14, 18, and 21. That is, many of our nodes seem to be having no impact.
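The sorted view on the right of Figure 9.8 can be reproduced with a one-line command, assuming w holds the 30x256 hidden-layer weight matrix:

imagesc( sort(w,2) )   % sort each node's 256 pixel weights; nearly flat rows changed little during training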

Since this visualization seems helpful, we add the code to the training GUI after the generation of the weight line plots. We create two images in one figure, with the initial values of the weights on the left and the trained values on the right. The HSV colormap looks more striking here than the default parula map. The code that generates the images in NeuralNetTrainer looks like this:
Figure 9.6 Layer 2 node weights and biases evolution during training.

Figure 9.7 Single digit training error and RMSE.

Figure 9.8 Single digit network, 30-node hidden layer weights. The plot on the left shows the weight values. The plot on the right shows the weights sorted by pixel for each node.

% New figure: weights as image
newH = figure('name',['Node Weights for Layer ' num2str(j)]);
endWeights = [h.train.network(j,1).w(:);h.train.network(j,end).w(:)];
minW = min(endWeights);
maxW = max(endWeights);
subplot(1,2,1)
imagesc(h.train.network(j,1).w,[minW maxW])
colorbar
ylabel('Output Node')
xlabel('Input Node')
title('Weights Before Training')
subplot(1,2,2)
imagesc(h.train.network(j,end).w,[minW maxW])
colorbar
xlabel('Input Node')
title('Weights After Training')
colormap hsv
h.resultsFig = [newH; h.resultsFig];
Note that we compute the minimum and maximum weight values among both the initial and final iterations, to scale the two color maps the same. Now, since many of our 30 initial nodes seemed unneeded, we reduce the number of nodes in that layer to 10, reinitialize the weights (randomly), and train again. Now we get our new figure with the weights displayed as an image before and after the training, Figure 9.9.
Figure 9.9 Single digit network, 10-node hidden layer weights before and after training. The first row shows the data for the first layer, and the second for the second layer, which has just one output.

Now we can see more patches of colors that have diverged from the initial random weights in the images for the 256 pixels weights, and we see clear variation in the weights for the second layer as well. The GUI allows you to save the trained net for future use.

9.4 Testing the Neural Network

9.4.1 Problem

We want to test the single-digit neural net that we trained in the previous recipe.

9.4.2 Solution

We can test the network with inputs that were not used in training. This is explicitly allowed in the GUI, as it has separate indices for the training data and testing data. We selected 75% of our sample images for training and saved the remaining images for testing in our DigitTrainingData script from Recipe 9.1.

9.4.3 How It Works

In the case of our GUI, simply click the test button to run the neural network with each of the cases selected for testing.

Figure 9.10 shows the results for a network with the output node using the sigmoid magnitude function and another case with the output node using a step function, i.e., the output is limited to 0 or 1. Note that the first 20 images in the data set are the digit 0, with an output value of 1, and the rest are the digits 1 to 5, with an output value of 0. For the step function, the output is 1 for the first 20 sets and zero for all other sets, as desired. The sigmoid is similar, except that instead of being 0 after 20 sets, the output varies between +0.1 and -0.1. Between 20 and 120, it almost averages to zero, the same as the result from the step function. This shows that the activation functions are interpreting the data in a similar fashion.
Figure 9.10 Neural net results with sigmoid (left) and step (right) activation functions.
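If a hard classification is needed from the sigmoid output, thresholding at 0.5 reproduces the step-function behavior. A one-line sketch, assuming y holds the row of sigmoid outputs from the test run:

isDigit0 = y > 0.5;   % true for images classified as the digit 0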

9.5 Train a Network with Many Outputs

9.5.1 Problem

We want to build a neural net that can detect all ten digits separately.

9.5.2 Solution

Add nodes so that the output layer has ten nodes, each of which will output 1 when its corresponding digit (0–9) is input and 0 otherwise. Try the output nodes with different functions, such as logistic and step. Now that we have more digits, we will go back to having 30 nodes in the hidden layer.

9.5.3 How It Works

Our training data now consist of all 10 digits, with a target output vector that is all zeros except for a 1 in the slot for the correct digit. For example, the digit 1 will be represented as

[0 1 0 0 0 0 0 0 0 0]

The digit 3 would have a 1 in the fourth element; a sketch of building the full target matrix is given below. We follow the same procedure for training. We initialize the net (Figure 9.11), load the training set into the GUI, and specify the number of training runs for the back propagation.
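A sketch of building this target matrix for the multi-digit set; the DigitTrainingData script does the equivalent inside its loop when oneDigitMode is false:

digits     = 0:9;
nImagesPer = 20;
output     = zeros(10,length(digits)*nImagesPer);
kImage     = 1;
for j = 1:length(digits)
  for k = 1:nImagesPer
    output(j,kImage) = 1;   % 1 in the row for this digit, zeros elsewhere
    kImage = kImage + 1;
  end
end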

Figure 9.11 Net with multiple outputs.

The training results, shown in Figure 9.12, show that much of the learning is achieved in the first 3000 runs.
Figure 9.12 Training RMSE for a multiple-digit neural net.

The test data, in Figure 9.13, show that each set of digits (in sets of 20 in this case, for 200 total tests) is correctly identified.
Figure 9.13 Test results for a multiple-digit neural net.

Once you have saved a net that is working well to a MAT-file, you can call it with new data using the function NeuralNetMLFF.

>> data = load('NeuralNetMat');
>> network = data.DigitsStepNet;
>> y = NeuralNetMLFF( DigitTrainingTS.inputs(:,1), data.DigitsStepNet )
y =
     1
     0
     0
     0
     0
     0
     0
     0
     0
     0
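To turn the ten outputs into a single digit, take the index of the largest output; a short sketch, with the offset of one because MATLAB indices start at 1:

[~,k] = max(y);   % index of the most active output node
digit = k - 1     % digits run from 0 to 9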
Again, it is fun to play with visualization of the neural net weights, to gain insight into the problem, and our problem is small enough that we can do so with images. We can view a single set of 256 weights for one hidden neuron as a 16x16 image, and view the whole set with each neuron in its own row as before (Figure 9.14), to see the patterns emerging.
Figure 9.14 Multiple-digit neural net weights.

You can see parts of digits as mini-patterns in the individual node weights. Simply use imagesc with reshape like this:

>> figure;
>> imagesc( reshape(net.DigitsStepNet.layer(1).w(23,:),16,16) );
>> title('Weights to Hidden Node 23')
and see images as in Figure 9.15. These three nodes (chosen at random) show a 1, 2, and 3. We would expect the 30 nodes to each have “noisy” replicas of the digits.
Figure 9.15 Multiple-digit neural net weights for three individual hidden nodes.

9.6 Summary

This chapter has demonstrated neural learning to classify digits. An interesting extension to our tool would be the use of image data stores, rather than a matrix representation of the input data. Table 9.2 lists the functions and scripts included in the companion code.
Table 9.2 Chapter Code Listing

File                 Description
DigitTrainingData    Create a training set of digit images.
CreateDigitImage     Create a noisy image of a single digit.
Neuron               Model an individual neuron with multiple activation functions.
NeuralNetMLFF        Compute the output of a MLFF neural net.
NeuralNetTraining    Training with back propagation.
DrawNeuralNet        Display a neural net with multiple layers.
SaveTS               Save a training set MAT-file with index data.