Deeper Still: Convolutional Neural Networks
Two weeks ago, we began our machine learning study in earnest by constructing a full neural network. But that network was still quite simple by deep learning standards. In this article, we’re going to tackle a much more difficult problem: image recognition. Of course, we’ll still be using a well-known data set with well-known results, so this is only the tip of the iceberg. We’ll be using the MNIST data set, which classifies images of handwritten digits as the numbers 0-9. This problem is so well known that the folks at Tensor Flow refer to it as the “Hello World” of machine learning.
We’ll start this problem with a very similar approach to what we used with the Iris data set. We’ll make a fully-connected neural network with two layers, and then use the “Adam” optimizer. This will give us some decent results by our beginner standards. But MNIST is a well-known problem with a very large data set, so we’re going to hold ourselves to a higher standard of accuracy this time. This will force us to use some more advanced techniques. But to start, let’s examine what we need to change to adapt our Iris model to the MNIST problem. As with the last couple of weeks, the code for all this is on Github if you want to follow along.
Re-use and Recycle!
Generally, we can re-use most of the code we had with Iris, which is good news! We still have to make a few adjustments here and there though. First, we’ll use some different constants: mnistFeatures in place of irisFeatures, and mnistLabels instead of irisLabels. We’ll also bump up the size of our hidden layer and the number of samples we’ll draw on each iteration:
mnistFeatures :: Int64
mnistFeatures = 784

mnistLabels :: Int64
mnistLabels = 10

numHiddenUnits :: Int64
numHiddenUnits = 1024

sampleSize :: Int
sampleSize = 1000
We’ll also change our model to use Word8 as the result type instead of Int64:
data Model = Model
  { train :: TensorData Float
          -> TensorData Word8 -- Used to be Int64
          -> Session ()
  , errorRate :: TensorData Float
              -> TensorData Word8 -- Used to be Int64
              -> SummaryTensor
              -> Session (Float, ByteString)
  }
Now we have to change how we get our input data. Our data isn’t in CSV format this time. We’ll use helper functions from the Tensor Flow library to extract the images and labels:
import TensorFlow.Examples.MNIST.Parse (readMNISTSamples, readMNISTLabels)
…
runDigits :: FilePath -> FilePath -> FilePath -> FilePath -> IO ()
runDigits trainImageFile trainLabelFile testImageFile testLabelFile =
  withEventWriter eventsDir $ \eventWriter -> runSession $ do
    -- trainingImages, testImages :: [Vector Word8]
    trainingImages <- liftIO $ readMNISTSamples trainImageFile
    testImages <- liftIO $ readMNISTSamples testImageFile
    -- trainingLabels, testLabels :: [Word8]
    trainingLabels <- liftIO $ readMNISTLabels trainLabelFile
    testLabels <- liftIO $ readMNISTLabels testLabelFile
    -- trainingRecords, testRecords :: Vector (Vector Word8, Word8)
    let trainingRecords = fromList $ zip trainingImages trainingLabels
    let testRecords = fromList $ zip testImages testLabels
    ...
Our “input” type consists of vectors of Word8 elements, which represent the intensities of the individual pixels. Our “output” type is Word8, referring to the actual labels (0-9). We read the images and labels from separate files, then zip them together to pass to our processing functions. We’ll have to make a few changes to these processing functions for this data set. First, we have to generalize the type of our randomization function:
-- Used to be specific to IrisRecord
chooseRandomRecords :: Vector a -> IO (Vector a)
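The body itself can stay essentially the same as in the Iris version. For reference, here’s a minimal sketch of one possible implementation (it samples with replacement for simplicity, using System.Random; the sampleSize constant is defined above):

import qualified Data.Vector as V
import System.Random (randomRIO)
import Control.Monad (replicateM)

-- Draw sampleSize random records (with replacement, for simplicity).
chooseRandomRecords :: V.Vector a -> IO (V.Vector a)
chooseRandomRecords records = do
  let numRecords = V.length records
  chosenIndices <- replicateM sampleSize (randomRIO (0, numRecords - 1))
  return $ V.fromList (map (records V.!) chosenIndices)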
Next we have to write a new encoding function that will put our data into the TensorData format. This looks like our old version, except it deals with the new tuple type instead of the IrisRecord.
convertDigitRecordsToTensorData
  :: Vector (Vector Word8, Word8)
  -> (TensorData Float, TensorData Word8)
convertDigitRecordsToTensorData records = (input, output)
  where
    numRecords = Data.Vector.length records
    input = encodeTensorData [fromIntegral numRecords, mnistFeatures]
      (fromList $ concatMap recordToInputs records)
    output = encodeTensorData [fromIntegral numRecords] (snd <$> records)
    recordToInputs :: (Vector Word8, Word8) -> [Float]
    recordToInputs rec = fromIntegral <$> (toList . fst) rec
And then we just have to substitute our new functions and parameters in, and we’ll be able to run our digit trainer!
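For concreteness, here’s a rough sketch of what the heart of the substituted training loop looks like (error-rate logging and summary handling are elided; the structure carries over directly from the Iris version):

-- A rough sketch of the training loop with the new functions swapped in.
model <- build createModel
forM_ ([0..1000] :: [Int]) $ \_ -> do
  trainingSample <- liftIO $ chooseRandomRecords trainingRecords
  let (trainingInputs, trainingOutputs) =
        convertDigitRecordsToTensorData trainingSample
  (train model) trainingInputs trainingOutputs

Running the full version, we see our training error fall as we go: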
Current training error 89.8
Current training error 19.300001
Current training error 13.300001
Current training error 11.199999
Current training error 8.700001
Current training error 6.5999985
Current training error 6.999999
Current training error 5.199999
Current training error 4.400003
Current training error 5.000001
Current training error 2.3000002
test error 6.830001
So our accuracy is about 93.2%. This seems like an alright number. But imagine being a post office and having 6.8% of your mail sorted into the wrong ZIP code! (Sorting mail was the original use case of this data set.) So let’s see if we can do better.
Convolution and Max Pooling
Now, we could simply train our model longer, and that would tend to reduce our error rate. But we can also help ourselves by making our model more complex. The fundamental flaw in what we’ve got so far is that it doesn’t account for the 2D nature of the images, which means we’re losing a ton of useful information. So the first thing we’ll do is treat our images as 28x28 tensors instead of flat 784-element vectors. This way, our model can pick out specific areas that are significant for identifying the digit.
One thing we want to account for is that our image might not be in the center of the frame. To handle this, we’re going to apply convolution. When using convolution, we break the image into many different overlapping tiles. In our case, we’ll use strides of size 1 in every direction and a patch size of 5x5. This means we’ll center a 5x5 tile around each pixel in our image and come up with a score for it. That score tells us whether this part of the image contains any important information. We can represent this score as a vector with many features.
So with 2D convolution, we’ll be dealing with 4-dimensional tensors. The first dimension is the sample size. The next two dimensions are the shape of the image. The final dimension is the number of features in the “score” for each part of the image. Each original image starts out with a single feature per pixel: the actual intensity of that pixel! Then each layer of convolution acts as a mini neural network per pixel, producing as many features as we want.
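To make that “mini neural network per pixel” idea concrete, here’s a toy calculation in plain Haskell (purely illustrative; this is not how the Tensor Flow op is implemented). One output feature at one pixel is just a dot product of the surrounding 5x5 patch with a learned 5x5 weight patch, plus a bias:

-- One convolution score: the 5x5 patch around a pixel, dotted with
-- learned weights, plus a bias. A layer with 32 output features computes
-- 32 such scores at every pixel, each with its own weights and bias.
convScore :: [[Float]] -> [[Float]] -> Float -> Float
convScore patch weights bias =
  bias + sum (zipWith (*) (concat patch) (concat weights))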
Max pooling is a form of down-sampling. After our first convolution step, we’ll have scores on the 28x28 image. We’ll use 2x2 max-pooling, meaning we divide each image into 2x2 squares. Then we’ll make a new layer that is 14x14, using only the “best” score from each 2x2 box. This makes our model more efficient while keeping the most important information.
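Again as a toy illustration in plain Haskell (assuming the chunksOf helper from the split package), 2x2 max pooling keeps only the largest score in each 2x2 box:

import Data.List.Split (chunksOf) -- from the "split" package

-- 2x2 max pooling on a small grid, purely illustrative.
maxPool2x2 :: [[Float]] -> [[Float]]
maxPool2x2 rows =
  [ map maximum (zipWith (zipWith max) (chunksOf 2 r1) (chunksOf 2 r2))
  | [r1, r2] <- chunksOf 2 rows ]

For example, maxPool2x2 [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]] gives [[6.0,8.0],[14.0,16.0]].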
Implementing Convolutional Layers
We’ll do two rounds of convolution and max pooling. So we’ll make a function that creates a layer that performs these two steps. This will look a lot like our other neural network layer. We’ll take parameters for the size of the input and output channels of the layer, as well as the tensor itself. So our first step will be to create the weights and bias tensors using these parameters:
patchSize :: Int64
patchSize = 5

buildConvPoolLayer :: Int64 -> Int64 -> Tensor v Float -> Text
  -> Build (Variable Float, Variable Float, Tensor Build Float)
buildConvPoolLayer inputChannels outputChannels input layerName =
  withNameScope layerName $ do
    weights <- truncatedNormal (vector weightsShape)
      >>= initializedVariable
    bias <- truncatedNormal (vector [outputChannels]) >>= initializedVariable
    ...
  where
    weightsShape :: [Int64]
    weightsShape = [patchSize, patchSize, inputChannels, outputChannels]
Now we’ll want to call our convolution and max pooling functions. These are still a little rough around the edges (the Haskell library is still quite young). The C versions of these functions have many optional, named attributes. For the moment there don’t seem to be any wrappers that take normal Haskell values for these arguments. Instead, we’ll be using OpAttr values, which let us assign values to named attributes:
where
  ...
  convStridesAttr = opAttr "strides" .~ ([1,1,1,1] :: [Int64])
  poolStridesAttr = opAttr "strides" .~ ([1,2,2,1] :: [Int64])
  poolKSizeAttr = opAttr "ksize" .~ ([1,2,2,1] :: [Int64])
  paddingAttr = opAttr "padding" .~ ("SAME" :: ByteString)
  dataFormatAttr = opAttr "data_format" .~ ("NHWC" :: ByteString)
  convAttrs = convStridesAttr . paddingAttr . dataFormatAttr
  poolAttrs = poolKSizeAttr . poolStridesAttr . paddingAttr . dataFormatAttr
The strides argument for convolution refers to how much we shift the window each time; with strides of 1, we score a window centered on every pixel. For pooling, the ksize argument gives the size of the windows we pool over (2x2 in our case), while the strides argument shifts the window by that same amount so the windows don’t overlap. Now that we have our attributes, we can call the library functions conv2D' and maxPool'. These give us our resulting tensor. We also throw in a call to relu between these steps:
buildConvPoolLayer :: Int64 -> Int64 -> Tensor v Float -> Text
  -> Build (Variable Float, Variable Float, Tensor Build Float)
buildConvPoolLayer inputChannels outputChannels input layerName =
  withNameScope layerName $ do
    weights <- truncatedNormal (vector weightsShape)
      >>= initializedVariable
    bias <- truncatedNormal (vector [outputChannels]) >>= initializedVariable
    let conv = conv2D' convAttrs input (readValue weights)
          `add` readValue bias
    let results = maxPool' poolAttrs (relu conv)
    return (weights, bias, results)
  where
    ...
Modifying our Model
Now we’ll make a few updates to our model and we’ll be in good shape. First, we need to reshape our input data to be 4-dimensional. Then, we’ll apply the two convolution/pooling layers:
imageDimen :: Int32
imageDimen = 28

createModel :: Build Model
createModel = do
  let batchSize = -1 -- Allows variable-sized batches
  let conv1OutputChannels = 32
  let conv2OutputChannels = 64
  let denseInputSize = 7 * 7 * 64 :: Int32 -- 3136
  let numHiddenUnits = 1024

  inputs <- placeholder [batchSize, mnistFeatures]
  outputs <- placeholder [batchSize]

  let inputImage = reshape inputs (vector [batchSize, imageDimen, imageDimen, 1])

  (convWeights1, convBiases1, convResults1) <-
    buildConvPoolLayer 1 conv1OutputChannels inputImage "convLayer1"
  (convWeights2, convBiases2, convResults2) <-
    buildConvPoolLayer conv1OutputChannels conv2OutputChannels convResults1 "convLayer2"
Once we’re done with that, we’ll apply two fully-connected (dense) layers as we did before. Note we'll reshape our result from four dimensions back down to two:
let denseInput = reshape convResults2 (vector [batchSize, denseInputSize])
(denseWeights1, denseBiases1, denseResults1) <-
  buildNNLayer (fromIntegral denseInputSize) numHiddenUnits denseInput "denseLayer1"
let rectifiedDenseResults = relu denseResults1
(denseWeights2, denseBiases2, denseResults2) <-
  buildNNLayer numHiddenUnits mnistLabels rectifiedDenseResults "denseLayer2"
And after that we can treat the rest of the model the same. We’ll update the parameter names and add the new weights and biases to the params list that the model can change.
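As a sketch, the list of trainable parameters we hand to the optimizer would now look something like this (using the names from the snippets above; the optimizer call itself is unchanged from the Iris version):

-- All the trainable variables, now including the convolutional layers.
let params =
      [ convWeights1, convBiases1
      , convWeights2, convBiases2
      , denseWeights1, denseBiases1
      , denseWeights2, denseBiases2
      ]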
As a review, let’s look at the dimensions of each of the intermediate tensors here, so we can see the restrictions the different operations place on them. Each convolution step takes two four-dimensional tensors: the input and the weights. The final dimension of the input must match the third dimension of the weights, and the result keeps the input’s shape except that it swaps in the final dimension of the weights. Meanwhile, pooling with a 2x2 window and stride takes the resulting 4-dimensional tensor and halves each of the inner dimensions.
input: n x 784
inputImage: n x 28 x 28 x 1
convWeights1: 5 x 5 x 1 x 32
convBias1: 32
conv (first layer): n x 28 x 28 x 32
convResults1: n x 14 x 14 x 32
convWeights2: 5 x 5 x 32 x 64
conv (second layer): n x 14 x 14 x 64
convResults2: n x 7 x 7 x 64
denseInput: n x 3136
denseWeights1: 3136 x 1024
denseBias1: 1024
denseResults1: n x 1024
denseWeights2: 1024 x 10
denseBias2: 10
denseResults2: n x 10
So for each input, we’ll have a score for each of the 10 possible digits. We pick the greatest of these as the chosen label.
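In code, that choice is an argMax along the label dimension, just as in the Iris version. Roughly (the surrounding error-rate plumbing is unchanged):

-- Pick the label with the highest score along dimension 1 (the 10 digits).
actualOutput <- render $ cast $
  argMax denseResults2 (scalar (1 :: Int64))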
Results
We’ll run our model again, only this time we’ll use a smaller sample size (100 per training iteration). This allows us to train for more iterations (20000). This takes quite a while to train, but we get these results (printed every 1000 iterations).
Current training error 91.0
Current training error 6.0
Current training error 2.9999971
Current training error 2.9999971
Current training error 0.0
Current training error 0.0
Current training error 0.99999905
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
Current training error 0.0
test error 1.1799991
Not too bad! Once it got going, we saw very few training errors, though we still ended up with a model that was a tad overfit. The Tensor Flow MNIST expert tutorial suggests using a dropout factor to diminish the effects of overfitting, but this option isn’t available to us in Haskell yet. Still, we got close to 99% accuracy, which is a success for us!
And here’s what our final graph looks like. Notice the extra layers we added for convolution.
Conclusion
So that’s it for convolutional neural networks! Our goal was to adapt our previous neural network model to recognize digits. Not only did we pick a harder problem, but we also held ourselves to a higher standard of accuracy. We achieved this by using more advanced machine learning techniques. Convolution allowed us to use the full 2D nature of the image, and it checked for the digit no matter where it was in the frame. Max pooling made our algorithm more efficient while keeping the most important information.
If you’re itching to see what else Haskell can do with Tensor Flow, check out our Tensor Flow Guide. It’ll walk you through some of the trickier parts of getting the library working on your local machine. It will also go through the most important information you need to know about the types in this library.
If you’re new to Haskell, here are a couple more resources you can dig into before you try your hand at Tensor Flow. First, there’s our Getting Started Checklist. This will point you toward some helpful resources for learning the language. Next, you can check out our Stack mini-course so you can learn how to organize a project! If you want to use Tensor Flow in Haskell, Stack is a must!