The relationship between neural network depth and accuracy


#1

Hi,
I am learning to design a simple neural network to identify the 28x28-pixel greyscale images in the MNIST dataset.
I find that recognition rate is not directly proportional to network depth: my four-layer network's recognition rate is lower than my one-layer network's.

One-layer neural network recognition rate: 92%
Two-layer neural network recognition rate: 94%
Four-layer neural network recognition rate: 91.8%

Can someone help me analyze it?



thank you very much!


#2

What software are you using for this?


#3

Anaconda


#4

What is your network for the two layer and the four layer case?


#6

I’m learning from a handwritten-digit (0–9) recognition example. The example shows that a 1-layer neural network has a 92% recognition rate and a 2-layer neural network has a 97% recognition rate, so I guessed that a four-layer network would score even higher.
But in my actual tests, the 1-layer network has a 92% recognition rate, the 2-layer network 94%, and the 4-layer network 91.8%, which is lower than the 1-layer network. I could not find the cause.

example URL:
https://codelabs.developers.google.com/codelabs/cloud-tensorflow-mnist/#6


#7

How much training did you do in each case? It may be that four layers simply require more training; there is no reason to believe each case (1, 2, 4 layers) responds to training the same way. Try training the 4-layer network for twice as long.


#8

This is normal behaviour for neural networks. Adding layers does not guarantee improvements in performance.

There are two main likely culprits in your case:

  • Over-fitting. Adding more layers makes the neural network more powerful and expressive, but this is double-edged: the network also becomes more sensitive to noise in the training data and can fit to that noise instead of the underlying signal.

  • Vanishing gradients. In deeper networks the training signal may not propagate well back to the early layers, so training stalls before it is complete. A few different approaches can help with this, such as using adaptive optimisers (e.g. Adam).
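To make the vanishing-gradient point concrete, here is a small NumPy sketch (my own illustration, not code from the thread): backpropagation multiplies one activation derivative per layer, and the sigmoid's derivative never exceeds 0.25, so the gradient factor reaching the first layer shrinks geometrically with depth, while ReLU passes a factor of 1 for active units.

```python
import numpy as np

def sigmoid_grad(x):
    """Derivative of the logistic sigmoid; its maximum is 0.25 at x = 0."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return float(x > 0)

# Backprop multiplies one activation derivative per layer, so the best
# case for sigmoid is 0.25 per layer, versus 1.0 for an active ReLU unit.
for depth in (1, 2, 4):
    print(f"{depth} layers: sigmoid factor <= {sigmoid_grad(0.0) ** depth:.4f}, "
          f"ReLU factor = {relu_grad(1.0) ** depth:.1f}")
```

This is only the best case for sigmoid; away from x = 0 its derivative is even smaller, so real sigmoid networks degrade faster than this bound suggests.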

To improve your score beyond 97% on the MNIST dataset, you should probably look into convolutional neural networks (CNNs) as a next step. They avoid over-fitting by reducing the number of parameters, taking advantage of the spatial repetition in most image problems. CNNs can typically score over 99% on the MNIST test set. At that point, MNIST is pretty much done (state of the art is ~99.8% in test with some CNN variants), and you need to move on to harder challenges (e.g. ImageNet classification) in order to learn even better machine-vision approaches.
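As a rough NumPy illustration of why CNNs have fewer parameters (my own sketch, with assumed layer sizes): a dense layer from a 28x28 image to 32 units needs a weight per pixel per unit, while 32 convolution filters reuse the same 3x3 kernels at every image position.

```python
import numpy as np

# Parameter counts for a 28x28 greyscale input and 32 output feature maps.
dense_params = 28 * 28 * 32 + 32   # fully connected: one weight per pixel per unit, plus biases
conv_params = 3 * 3 * 32 + 32      # 32 shared 3x3 kernels, plus biases
print(dense_params, conv_params)   # 25120 320

def conv2d_valid(image, kernel):
    """Minimal 'valid' 2D convolution: slide one kernel over the image,
    reusing the same weights at every position (weight sharing)."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

feature_map = conv2d_valid(np.ones((28, 28)), np.ones((3, 3)))
print(feature_map.shape)  # (26, 26)
```

The conv layer's parameter count is also independent of the image size, which is part of why CNNs generalise better on image data.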


#9

ResNets can keep improving as layers are added because their skip connections let every layer see the output of much earlier layers (and ultimately the original data).


#10

Hi edpell,
For the above 3 cases, we trained for 20,000 iterations. We see that after about 12,000 iterations the accuracy curve becomes more and more flat, and further increasing the number of training iterations does not help.


thank you!


#11

Hi NeilSlater,
I am studying deep learning, so I want to try all kinds of neural networks, and I will continue on to study convolutional neural networks.

Is there a way to evaluate the accuracy of different network structures before designing them? At the moment I can only find out which structure is better by completing the design and testing it.

I am an SoC engineer. I can find problems and bottlenecks in a system through simulation, and then improve performance by eliminating those bottlenecks. Is there a corresponding method for finding architectural bottlenecks in a neural network?

thank you very much!:grinning:


#12

Is there a way to evaluate the accuracy of different network structures before designing them? At the moment I can only find out which structure is better by completing the design and testing it.

No, sadly there is no such design-stage evaluation for neural networks. Through experience (your own, and second-hand through reading the literature) you will get a sense of what works well for some problems, but experimentation is the standard approach to finding the best model.

There are ways to evaluate machine-learning models and neural networks to establish likely causes of problems during and after training. For instance, you can compare the training cost function (or another metric) with the same measure on a held-out data set - often called the cross-validation set - to check for over-fitting. Over-fitting is likely to be a problem when the training-set results are much better than the cross-validation results. If you do find over-fitting, there are a variety of regularisation techniques that can improve the performance of the final trained model.
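A minimal sketch of that check (the accuracy numbers below are hypothetical, not the poster's actual training curves): record accuracy on the training set and on the held-out set at each checkpoint, and treat a gap that keeps widening while held-out accuracy stalls as a sign of over-fitting.

```python
def overfitting_gap(train_acc, val_acc):
    """Per-checkpoint gap between training and held-out accuracy.
    A gap that keeps widening while val_acc plateaus suggests over-fitting."""
    return [round(t - v, 4) for t, v in zip(train_acc, val_acc)]

# Hypothetical accuracies over four checkpoints of a training run.
train_acc = [0.90, 0.95, 0.98, 0.995]
val_acc = [0.89, 0.92, 0.92, 0.915]

gaps = overfitting_gap(train_acc, val_acc)
print(gaps)  # [0.01, 0.03, 0.06, 0.08]
```

Here training accuracy keeps climbing while held-out accuracy has plateaued, so the widening gap is the over-fitting signal to watch for.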

Any good course on neural networks will teach you cross-validation and how to monitor and attempt to address over-fitting.


#13

Hi NeilSlater
Thanks for your response. tf.nn.sigmoid does not perform well for the deeper network, but tf.nn.relu does: accuracy increased from 91.8% to 97.6%.




#14

Consider the neural network as defined with a fuzzy logic.

Ex. The human interface. Parse a sentence, each word maps to a set of known objects, processes, methods, metaphors, similes, etc… We define the object.

The sentence defines the connection among these objects. A question defines a solution set, or path through the universal set of objects, or a “need” for more information.

Thus the reply, is either a set of suggestions to the query, a method to define a solution, or …

Since this is a neural net, and we have all our knowledge within a properly defined network, it shall then be able to learn when given correct objects for defining Truth or Humor, or evil, or any reply.

Thus the learning algorithm is a set of objects defined with “purpose”, purpose being a weighting function.

For example, given charge, all matter may be computed, and all responses to I/O with that matter can be computed. I know this is incomplete; however, with a well-defined group activity and properly designed machines, simplification is a known.