The code is from the PyTorch MNIST example:
https://github.com/pytorch/examples/blob/master/mnist/main.py
I have just added some annotations for a better understanding of the whole process.
First, let's have a look at the things we need to define before the actual training:
- define some default parameter settings:
  - batch size
  - test batch size
  - number of epochs
  - learning rate
  - CUDA setting
  - seed setting
  - momentum setting
- prepare the data
  - convert into training/testing tensors and normalize
  - group into batches
- the network model/architecture
  - layer definitions (e.g. conv, fully connected)
  - forward-pass connections (e.g. ReLU, max pooling)
- the settings for training
  - selection of the loss function
  - backpropagation
  - updating the parameters
  - showing the batch progress
- the settings for testing
  - getting predictions from the model
  - computing the loss
  - showing the number of correct predictions / accuracy
  - showing the testing progress if you have a lot of test data
- train the model over epochs
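The network part of the outline can be sketched like this. The layer sizes below follow the linked example as it stood when I read it; if the repository has changed since, treat the exact sizes as an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # layer definitions: two conv layers, then two fully connected layers
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1x28x28 -> 10x24x24
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 10x12x12 -> 20x8x8
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)  # 320 = 20 * 4 * 4 after pooling
        self.fc2 = nn.Linear(50, 10)   # 10 output classes (digits 0-9)

    def forward(self, x):
        # forward connections: ReLU + 2x2 max pooling after each conv
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)  # flatten for the fully connected layers
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

# quick shape check with a fake batch of two 28x28 grayscale images
net = Net()
net.eval()
out = net(torch.randn(2, 1, 28, 28))
```

Feeding a batch of shape `(N, 1, 28, 28)` gives log-probabilities of shape `(N, 10)`, one column per digit class.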
The code is as follows:
That’s it!