When training a neural network, we often want to plot how the loss function changes over time. The training log records the loss value at each iteration, so we first save that log as a log.txt file and then use the file to draw the curve.
1. First, generate a log file.
import mxnet as mx
import numpy as np
import os
import logging

logging.getLogger().setLevel(logging.DEBUG)
# Save the training log to log.txt in the current working directory
logging.basicConfig(filename=os.path.join(os.getcwd(), 'log.txt'), level=logging.DEBUG)

# Training data
train_data = np.random.uniform(0, 1, [100, 2])
train_label = np.array([train_data[i][0] + 2 * train_data[i][1] for i in range(100)])
batch_size = 1

# Evaluation data
eval_data = np.array([[7, 2], [6, 10], [12, 2]])
eval_label = np.array([11, 26, 16])

train_iter = mx.io.NDArrayIter(train_data, train_label, batch_size,
                               shuffle=True, label_name='lin_reg_label')
eval_iter = mx.io.NDArrayIter(eval_data, eval_label, batch_size, shuffle=False)

# Network: a single fully connected layer with a linear regression output
X = mx.sym.Variable('data')
Y = mx.sym.Variable('lin_reg_label')
fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")

model = mx.mod.Module(
    symbol=lro,
    data_names=['data'],
    label_names=['lin_reg_label']  # network structure
)

# Train for 20 epochs; each epoch's Train-mse and Validation-mse are written to the log
model.fit(train_iter, eval_iter,
          optimizer_params={'learning_rate': 0.005, 'momentum': 0.9},
          num_epoch=20,
          eval_metric='mse')

model.predict(eval_iter).asnumpy()
metric = mx.metric.MSE()
model.score(eval_iter, metric)
In the code above, the line logging.basicConfig(filename=os.path.join(os.getcwd(), 'log.txt'), level=logging.DEBUG) is what redirects the log output into the log.txt file.
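To make the mechanism concrete, here is a minimal sketch that is independent of MXNet (the file name demo.txt is only for illustration): once basicConfig is given a filename, any message sent to the root logger, which is also where MXNet reports its per-epoch metrics by default, is written to that file instead of the console.

import logging
import os

# Minimal sketch: basicConfig with a filename attaches a FileHandler to the
# root logger, so every record at or above the given level is appended to
# that file rather than printed to the console.
logging.basicConfig(filename=os.path.join(os.getcwd(), 'demo.txt'),
                    level=logging.DEBUG)

logging.info('Epoch[0] Train-mse=0.470638')  # goes into demo.txt

with open('demo.txt') as f:
    print(f.read())  # -> INFO:root:Epoch[0] Train-mse=0.470638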
2. The resulting log.txt file looks like this.
INFO:root:Epoch[0] Train-mse=0.470638
INFO:root:Epoch[0] Time cost=0.047
INFO:root:Epoch[0] Validation-mse=73.642301
INFO:root:Epoch[1] Train-mse=0.082987
INFO:root:Epoch[1] Time cost=0.047
INFO:root:Epoch[1] Validation-mse=41.625072
INFO:root:Epoch[2] Train-mse=0.044817
INFO:root:Epoch[2] Time cost=0.063
INFO:root:Epoch[2] Validation-mse=23.743375
INFO:root:Epoch[3] Train-mse=0.024459
INFO:root:Epoch[3] Time cost=0.063
INFO:root:Epoch[3] Validation-mse=13.511120
INFO:root:Epoch[4] Train-mse=0.013431
INFO:root:Epoch[4] Time cost=0.063
INFO:root:Epoch[4] Validation-mse=7.670062
INFO:root:Epoch[5] Train-mse=0.007408
INFO:root:Epoch[5] Time cost=0.063
INFO:root:Epoch[5] Validation-mse=4.344374
INFO:root:Epoch[6] Train-mse=0.004099
INFO:root:Epoch[6] Time cost=0.063
INFO:root:Epoch[6] Validation-mse=2.455608
INFO:root:Epoch[7] Train-mse=0.002274
INFO:root:Epoch[7] Time cost=0.062
INFO:root:Epoch[7] Validation-mse=1.385449
INFO:root:Epoch[8] Train-mse=0.001263
INFO:root:Epoch[8] Time cost=0.063
INFO:root:Epoch[8] Validation-mse=0.780387
INFO:root:Epoch[9] Train-mse=0.000703
INFO:root:Epoch[9] Time cost=0.063
INFO:root:Epoch[9] Validation-mse=0.438943
INFO:root:Epoch[10] Train-mse=0.000391
INFO:root:Epoch[10] Time cost=0.125
INFO:root:Epoch[10] Validation-mse=0.246581
INFO:root:Epoch[11] Train-mse=0.000218
INFO:root:Epoch[11] Time cost=0.047
INFO:root:Epoch[11] Validation-mse=0.138368
INFO:root:Epoch[12] Train-mse=0.000121
INFO:root:Epoch[12] Time cost=0.047
INFO:root:Epoch[12] Validation-mse=0.077573
INFO:root:Epoch[13] Train-mse=0.000068
INFO:root:Epoch[13] Time cost=0.063
INFO:root:Epoch[13] Validation-mse=0.043454
INFO:root:Epoch[14] Train-mse=0.000038
INFO:root:Epoch[14] Time cost=0.063
INFO:root:Epoch[14] Validation-mse=0.024325
INFO:root:Epoch[15] Train-mse=0.000021
INFO:root:Epoch[15] Time cost=0.063
INFO:root:Epoch[15] Validation-mse=0.013609
INFO:root:Epoch[16] Train-mse=0.000012
INFO:root:Epoch[16] Time cost=0.063
INFO:root:Epoch[16] Validation-mse=0.007610
INFO:root:Epoch[17] Train-mse=0.000007
INFO:root:Epoch[17] Time cost=0.063
INFO:root:Epoch[17] Validation-mse=0.004253
INFO:root:Epoch[18] Train-mse=0.000004
INFO:root:Epoch[18] Time cost=0.063
INFO:root:Epoch[18] Validation-mse=0.002376
INFO:root:Epoch[19] Train-mse=0.000002
INFO:root:Epoch[19] Time cost=0.063
INFO:root:Epoch[19] Validation-mse=0.001327
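With these lines in hand, drawing the curve is just a matter of extracting the numbers. Below is a minimal sketch of one way log.txt could be parsed and plotted; it assumes matplotlib is installed, and the regular expressions simply match the Train-mse= and Validation-mse= patterns visible above.

import re
import matplotlib.pyplot as plt

train_mse, val_mse = [], []
with open('log.txt') as f:
    for line in f:
        # Pick out the training and validation MSE values; "Time cost" lines are skipped
        m = re.search(r'Train-mse=([\d.]+)', line)
        if m:
            train_mse.append(float(m.group(1)))
        m = re.search(r'Validation-mse=([\d.]+)', line)
        if m:
            val_mse.append(float(m.group(1)))

epochs = range(len(train_mse))
plt.plot(epochs, train_mse, label='Train MSE')
plt.plot(epochs, val_mse, label='Validation MSE')
plt.xlabel('Epoch')
plt.ylabel('MSE')
plt.legend()
plt.show()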