Practicing Neural Networks with a Custom Loss Function

Author: shaneZhang  Category: Machine Learning in Practice  Published: 2018-12-24 14:53
#coding:utf-8
# Yogurt sales forecasting: if the predicted volume is higher than actual sales,
# the surplus costs us its production cost; if it is lower, we forgo the profit
# on the missed sales. In real life the cost of producing a carton of yogurt and
# the profit from selling one are usually not equal, so a custom loss function
# that fits this problem is needed.
# The custom loss is loss = sum over n of f(y_, y), where f is defined piecewise:
# f(y_, y) = PROFIT * (y_ - y)   if y <  y_
#            COST   * (y - y_)   if y >= y_
# That is, when the prediction y falls below the label y_, the loss is the profit
# times the shortfall (y_ - y); when y exceeds y_, the loss is the cost times the
# surplus (y - y_).
# In TensorFlow:
# loss = tf.reduce_sum(tf.where(tf.greater(y, y_), COST * (y - y_), PROFIT * (y_ - y)))
# With a production cost of 1 yuan and a sales profit of 9 yuan per carton, cost
# is lower than profit, so we want the predictions y to err on the high side.
# Train the network with this custom loss.

import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
SEED = 23455
COST = 1    # cost of producing one carton
PROFIT = 9  # profit from selling one carton

# Generate a synthetic dataset: 32 samples with two features each;
# the label is x1 + x2 plus uniform noise in [-0.05, 0.05).
rdm = np.random.RandomState(SEED)
X = rdm.rand(32, 2)
Y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in X]


# Define the network's input placeholders, parameter, and output.
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
w1 = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)  # a single linear layer: y = x @ w1

# Define the loss function and the backpropagation method.
# Under-prediction costs PROFIT (9) per unit while over-prediction costs COST (1),
# so the model learns to bias its predictions toward the high side.
loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_) * COST, (y_ - y) * PROFIT))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

# Create a session and train for STEPS iterations.
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 20000
    for i in range(STEPS):
        # Cycle through the 32 samples in batches of BATCH_SIZE.
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})
        if i % 500 == 0:
            print("After %d training steps,w1 is: " % i)
            print(sess.run(w1))
    print("Final w1 is:\n", sess.run(w1))
Running the training script above produces the following output:
After 0 training steps,w1 is: 
[[-0.762993 ]
 [ 1.5095658]]
After 500 training steps,w1 is: 
[[1.0235443]
 [1.0463386]]
After 1000 training steps,w1 is: 
[[1.0174844]
 [1.0406483]]
After 1500 training steps,w1 is: 
[[1.0211805]
 [1.0472497]]
After 2000 training steps,w1 is: 
[[1.0179386]
 [1.0412899]]
After 2500 training steps,w1 is: 
[[1.0205938]
 [1.0390677]]
After 3000 training steps,w1 is: 
[[1.0242898]
 [1.0456691]]
After 3500 training steps,w1 is: 
[[1.01823  ]
 [1.0399789]]
After 4000 training steps,w1 is: 
[[1.021926 ]
 [1.0465802]]
After 4500 training steps,w1 is: 
[[1.0245812]
 [1.044358 ]]
After 5000 training steps,w1 is: 
[[1.0185213]
 [1.0386678]]
After 5500 training steps,w1 is: 
[[1.0245652]
 [1.0446368]]
After 6000 training steps,w1 is: 
[[1.0185053]
 [1.0389466]]
After 6500 training steps,w1 is: 
[[1.0222014]
 [1.045548 ]]
After 7000 training steps,w1 is: 
[[1.0161415]
 [1.0398577]]
After 7500 training steps,w1 is: 
[[1.0198376]
 [1.0464591]]
After 8000 training steps,w1 is: 
[[1.0224928]
 [1.0442369]]
After 8500 training steps,w1 is: 
[[1.0174738]
 [1.0473702]]
After 9000 training steps,w1 is: 
[[1.0222716]
 [1.0383747]]
After 9500 training steps,w1 is: 
[[1.0172527]
 [1.041508 ]]
After 10000 training steps,w1 is: 
[[1.0199078]
 [1.0392858]]
After 10500 training steps,w1 is: 
[[1.0236039]
 [1.0458871]]
After 11000 training steps,w1 is: 
[[1.017544 ]
 [1.0401969]]
After 11500 training steps,w1 is: 
[[1.0212401]
 [1.0467982]]
After 12000 training steps,w1 is: 
[[1.0238953]
 [1.044576 ]]
After 12500 training steps,w1 is: 
[[1.0178354]
 [1.0388858]]
After 13000 training steps,w1 is: 
[[1.0215315]
 [1.0454872]]
After 13500 training steps,w1 is: 
[[1.0154716]
 [1.039797 ]]
After 14000 training steps,w1 is: 
[[1.0191677]
 [1.0463983]]
After 14500 training steps,w1 is: 
[[1.0162914]
 [1.0427582]]
After 15000 training steps,w1 is: 
[[1.0189465]
 [1.040536 ]]
After 15500 training steps,w1 is: 
[[1.0216017]
 [1.0383139]]
After 16000 training steps,w1 is: 
[[1.0252978]
 [1.0449152]]
After 16500 training steps,w1 is: 
[[1.0192379]
 [1.039225 ]]
After 17000 training steps,w1 is: 
[[1.022934 ]
 [1.0458263]]
After 17500 training steps,w1 is: 
[[1.0168741]
 [1.0401361]]
After 18000 training steps,w1 is: 
[[1.0205702]
 [1.0467374]]
After 18500 training steps,w1 is: 
[[1.0232253]
 [1.0445153]]
After 19000 training steps,w1 is: 
[[1.0171654]
 [1.038825 ]]
After 19500 training steps,w1 is: 
[[1.0208615]
 [1.0454264]]
Final w1 is:
[[1.020171 ]
 [1.0425103]]

As the output shows, the network's final parameters are w1 ≈ [1.02, 1.04], so the fitted sales forecast is y = 1.02*x1 + 1.04*x2. Since the data were generated around y = x1 + x2, both weights settle above 1: with this custom loss the model predicts higher values than it would under mean squared error, which better matches the actual business requirement.
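
For comparison, only the loss definition needs to change to train the same network with mean squared error. The following is a minimal sketch of that swap (the rest of the script is unchanged); because MSE penalizes over- and under-prediction symmetrically, one would expect the weights to converge near [1, 1] rather than above it:

# Mean-squared-error variant: replace the custom loss definition with
loss = tf.reduce_mean(tf.square(y_ - y))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)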

