I am trying to implement a linear regression model in TensorFlow, with additional constraints (coming from the domain) that the W and b terms must be non-negative.
I believe there are a couple of ways to do this.
- We can modify the cost function to penalize negative weights (Lagrangian approach). [See: TensorFlow - best way to implement weight constraints]
- We can compute the gradients ourselves and project the updated parameters onto [0, infinity) (projected gradient approach).
Approach 1: Lagrangian
When I tried the first approach, I would often end up with a negative b.
I had modified the cost function from:
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
to:
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
nn_w = tf.reduce_sum(tf.abs(W) - W)
nn_b = tf.reduce_sum(tf.abs(b) - b)
constraint = 100.0*nn_w + 100*nn_b
cost_with_constraint = cost + constraint
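As a quick sanity check of what these terms do: |w| - w vanishes for w >= 0 and equals 2|w| for w < 0, so the penalty only activates on negative parameters. For example:

import numpy as np
for w in [2.0, 0.0, -3.0]:
    print(w, np.abs(w) - w)   # penalty contribution: 0.0, 0.0, 6.0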
Keeping the coefficients of nn_w and nn_b very high leads to instability and very high cost.
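My understanding of why this happens: for b < 0 the penalty term has a constant slope,

d/db [c*(|b| - b)] = c*(sign(b) - 1) = -2c   for b < 0 (and 0 for b > 0),

so the penalty contributes a fixed upward kick of 2*c*learning_rate to every step, no matter how slightly negative b is; with c = 100 and learning_rate = 0.0001 that is a constant jump of 0.02 per sample, which makes b oscillate around zero rather than settle.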
Here is the complete code.
import numpy as np
import tensorflow as tf
n_samples = 50
train_X = np.linspace(1, 50, n_samples)
train_Y = 10*train_X + 6 + 40*np.random.randn(n_samples)
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Gradient descent
learning_rate=0.0001
# Initializing the variables
init = tf.global_variables_initializer()
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
nn_w = tf.reduce_sum(tf.abs(W) - W)
nn_b = tf.reduce_sum(tf.abs(b) - b)
constraint = 1.0*nn_w + 100*nn_b
cost_with_constraint = cost + constraint
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_with_constraint)
training_epochs=200
with tf.Session() as sess:
    sess.run(init)
    # Fit all training data
    cost_array = np.zeros(training_epochs)
    W_array = np.zeros(training_epochs)
    b_array = np.zeros(training_epochs)
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        W_array[epoch] = sess.run(W)
        b_array[epoch] = sess.run(b)
        cost_array[epoch] = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
The following is the mean of b across 10 different runs:
0 -1.101268
1 0.169225
2 0.158363
3 0.706270
4 -0.371205
5 0.244424
6 1.312516
7 -0.069609
8 -1.032187
9 -1.711668
Clearly, the first approach is not optimal. Further, there is a lot of art involved in choosing the coefficients of the penalty terms.
Approach 2: Projected gradient
I then turned to the second approach, which should guarantee non-negativity by construction.
gr = tf.gradients(cost, [W, b])
We manually compute the gradients and update W and b ourselves:
# Build the update ops once, outside the loop, so the graph does not grow
W_new = tf.placeholder("float")
b_new = tf.placeholder("float")
update = tf.group(W.assign(W_new), b.assign(b_new))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            W_del, b_del = sess.run(gr, feed_dict={X: x, Y: y})
            # Take a gradient step, then project back onto [0, infinity)
            w_val = max(0.0, sess.run(W) - learning_rate * W_del)
            b_val = max(0.0, sess.run(b) - learning_rate * b_del)
            sess.run(update, feed_dict={W_new: w_val, b_new: b_val})
This approach seems to be very slow.
I am wondering if there is a better way to run the second approach, or guarantee the results with the first approach. Can we somehow allow the optimizer to ensure that the learnt weights are non-negative?
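One in-graph variant of the projection idea that I have sketched (untested on my side; it relies only on standard TF 1.x ops such as tf.control_dependencies, tf.group, and Variable.assign) clips the variables back onto [0, infinity) right after each optimizer step, so the whole update is a single session call:

# Do the projection inside the graph: one projected-gradient step per run call
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.control_dependencies([train_step]):
    # tf.maximum(0., x) is the Euclidean projection onto [0, infinity);
    # these assigns run only after the gradient step has been applied
    projected_step = tf.group(W.assign(tf.maximum(0., W)),
                              b.assign(tf.maximum(0., b)))

In the training loop, sess.run(projected_step, feed_dict={X: x, Y: y}) then replaces the earlier sess.run(optimizer, ...) call, with no penalty coefficients to tune.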
Use exp(b) instead of b in the prediction ... then the effective bias will always be >0. – Wakashan

W and b should be non-negative. Do you then recommend that we learn the log of both W and b? Doesn't that change Y = W.X + b? – Sedulous
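For reference, here is my reading of the exp(b) suggestion as a sketch (my interpretation, not a confirmed answer): keep unconstrained raw variables and exponentiate them inside the model, so the prediction is still a straight line Y = W_eff*X + b_eff, only with W_eff and b_eff positive by construction.

# Unconstrained raw parameters; the model only ever sees exp(raw) > 0
W_raw = tf.Variable(np.random.randn(), name="weight_raw")
b_raw = tf.Variable(np.random.randn(), name="bias_raw")
W_eff = tf.exp(W_raw)   # effective slope, always > 0
b_eff = tf.exp(b_raw)   # effective bias, always > 0
pred = tf.add(tf.multiply(X, W_eff), b_eff)
# Any unconstrained optimizer can now minimize the plain MSE cost;
# non-negativity holds by construction rather than by penalty or projection.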