TensorFlow for classification¶
How to use TensorFlow to build a simple classifier¶
- Getting started with TensorFlow
- Variables, Placeholders, matmul, zeros, session
- tf.train.GradientDescentOptimizer
Introduction¶
import pandas as pd
import numpy as np
import tensorflow as tf
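Note: this notebook uses the TensorFlow 1.x graph API (tf.placeholder, tf.Session). If you're running TensorFlow 2, one possible workaround (an assumption on my part, not part of the original notebook) is to import the v1 compatibility module instead:
# Possible TF2 compatibility shim - assumes you're running TensorFlow 2
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()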
Step 1 - Load our data¶
We need to load and clean up our data. The data.csv file looks like this:
index,area,bathrooms,price,sq_price
0,2104.0,3.0,399900.0,190.066539924
1,1600.0,3.0,329900.0,206.1875
2,2400.0,3.0,369000.0,153.75
3,1416.0,2.0,232000.0,163.84180791
4,3000.0,4.0,539900.0,179.966666667
5,1985.0,4.0,299900.0,151.083123426
6,1534.0,3.0,314900.0,205.280312907
7,1427.0,3.0,198999.0,139.452697968
8,1380.0,3.0,212000.0,153.623188406
9,1494.0,3.0,242500.0,162.315930388
10,1940.0,4.0,239999.0,123.710824742
11,2000.0,3.0,347000.0,173.5
12,1890.0,3.0,329999.0,174.602645503
13,4478.0,5.0,699900.0,156.297454221
14,1268.0,3.0,259900.0,204.968454259
15,2300.0,4.0,449900.0,195.608695652
16,1320.0,2.0,299900.0,227.196969697
17,1236.0,3.0,199900.0,161.731391586
18,2609.0,4.0,499998.0,191.643541587
19,3031.0,4.0,599000.0,197.624546354
20,1767.0,3.0,252900.0,143.123938879
21,1888.0,2.0,255000.0,135.063559322
22,1604.0,3.0,242900.0,151.433915212
23,1962.0,4.0,259900.0,132.46687054
24,3890.0,3.0,573900.0,147.532133676
25,1100.0,3.0,249900.0,227.181818182
26,1458.0,3.0,464500.0,318.587105624
27,2526.0,3.0,469000.0,185.669041964
28,2200.0,3.0,475000.0,215.909090909
29,2637.0,3.0,299900.0,113.727720895
30,1839.0,2.0,349900.0,190.266449157
31,1000.0,1.0,169900.0,169.9
32,2040.0,4.0,314900.0,154.362745098
33,3137.0,3.0,579900.0,184.858144724
34,1811.0,4.0,285900.0,157.868580895
35,1437.0,3.0,249900.0,173.903966597
36,1239.0,3.0,229900.0,185.552865214
37,2132.0,4.0,345000.0,161.81988743
38,4215.0,4.0,549000.0,130.24911032
39,2162.0,4.0,287000.0,132.747456059
40,1664.0,2.0,368500.0,221.454326923
41,2238.0,3.0,329900.0,147.408400357
42,2567.0,4.0,314000.0,122.321776393
43,1200.0,3.0,299000.0,249.166666667
44,852.0,2.0,179900.0,211.150234742
45,1852.0,4.0,299900.0,161.933045356
46,1203.0,3.0,239500.0,199.085619285
# Step 1 - Load our data
dataframe = pd.read_csv('data.csv') # dataframe object
# Remove features we don't care about
dataframe = dataframe.drop(['index', 'price', 'sq_price'], axis=1)
# We only use the first 10 rows
# We'll use area and bathrooms as our features - you could pick others
dataframe = dataframe[0:10]
dataframe
Step 2 - Add labels¶
We need to add labels to our data so that training has a target (0 = bad buy, 1 = good buy).
y1 holds our original labels; y2 is the negation of y1.
# Step 2 - Add labels
# 1 = Good buy
# 0 = Bad buy
dataframe.loc[:, 'y1'] = [1, 1, 1, 0, 0, 1, 0, 1, 1, 1]
# y2 is the negation of y1 - True for the houses we don't like
dataframe.loc[:, 'y2'] = dataframe['y1'] == 0
# turn True/False values into 1s and 0s
dataframe.loc[:, 'y2'] = dataframe['y2'].astype(int)
# Print our dataframe
dataframe
Step 3 - Prepare data for tensorflow¶
We need to convert our feature and label columns from the dataframe into matrices that TensorFlow can consume.
Tensors generalize vectors and matrices to any number of dimensions (see the short sketch below):
- a vector is a list of numbers (a 1D tensor)
- a matrix is a list of lists of numbers (a 2D tensor)
- a list of lists of lists of numbers is a 3D tensor
- ...
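As a quick illustration (a sketch added here, not part of the original notebook), those ranks look like this as numpy arrays:
# Illustrative sketch of tensor ranks
vector = np.array([1.0, 2.0, 3.0])           # 1D tensor, shape (3,)
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])  # 2D tensor, shape (2, 2)
cube = np.zeros((2, 2, 2))                   # 3D tensor, shape (2, 2, 2)
print(vector.ndim, matrix.ndim, cube.ndim)   # prints: 1 2 3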
# Step 3 - Prepare data for tensorflow
# convert features to an input tensor
# (.values replaces DataFrame.as_matrix(), which was removed from pandas)
inputX = dataframe.loc[:, ['area', 'bathrooms']].values
# convert labels to an input tensor
inputY = dataframe.loc[:, ['y1', 'y2']].values
inputX
inputY
Step 4 - Set our hyperparameters¶
# Step 4 - Set our hyperparameters
learning_rate = 0.000001
training_epochs = 2000
display_step = 250
n_samples = inputY.shape[0] # number of training examples (inputY.size would count every element)
Step 5 - Create our computation graph/neural network¶
We need to set up our variables and placeholders. `x` and `y_` are our inputs - placeholders we feed data into. `W` and `b` are variables holding our weight matrix and bias vector, with their respective dimensions. `y` is our final output, which our cost function compares against the labels to compute the mean squared error.
# Step 5 - Create our computation graph/neural network
# for our feature input tensor, None means any number of examples
# placeholders are gateways for data into our computation graph
x = tf.placeholder(tf.float32, [None, 2]) # 2 because we have 2 features (a 2d matrix)
# create weights
# 2x2 float matrix that we update through the training process
# variables in tf hold and update parameters, e.g: weights (they are in memory buffers containing tensors)
W = tf.Variable(tf.zeros([2,2]))
# add biases - biases help our model fit better
# b in the y = mx + b
# The bias shifts the line to best fit our data
b = tf.Variable(tf.zeros([2]))
# Multiply our weights by our inputs, first calculation
# weights govern how data flows through our computation graph
# multiply input by weights and add biases
y_values = tf.add(tf.matmul(x, W), b)
# apply softmax (our "activation function") to value we just created - softmax normalizes our value
# it takes our value and converts it to a probability that we can then feed to our output
y = tf.nn.softmax(y_values)
# feed in a matrix of labels
# We already have 'x' for our features; this placeholder takes our y labels
y_ = tf.placeholder(tf.float32, [None, 2])
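To see what softmax does numerically, here is a small hand-rolled sketch (an illustration added here, not part of the original notebook):
# Hand-rolled softmax, for illustration only
def softmax(v):
    e = np.exp(v - np.max(v))  # subtract the max for numerical stability
    return e / e.sum()

softmax(np.array([2.0, 1.0]))  # array([0.73105858, 0.26894142]) - sums to 1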
Step 6 - Perform training¶
We need to specify a cost function for the gradient descent step to minimize. We're using mean squared error.
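Written out, with $n$ = `n_samples`, $y$ our softmax output, and $y'$ the labels fed into `y_` (the sum runs over every element of the two output columns):

$$\text{cost} = \frac{1}{2n} \sum_i \left( y'_i - y_i \right)^2$$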
# Step 6 - Perform training
# Create our cost function: mean squared error
# Compute the squared difference between our predictions (y) and our labels (y_),
# then average that error over all samples
# reduce_sum computes the sum of elements across dimensions of a tensor
cost = tf.reduce_sum(tf.pow(y_ - y, 2))/(2*n_samples)
# Gradient descent (computing the partial derivative with respect to our input variables - in our case a set of weights and biases)
# We have this `cost` function, we want to _minimize_ that cost using `GradientDescent` with this `learning_rate` to define how fast we want to do that
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initialize our variables and create a tensorflow session
# global_variables_initializer initializes all the variables (placeholders need no initialization)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
# Training loop
for i in range(training_epochs):
    # Run one optimization step over the full training set
    sess.run(optimizer, feed_dict={x: inputX, y_: inputY})
    # Write our debugging logs every display_step epochs
    if i % display_step == 0:
        cc = sess.run(cost, feed_dict={x: inputX, y_: inputY})
        print('Training step:', '%04d' % (i), "cost=", "{:.9f}".format(cc))
print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict={x: inputX, y_: inputY})
print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b))
# Test our output
# We feed in inputX and get the predicted y1/y2 probabilities below
sess.run(y, feed_dict={x: inputX})
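Each row of that output is a [P(good buy), P(bad buy)] pair. One way to turn it into hard class predictions (a sketch, not in the original notebook) is to take the argmax of each row:
# Turn the softmax probabilities into hard class predictions (sketch)
probs = sess.run(y, feed_dict={x: inputX})
predictions = np.argmax(probs, axis=1)  # 0 = good buy (y1), 1 = bad buy (y2)
print(predictions)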