TensorFlow for classification


How to use TensorFlow to build a classifier

Overview
  • Getting started with TensorFlow
  • Variables, Placeholders, matmul, zeros, session
  • tf.train.GradientDescentOptimizer

Introduction

We'll build a tiny two-class classifier that predicts whether a house is a good buy or a bad buy from its area and number of bathrooms, using TensorFlow's low-level graph API.

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
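
A quick note before we start: this post uses the TensorFlow 1.x graph API (tf.placeholder, tf.Session), which TensorFlow 2.x removed from the top-level namespace. If you're on 2.x, the compatibility module should let the same code run, roughly:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # restore 1.x-style graph execution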

Step 1 - Load our data

We need to load and clean up our data.

The data.csv file looks like this:

index,area,bathrooms,price,sq_price
0,2104.0,3.0,399900.0,190.066539924
1,1600.0,3.0,329900.0,206.1875
2,2400.0,3.0,369000.0,153.75
3,1416.0,2.0,232000.0,163.84180791
4,3000.0,4.0,539900.0,179.966666667
5,1985.0,4.0,299900.0,151.083123426
6,1534.0,3.0,314900.0,205.280312907
7,1427.0,3.0,198999.0,139.452697968
8,1380.0,3.0,212000.0,153.623188406
9,1494.0,3.0,242500.0,162.315930388
10,1940.0,4.0,239999.0,123.710824742
11,2000.0,3.0,347000.0,173.5
12,1890.0,3.0,329999.0,174.602645503
13,4478.0,5.0,699900.0,156.297454221
14,1268.0,3.0,259900.0,204.968454259
15,2300.0,4.0,449900.0,195.608695652
16,1320.0,2.0,299900.0,227.196969697
17,1236.0,3.0,199900.0,161.731391586
18,2609.0,4.0,499998.0,191.643541587
19,3031.0,4.0,599000.0,197.624546354
20,1767.0,3.0,252900.0,143.123938879
21,1888.0,2.0,255000.0,135.063559322
22,1604.0,3.0,242900.0,151.433915212
23,1962.0,4.0,259900.0,132.46687054
24,3890.0,3.0,573900.0,147.532133676
25,1100.0,3.0,249900.0,227.181818182
26,1458.0,3.0,464500.0,318.587105624
27,2526.0,3.0,469000.0,185.669041964
28,2200.0,3.0,475000.0,215.909090909
29,2637.0,3.0,299900.0,113.727720895
30,1839.0,2.0,349900.0,190.266449157
31,1000.0,1.0,169900.0,169.9
32,2040.0,4.0,314900.0,154.362745098
33,3137.0,3.0,579900.0,184.858144724
34,1811.0,4.0,285900.0,157.868580895
35,1437.0,3.0,249900.0,173.903966597
36,1239.0,3.0,229900.0,185.552865214
37,2132.0,4.0,345000.0,161.81988743
38,4215.0,4.0,549000.0,130.24911032
39,2162.0,4.0,287000.0,132.747456059
40,1664.0,2.0,368500.0,221.454326923
41,2238.0,3.0,329900.0,147.408400357
42,2567.0,4.0,314000.0,122.321776393
43,1200.0,3.0,299000.0,249.166666667
44,852.0,2.0,179900.0,211.150234742
45,1852.0,4.0,299900.0,161.933045356
46,1203.0,3.0,239500.0,199.085619285
In [2]:
# Step 1 - Load our data
dataframe = pd.read_csv('data.csv') # dataframe object

# Remove features we don't care about
dataframe = dataframe.drop(['index', 'price', 'sq_price'], axis=1)

# We only use the first 10 rows
# We're using area and bathrooms as our features; you could pick others
dataframe = dataframe[0:10]
dataframe
Out[2]:
area bathrooms
0 2104.0 3.0
1 1600.0 3.0
2 2400.0 3.0
3 1416.0 2.0
4 3000.0 4.0
5 1985.0 4.0
6 1534.0 3.0
7 1427.0 3.0
8 1380.0 3.0
9 1494.0 3.0

Step 2 - Add labels

We need to add labels to our data so that we have something to train against (bad buy = 0, good buy = 1).

y1 holds our original labels; y2 is the negation of y1.

In [3]:
# Step 2 - Add labels
# 1 = Good buy
# 0 = Bad buy
dataframe.loc[:, 'y1'] = [1, 1, 1, 0, 0, 1, 0, 1, 1, 1]

# y2 is the negation of y1 - 1 means we don't like the house
dataframe.loc[:, 'y2'] = dataframe['y1'] == 0

# turn True/False values into 1s and 0s
dataframe.loc[:, 'y2'] = dataframe['y2'].astype(int)

# Print our dataframe
dataframe
Out[3]:
area bathrooms y1 y2
0 2104.0 3.0 1 0
1 1600.0 3.0 1 0
2 2400.0 3.0 1 0
3 1416.0 2.0 0 1
4 3000.0 4.0 0 1
5 1985.0 4.0 1 0
6 1534.0 3.0 0 1
7 1427.0 3.0 1 0
8 1380.0 3.0 1 0
9 1494.0 3.0 1 0
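
As an aside: since y1 is already 0s and 1s, the two-step negation above can be written as a single arithmetic line. An equivalent sketch:

# equivalent to the negation + astype above: 1 - 1 = 0, 1 - 0 = 1
dataframe['y2'] = 1 - dataframe['y1']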

Step 3 - Prepare data for TensorFlow

We need to convert our features and labels from the dataframe into matrices (NumPy arrays) for TensorFlow.

Tensors are a generalization of vectors and matrices, as the sketch below illustrates:
  • A vector is a list of numbers (a 1D tensor)
  • A matrix is a list of lists of numbers (a 2D tensor)
  • A list of lists of lists of numbers is a 3D tensor
  • ...and so on
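
As a quick illustration of those dimensions (in NumPy, whose arrays share the same shape semantics as TensorFlow tensors):

import numpy as np

scalar = np.array(5.0)                 # 0D tensor - a single number
vector = np.array([1.0, 2.0])          # 1D tensor - a list of numbers
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # 2D tensor - a list of lists
cube = np.zeros((2, 2, 2))             # 3D tensor - a list of lists of lists
print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)  # 0 1 2 3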

In [4]:
# Step 3 - Prepare data for tensorflow

# convert features to input tensor
inputX = dataframe.loc[:, ['area', 'bathrooms']].values

# convert labels to input tensor
inputY = dataframe.loc[:, ['y1', 'y2']].values

inputX
Out[4]:
array([[2.104e+03, 3.000e+00],
       [1.600e+03, 3.000e+00],
       [2.400e+03, 3.000e+00],
       [1.416e+03, 2.000e+00],
       [3.000e+03, 4.000e+00],
       [1.985e+03, 4.000e+00],
       [1.534e+03, 3.000e+00],
       [1.427e+03, 3.000e+00],
       [1.380e+03, 3.000e+00],
       [1.494e+03, 3.000e+00]])
In [5]:
inputY
Out[5]:
array([[1, 0],
       [1, 0],
       [1, 0],
       [0, 1],
       [0, 1],
       [1, 0],
       [0, 1],
       [1, 0],
       [1, 0],
       [1, 0]])

Step 4 - Set our hyperparameters

In [6]:
# Step 4 - Set our hyperparameters
learning_rate = 0.000001  # how big a step gradient descent takes on each update
training_epochs = 2000    # how many passes we make over the training data
display_step = 250        # how often we print the cost while training
n_samples = inputY.size   # note: .size counts every element (10 rows x 2 label columns = 20)

Step 5 - Create our computation graph/neural network

We need to set up our variables and placeholders.

x and y_ are our inputs (placeholders that we feed data into).

W and b are variables holding our weight matrix and bias vector.

y is our final output (which we compare against y_ in our cost function to calculate the mean squared error).

In [7]:
# Step 5 - Create our computation graph/neural network
# for our feature input tensor, None means any number of examples
# placeholders are gateways for data into our computation graph
x = tf.placeholder(tf.float32, [None, 2]) # 2 because we have 2 features (a 2D matrix)

# create weights
# a 2x2 float matrix that we update through the training process
# variables in tf hold and update parameters, e.g. weights (in-memory buffers containing tensors)
W = tf.Variable(tf.zeros([2,2]))

# add biases - biases help our model fit better
# b in the y = mx + b
# The bias shifts the line to best fit our data
b = tf.Variable(tf.zeros([2]))

# Multiply our weights by our inputs, first calculation
# weights govern how data flows through our computation graph
# multiply input by weights and add biases
y_values = tf.add(tf.matmul(x, W), b)

# apply softmax (our "activation function") to the values we just created
# softmax normalizes our raw scores into probabilities that sum to 1
y = tf.nn.softmax(y_values)

# feed in a matrix of labels
# We have 'x' already which are our features, these are our y labels
y_ = tf.placeholder(tf.float32, [None, 2])
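
To see what the softmax used above actually does, here's a minimal NumPy sketch with made-up scores (not values from our model):

import numpy as np

def softmax(z):
    # subtract the max for numerical stability, then normalize
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 0.5])  # hypothetical raw outputs for one example
print(softmax(scores))         # [0.81757448 0.18242552] - sums to 1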

Step 6 - Perform training

We need to specify a cost function for our gradient descent step to minimize. We're using "mean squared error" for this.

In [8]:
# Step 6 - Perform training
# Create our cost function, mean squared error
# Calculate the error between our predicted y and the true y_
# Square the differences, sum them, and scale the result
# reduce_sum computes the sum of elements across dimensions of a tensor
cost = tf.reduce_sum(tf.pow(y_ - y, 2))/(2*n_samples)

# Gradient descent (computing the partial derivative with respect to our input variables - in our case a set of weights and biases)
# We have this `cost` function, we want to _minimize_ that cost using `GradientDescent` with this `learning_rate` to define how fast we want to do that
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
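
For intuition, the same cost can be checked by hand in NumPy. A minimal sketch with hypothetical predictions (not our model's output):

import numpy as np

y_pred = np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]])  # hypothetical probabilities
y_true = np.array([[1, 0], [1, 0], [0, 1]])              # one-hot labels

n = y_true.size  # counts every element, matching the notebook's n_samples
print(np.sum((y_true - y_pred) ** 2) / (2 * n))  # ~0.0483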
In [9]:
# Initialize our variables and tensorflow session

# Initializes all the variables we declared above (placeholders are fed at run time, not initialized)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
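
Side note: in a notebook we keep the session open across cells, but in a standalone script the usual idiom is a context manager so the session closes automatically. A sketch:

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # ... training loop would go inside this block ...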
In [10]:
# Training loop
for i in range(training_epochs):
    # Run our session given our optimizer
    sess.run(optimizer, feed_dict={x: inputX, y_: inputY})
    
    # Write our debugging logs
    if i % display_step == 0:
        cc = sess.run(cost, feed_dict={x: inputX, y_: inputY})
        print('Training step:', '%04d' % (i), "cost=", "{:.9f}".format(cc))
        
print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict={x: inputX, y_: inputY})
print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b))
Training step: 0000 cost= 0.114958666
Training step: 0250 cost= 0.109539665
Training step: 0500 cost= 0.109539330
Training step: 0750 cost= 0.109538995
Training step: 1000 cost= 0.109538652
Training step: 1250 cost= 0.109538309
Training step: 1500 cost= 0.109537959
Training step: 1750 cost= 0.109537624
Optimization Finished!
Training cost= 0.109537296 W= [[ 2.1414936e-04 -2.1415015e-04]
 [ 5.1274808e-05 -5.1274797e-05]] b= [ 1.19155166e-05 -1.19155275e-05]
In [11]:
# Test our output
# We feed in our inputX and the model predicts the y1 and y2 probabilities below.
sess.run(y, feed_dict={x: inputX})
Out[11]:
array([[0.7112523 , 0.28874776],
       [0.66498977, 0.33501023],
       [0.73657656, 0.26342347],
       [0.6471879 , 0.3528121 ],
       [0.78335613, 0.21664388],
       [0.7006948 , 0.29930523],
       [0.6586633 , 0.34133676],
       [0.6482863 , 0.35171372],
       [0.6436828 , 0.35631716],
       [0.6548012 , 0.34519887]], dtype=float32)
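
Each row is [P(good buy), P(bad buy)]. To turn these probabilities into hard class labels, take the argmax of each row and compare against our labels; a quick sketch, run in the same session:

probs = sess.run(y, feed_dict={x: inputX})
predictions = np.argmax(probs, axis=1)  # 0 = good buy, 1 = bad buy
actuals = np.argmax(inputY, axis=1)
print(predictions)                      # all zeros - the model calls every house a good buy
print((predictions == actuals).mean())  # 0.7 - matches 7 of our 10 labels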