Let's Create a Perceptron Using Tensorflow

There are five main steps in training a neural network, and they apply equally to its atomic unit, the perceptron.

  1. Define the network: this could be many layers of neurons; in our case, just a single perceptron.
  2. Prepare the training data: here we will generate a one-dimensional tensor of 1000 random values.
  3. Define the loss and optimizer functions: the optimizer is used during the training phase to measure and minimize the loss.
  4. Train the model on the training data to minimize the loss: in this step we find the values of the weight and bias for which the training data best fits the model (here just a line, y = mx + c).
  5. Validate the model.
The perceptron consists of two trainable values, namely the weight (W) and the bias (b).

As the loss function, we are going to use the mean squared error (MSE): the average of the squared differences between the true and predicted values, MSE = mean((y_true - y_pred)^2).

01. Let's define the Perceptron class


import tensorflow as tf

# Define the model
class Perceptron():
  def __init__(self):
    # initialize the trainable parameters (arbitrary starting values)
    self.w = tf.Variable(2.0)
    self.b = tf.Variable(1.0)

  # return the linear function w * x + b for a given value of x
  def __call__(self, x):
    return self.w * x + self.b

Here we initialize the weight and bias with arbitrary starting values. Thanks to __call__(), calling an object of this class with a set of x values automatically returns the corresponding y values.
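As a quick sanity check (this usage snippet is my addition, not part of the original post), calling an instance applies the linear function element-wise. The class is repeated here so the snippet runs on its own:

```python
import tensorflow as tf

# the Perceptron class from above, repeated so this snippet is self-contained
class Perceptron():
  def __init__(self):
    self.w = tf.Variable(2.0)
    self.b = tf.Variable(1.0)

  def __call__(self, x):
    return self.w * x + self.b

p = Perceptron()
y = p(tf.constant([0.0, 1.0, 2.0]))  # element-wise 2*x + 1
print(y.numpy())  # [1. 3. 5.]
```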

02. Preparing the training data


# Prepare the training data
TRUE_w = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000

random_xs = tf.random.normal(shape=[NUM_EXAMPLES])
# Here we generate the 1000 dummy values for the ys
ys = (TRUE_w * random_xs) + TRUE_b

Here ys is the dummy training data set. 
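In a more realistic setup the data would not lie perfectly on the line. A small optional variation (my addition, not part of the original setup) adds Gaussian noise to the ys:

```python
import tensorflow as tf

TRUE_w = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000

random_xs = tf.random.normal(shape=[NUM_EXAMPLES])
# add a little Gaussian noise so the points scatter around the line
noise = tf.random.normal(shape=[NUM_EXAMPLES], stddev=0.1)
noisy_ys = (TRUE_w * random_xs) + TRUE_b + noise
```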

03. Define the loss function


# return the mean squared error as the loss function 
def loss(y_true, y_pred):
  return tf.reduce_mean(tf.square(y_true - y_pred))

Here we use the mean squared error as the loss function.
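As a quick check (my own snippet, repeating the loss function so it runs on its own): identical predictions give zero loss, and a constant offset of 3 gives a loss of 9:

```python
import tensorflow as tf

def loss(y_true, y_pred):
  return tf.reduce_mean(tf.square(y_true - y_pred))

zero_loss = loss(tf.constant([1.0, 2.0]), tf.constant([1.0, 2.0]))
offset_loss = loss(tf.constant([0.0, 0.0]), tf.constant([3.0, 3.0]))
print(zero_loss.numpy(), offset_loss.numpy())  # 0.0 9.0
```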

04. Train the model


Here we define a training function that performs one step of training the perceptron.

# define a training function
def train(perceptron, inputs, outputs, learning_rate):
  with tf.GradientTape() as tape:
    # the current loss gives the position of the current point
    current_loss = loss(outputs, perceptron(inputs))
  # the gradients give the desired direction and magnitude to move the ball/point
  dw, db = tape.gradient(current_loss, [perceptron.w, perceptron.b])

  # update the model variable and direction with the new values
  # or move the ball
  perceptron.w.assign_sub(learning_rate * dw)
  perceptron.b.assign_sub(learning_rate * db)

  return current_loss

# Find how the w and b evolve
perceptron = Perceptron()
# create list to collect history of w and b
list_w, list_b = [],[]
losses = []
epochs = range(15)
for epoch in epochs:
  list_w.append(perceptron.w.numpy())
  list_b.append(perceptron.b.numpy())

  current_loss = train(perceptron,random_xs,ys,learning_rate=0.1)
  losses.append(current_loss)
  print('Epoch %2d: w=%1.2f, b=%1.2f, loss=%2.5f'
         % (epoch, list_w[-1], list_b[-1], current_loss))

Training will converge the values of w and b towards the true w and true b.

05. Evaluating the model

# Evaluate the predictions made by the model using random test data
test_inputs = tf.random.normal(shape=[NUM_EXAMPLES])
test_outputs = test_inputs * TRUE_w + TRUE_b

predicted_test_outputs = perceptron(test_inputs)
plot_data(test_inputs,test_outputs,predicted_outputs=predicted_test_outputs)
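The post never defines plot_data; a minimal matplotlib sketch that matches the call above (the helper used in the original may differ) could be:

```python
import matplotlib.pyplot as plt

def plot_data(inputs, outputs, predicted_outputs):
  # scatter the true and predicted points against the inputs
  plt.scatter(inputs, outputs, c='b', label='real')
  plt.scatter(inputs, predicted_outputs, c='r', label='predicted')
  plt.legend()
  plt.show()
```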

Here you can play with the code in Google Colab: