Losses

Loss functions are central when building a neural network: they measure the difference between the predicted output and the actual result, and the optimizer uses this value during backpropagation to update the weights.

The TensorFlow library supports many loss functions; a few commonly used ones are:

  1. MeanAbsoluteError
  2. MeanSquaredError
  3. BinaryCrossentropy
  4. CategoricalCrossentropy
  5. SparseCategoricalCrossentropy

Note: Again, the choice of loss depends entirely on the type of problem and the result we expect from the neural network.
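To make the first three losses concrete, each reduces to a short formula. Here is a minimal pure-Python sketch of what they compute (the values are illustrative, and this is a conceptual sketch, not the TensorFlow implementation):

```python
import math

def mean_absolute_error(y_true, y_pred):
    # mean of |true - predicted| over all elements
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_error(y_true, y_pred):
    # mean of squared differences; penalizes large errors more heavily
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # -mean(t*log(p) + (1-t)*log(1-p)); eps avoids log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0.0, 1.0, 1.0, 0.0]
y_pred = [0.1, 0.9, 0.8, 0.2]
print(mean_absolute_error(y_true, y_pred))  # 0.15
print(mean_squared_error(y_true, y_pred))   # 0.025
print(binary_crossentropy(y_true, y_pred))
```

In Keras these correspond to classes in `tf.keras.losses` (e.g. `tf.keras.losses.MeanAbsoluteError()`), which are passed to the model at compile time.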

Artificial Neural Network in TensorFlow

In this article, we cover some basics of ANNs and a simple implementation of an artificial neural network. TensorFlow is a powerful machine learning library for creating models and neural networks.

So, before we start: what are artificial neural networks? Here is a simple and clear definition. In short, an artificial neural network is a technology that mimics the human brain, learning from key features in order to classify or predict on real-world data. An artificial neural network is composed of a number of neurons, analogous to the neurons in the human brain.

It is designed to let a computer learn from small insights and features, become autonomous at learning from the real world, and provide solutions in real time, faster than a human.

A neuron in an artificial neural network performs two operations:

  • Weighted sum of its inputs (plus a bias)
  • Activation function applied to that sum
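The two operations can be sketched for a single neuron; the inputs, weights, and bias below are made-up values, and sigmoid is used as the example activation:

```python
import math

def neuron(inputs, weights, bias):
    # operation 1: weighted sum of inputs plus bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # operation 2: activation function (sigmoid squashes z into (0, 1))
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0], [0.8, 0.3], bias=0.1)
print(out)  # sigmoid(0.2) ~ 0.55
```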

So a basic artificial neural network takes the form of:

  1. Input layer – Gets the data from a user, a client, or a server to analyze and produce a result.
  2. Hidden layers – There can be any number of these layers; they analyze the inputs passing through them, applying different weights, biases, and activation functions to produce an output.
  3. Output layer – This is where we get the result from the neural network.

Now that we know the outline of a neural network, let us move on to the important functions and methods that help it learn correctly from the data.

Note: Any neural network can learn from data, but without good parameter values it might not be able to learn correctly and will not give you accurate results.

Some of the features that determine the quality of our neural network are:

  1. Layers
  2. Activation function
  3. Loss function
  4. Optimizer

Now we shall discuss each one of them in detail.

The first stage of our model building is:

Python3

# Importing the libraries (TensorFlow 2.x with its bundled Keras API)
import tensorflow as tf
from tensorflow import keras

# Defining the model: 2 input features, two hidden layers, 2 outputs
model = keras.Sequential([
    keras.layers.Dense(32, input_shape=(2,), activation='relu'),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(2, activation='sigmoid')
])
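To see what this stack computes, the same three Dense layers can be written as a hand-rolled forward pass in NumPy. This is only an illustration of the layer arithmetic, not Keras internals; the random weights stand in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b, activation):
    # a Dense layer: matrix multiply, add bias, apply activation
    return activation(x @ w + b)

relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# random weights standing in for the trained parameters
w1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 2)), np.zeros(2)

x = np.array([[0.2, 0.7]])          # one sample with 2 features
h1 = dense(x, w1, b1, relu)         # hidden layer, 32 units
h2 = dense(h1, w2, b2, relu)        # hidden layer, 16 units
y = dense(h2, w3, b3, sigmoid)      # output layer, 2 units in (0, 1)
print(y.shape)  # (1, 2)
```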


Layers

Layers in a neural network are very important. As we saw earlier, an artificial neural network consists of three kinds of layers: an input layer, hidden layers, and an output layer. The input layer holds the features and values that need to be analyzed inside the neural network; basically, it is the layer that reads our input features into the artificial neural network...

Activation function

Activation functions are mathematical functions applied to a neuron's weighted sum; many of them (such as sigmoid) squash the values into a small range like 0 to 1, which makes it much easier for the machine to learn from the data while analyzing it. TensorFlow supports a variety of activation functions. Some of the commonly used ones are...

Optimizers

Optimizers are very important because they are the functions that update the weights during backpropagation, so that the difference between the actual and predicted results decreases at a gradual pace and reaches a point where the loss is very low and the model is able to predict more accurate results...

Epochs

...

How to Train a Neural Network with TensorFlow

...
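In Keras, training ties these pieces together: `model.compile(optimizer=..., loss=...)` followed by `model.fit(..., epochs=...)`. The underlying idea, an optimizer using the gradient of the loss to shrink it a little each epoch, can be sketched without TensorFlow (all values here are illustrative):

```python
# gradient descent on a single weight: fit y = w * x to data whose true w is 2
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, lr = 0.0, 0.05  # initial weight and learning rate (illustrative values)

def mse(w):
    # the loss: mean squared error of predictions w * x against targets y
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

losses = []
for epoch in range(50):  # each pass over the data is one epoch
    # d(mse)/dw = mean of 2 * x * (w*x - y)
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad  # optimizer step: move the weight against the gradient
    losses.append(mse(w))

print(round(w, 3))  # converges toward 2.0
```

The loss recorded after each epoch decreases steadily, which is exactly the behavior the optimizer is responsible for in a full neural network.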