Implementing BatchNormalization Class in Keras
Training deep neural networks presents difficulties such as vanishing gradients and slow convergence. In 2015, Sergey Ioffe and Christian Szegedy introduced Batch Normalization as a powerful technique to tackle these challenges. In this section, we cover all the steps required to implement Batch Normalization in Keras with the help of the BatchNormalization class. Let's discuss the steps:
Step 1: Importing Libraries
import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
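Note: if you are using the Keras API bundled with TensorFlow 2.x rather than the standalone keras package, the equivalent imports are:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization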
Step 2: Create a dummy dataset
# Generate toy dataset
np.random.seed(0)
X = np.random.randn(1000, 10) # 1000 samples, 10 features
y = np.random.randint(2, size=(1000,)) # Binary labels
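Since the labels are random, there is no real pattern for the model to learn; this toy dataset only demonstrates the mechanics of Batch Normalization, and accuracy will hover around chance (about 50%). A quick shape check confirms the data layout:
print(X.shape)  # (1000, 10)
print(y.shape)  # (1000,)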
Step 3: Define the Model
A sequential model is defined using Sequential(). It consists of three dense layers: the first two use ReLU activations, each followed by a Batch Normalization layer, and the final layer uses a sigmoid activation for binary classification.
# Define the model
model = Sequential()
model.add(Dense(64, input_shape=(10,), activation='relu'))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
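For reference, the original Ioffe and Szegedy paper applies Batch Normalization before the activation rather than after it. Here is a minimal sketch of that pre-activation ordering, using a separate Activation layer and mirroring the layer sizes above (the name model_pre_act is just for illustration):
from keras.layers import Activation

model_pre_act = Sequential()
model_pre_act.add(Dense(64, input_shape=(10,), use_bias=False))  # bias is redundant; BatchNormalization's beta offset replaces it
model_pre_act.add(BatchNormalization())
model_pre_act.add(Activation('relu'))
model_pre_act.add(Dense(32, use_bias=False))
model_pre_act.add(BatchNormalization())
model_pre_act.add(Activation('relu'))
model_pre_act.add(Dense(1, activation='sigmoid'))
Both orderings appear in practice; placing BatchNormalization after the activation, as in this tutorial, is also common.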
Step 4: Compiling the Model
The model is compiled with the Adam optimizer, binary cross-entropy loss, and accuracy as the evaluation metric.
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Step 5: Training the Model
The dataset is split into training and test sets, and the model is trained for 20 epochs with a batch size of 32, holding out 10% of the training data for validation.
# Split dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train the model
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)
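After training, you can check generalization on the held-out test set; model.evaluate returns the loss followed by each compiled metric (with the random labels used here, expect accuracy near 50%):
# Evaluate on the held-out test set
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test loss: {loss:.4f}, test accuracy: {accuracy:.4f}")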
Complete Implementation of Batch Normalization Using the Keras Library
import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
# Generate toy dataset
np.random.seed(0)
X = np.random.randn(1000, 10) # 1000 samples, 10 features
y = np.random.randint(2, size=(1000,)) # Binary labels
# Split dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Define the model
model = Sequential()
model.add(Dense(64, input_shape=(10,), activation='relu'))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Print model summary
model.summary()
Output:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 704
batch_normalization (Batch (None, 64) 256
Normalization)
dense_1 (Dense) (None, 32) 2080
batch_normalization_1 (Bat (None, 32) 128
chNormalization)
dense_2 (Dense) (None, 1) 33
=================================================================
Total params: 3201 (12.50 KB)
Trainable params: 3009 (11.75 KB)
Non-trainable params: 192 (768.00 Byte)
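The parameter counts follow directly from the layer definitions. A Dense layer has inputs × units + units parameters: 10 × 64 + 64 = 704 for the first layer, 64 × 32 + 32 = 2080 for the second, and 32 × 1 + 1 = 33 for the output layer. Each BatchNormalization layer stores four values per feature (gamma, beta, moving mean, and moving variance): 4 × 64 = 256 and 4 × 32 = 128. Only gamma and beta are learned by gradient descent; the moving mean and variance are updated from batch statistics during training, which is why 2 × (64 + 32) = 192 parameters are reported as non-trainable.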