Handling Overfitting in TensorFlow Models

In this section, we are going to mitigate overfitting by incorporating L2 regularization, adding dropout between the dense layers, and applying batch normalization after each dropout layer. Let’s handle overfitting in the TensorFlow model using the following steps:

Step 1: Importing Libraries

We import TensorFlow and its Keras API (Sequential, Dense, Dropout, BatchNormalization, EarlyStopping), scikit-learn’s train_test_split and PCA, NumPy for generating the sample data, and Matplotlib for optional plotting.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

Step 2: Generating Sample Data

This block generates random sample data for demonstration purposes. X is a 2D array with 1000 rows and 10 columns of random values between 0 and 1, while y is a 1D array of 1000 random integers (0 or 1).

# Generate sample data
X = np.random.rand(1000, 10)
y = np.random.randint(2, size=(1000,))
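
Note that no random seed is set, so the generated data, and therefore the accuracies reported later, will vary from run to run. If reproducible results are wanted, seeds can be set before the data-generation code above (an optional addition, not part of the original snippet):

# Optional: set seeds before generating the data for reproducible results
np.random.seed(42)       # makes X and y (and the subsequent split) repeatable
tf.random.set_seed(42)   # makes TensorFlow weight initialization repeatable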

Step 3: Splitting Data into Training, Validation, and Testing Sets

The code splits the dataset into training, validation, and testing sets using the train_test_split function from sklearn.model_selection. 80% of the data (800 samples) goes to a combined training/validation set, which is then split into 75% training (600 samples) and 25% validation (200 samples), while the remaining 20% (200 samples) is held out as the test set for evaluating the trained model’s performance.

# Split data into training, validation, and testing sets
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=0.25, random_state=42) 
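
As a quick sanity check (not in the original code), printing the shapes confirms the split sizes: 1000 × 0.2 = 200 test samples, and the remaining 800 split into 600 training and 200 validation samples.

# Verify the 600 / 200 / 200 split described above
print(X_train.shape)   # (600, 10)
print(X_val.shape)     # (200, 10)
print(X_test.shape)    # (200, 10)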

Step 4: Applying PCA for Dimensionality Reduction

  • PCA: Principal Component Analysis is used to reduce the dimensionality of the feature space.
  • n_components=8: We specify that we want to reduce the dimensionality to 8 principal components.
  • fit_transform: Fits PCA on the training data and transforms it to the reduced space; the same fitted transformation is then applied to the validation and test data with transform.
# Apply PCA for dimensionality reduction
pca = PCA(n_components=8)
X_train_pca = pca.fit_transform(X_train)
X_val_pca = pca.transform(X_val)
X_test_pca = pca.transform(X_test)
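
To see how much of the original variance the 8 components keep, the fitted PCA object exposes explained_variance_ratio_; a quick check, not part of the original code:

# Variance captured by each of the 8 principal components, and the total retained
print(pca.explained_variance_ratio_)
print("Total variance retained:", pca.explained_variance_ratio_.sum())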

Step 5: Building and Evaluating the Model Without Regularization Techniques

  1. Model Structure: The neural network uses a Sequential model consisting of three Dense layers: the first with 64 neurons, the second with 32 neurons, both using ReLU activation, and a final layer with 1 neuron using a sigmoid activation for binary classification.
  2. Compilation Settings: It is compiled with the Adam optimizer, using binary_crossentropy as the loss function, and it measures accuracy as a performance metric during training and evaluation.
  3. Early Stopping Callback: An EarlyStopping callback is used to halt training if there’s no improvement in validation loss for 10 consecutive epochs, and it restores the weights from the epoch with the best validation loss.
  4. Training Process: The model is trained using the fit method with features reduced by PCA, a batch size of 32, and validation data provided for monitoring. Training can halt early if the validation loss does not improve, thanks to the early stopping callback.
  5. Evaluation: The model’s final performance is evaluated on a separate test dataset using the evaluate method, returning the final loss and accuracy, which helps assess how well the model generalizes beyond the training data.
model_overfit = Sequential([
    Dense(64, activation='relu', input_shape=(8,)),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')
])

# Compile the model
model_overfit.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Define early stopping callback
early_stopping = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

# Train the model with early stopping
history_overfit = model_overfit.fit(X_train_pca, y_train, epochs=100, batch_size=32,
                    validation_data=(X_val_pca, y_val), callbacks=[early_stopping], verbose=0)

# Evaluate the model on testing data
loss_overfit, test_accuracy_overfit = model_overfit.evaluate(X_test_pca, y_test)
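
Because EarlyStopping can end training before the full 100 epochs, the returned history object shows how many epochs actually ran and the best validation loss reached; a quick check, not in the original code:

# Inspect how long the model actually trained and its best validation loss
print("Epochs run:", len(history_overfit.history['loss']))
print("Best val_loss:", min(history_overfit.history['val_loss']))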

Step 6: Building and Evaluating the Model with Regularization

  • Enhanced Model Structure: The model utilizes a Sequential setup with three Dense layers, integrating L2 regularization to penalize large weights and reduce overfitting. The first and second dense layers are regularized using a factor of 0.01.
  • Inclusion of Dropout and Batch Normalization: Between the dense layers, Dropout is applied at a rate of 0.5, so each unit’s output is randomly zeroed with probability 0.5 during training, further preventing overfitting. BatchNormalization follows each dropout layer to stabilize and speed up training by normalizing the activations.
  • Model Compilation: The model is compiled with the Adam optimizer and binary_crossentropy as the loss function, suitable for binary classification tasks. It also tracks accuracy as a performance metric.
  • Training with Early Stopping: The model is trained using data reduced by PCA, incorporating early stopping to halt training if there’s no improvement in validation loss for 10 epochs, while restoring the best model weights observed during training.
  • Evaluation on Test Data: Finally, the model is evaluated using a separate test dataset, providing metrics for loss and accuracy to assess how well the model generalizes to new data.

# Build TensorFlow model with regularization, dropout, and batch normalization
model_regularized = Sequential([
    Dense(64, activation='relu', input_shape=(8,), kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    Dropout(0.5),
    BatchNormalization(),
    Dense(32, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    Dropout(0.5),
    BatchNormalization(),
    Dense(1, activation='sigmoid')
])

# Compile the model
model_regularized.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model with early stopping
history_regularized = model_regularized.fit(X_train_pca, y_train, epochs=100, batch_size=32,
                    validation_data=(X_val_pca, y_val), callbacks=[early_stopping], verbose=0)

# Evaluate the model on testing data
loss_regularized, test_accuracy_regularized = model_regularized.evaluate(X_test_pca, y_test)
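
matplotlib.pyplot is imported above but unused in the original snippet; a short sketch like the following plots the validation-loss curves of both models so the effect of regularization on overfitting can be seen directly (the 'val_loss' key is a standard Keras history entry):

# Compare validation-loss curves of the two models
plt.plot(history_overfit.history['val_loss'], label='Without regularization')
plt.plot(history_regularized.history['val_loss'], label='With regularization')
plt.xlabel('Epoch')
plt.ylabel('Validation loss')
plt.legend()
plt.title('Validation loss with and without regularization')
plt.show()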

Step 7: Printing the Results

The code prints the test loss and accuracy for both models, allowing a direct comparison of how each model performs on unseen data. The first model lacks regularization techniques, potentially leading to overfitting, while the second model includes mechanisms to enhance its ability to generalize by reducing overfitting.

# Print results
print("Model without regularization, dropout, and batch normalization:")
print("Test Loss:", loss_overfit)
print("Test Accuracy:", test_accuracy_overfit)

print("\nModel with regularization, dropout, and batch normalization:")
print("Test Loss:", loss_regularized)
print("Test Accuracy:", test_accuracy_regularized)

Complete Code to Handle Overfitting in TensorFlow

Python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Generate sample data
X = np.random.rand(1000, 10)
y = np.random.randint(2, size=(1000,))

# Split data into training, validation, and testing sets
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=0.25, random_state=42)

# Apply PCA for dimensionality reduction
pca = PCA(n_components=8)
X_train_pca = pca.fit_transform(X_train)
X_val_pca = pca.transform(X_val)
X_test_pca = pca.transform(X_test)

# Build TensorFlow model without regularization, dropout, and batch normalization
model_overfit = Sequential([
    Dense(64, activation='relu', input_shape=(8,)),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')
])

# Compile the model
model_overfit.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Define early stopping callback
early_stopping = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

# Train the model with early stopping
history_overfit = model_overfit.fit(X_train_pca, y_train, epochs=100, batch_size=32,
                    validation_data=(X_val_pca, y_val), callbacks=[early_stopping], verbose=0)

# Evaluate the model on testing data
loss_overfit, test_accuracy_overfit = model_overfit.evaluate(X_test_pca, y_test)

# Build TensorFlow model with regularization, dropout, and batch normalization
model_regularized = Sequential([
    Dense(64, activation='relu', input_shape=(8,), kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    Dropout(0.5),
    BatchNormalization(),
    Dense(32, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    Dropout(0.5),
    BatchNormalization(),
    Dense(1, activation='sigmoid')
])

# Compile the model
model_regularized.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model with early stopping
history_regularized = model_regularized.fit(X_train_pca, y_train, epochs=100, batch_size=32,
                    validation_data=(X_val_pca, y_val), callbacks=[early_stopping], verbose=0)

# Evaluate the model on testing data
loss_regularized, test_accuracy_regularized = model_regularized.evaluate(X_test_pca, y_test)

# Print results
print("Model without regularization, dropout, and batch normalization:")
print("Test Loss:", loss_overfit)
print("Test Accuracy:", test_accuracy_overfit)

print("\nModel with regularization, dropout, and batch normalization:")
print("Test Loss:", loss_regularized)
print("Test Accuracy:", test_accuracy_regularized)

Output:

Model without regularization, dropout, and batch normalization:
Test Loss: 0.68873131275177
Test Accuracy: 0.5799999833106995

Model with regularization, dropout, and batch normalization:
Test Loss: 0.5037883520126343
Test Accuracy: 0.75099999904632568

The results illustrate the impact of regularization, dropout, and batch normalization on the performance of a neural network model in a binary classification task:

  1. Impact on Test Loss:
    • The model without regularization, dropout, or batch normalization shows a higher test loss of approximately 0.689. This higher loss suggests that the model may be overfitting the training data, leading to poorer performance when faced with new, unseen data (like the test set).
    • The model that includes regularization, dropout, and batch normalization achieves a significantly lower test loss of about 0.504. This improvement indicates that these techniques effectively mitigate overfitting, allowing the model to generalize better to new data.
  2. Impact on Test Accuracy:
    • The first model achieves a test accuracy of about 58%, which is relatively low. This performance is indicative of a model that may not have captured the underlying patterns effectively, potentially due to overfitting on the noise within the training data.
    • Conversely, the regularized model achieves a higher test accuracy of approximately 75%, a substantial improvement. This suggests that the model is not only avoiding overfitting but is also better at capturing the relevant patterns that distinguish between the classes in the dataset.
  3. Implications of Regularization Techniques:
    • Regularization (L2), dropout, and batch normalization play critical roles in enhancing the model’s ability to generalize. L2 regularization limits the size of the weights, discouraging complexity unless it significantly benefits performance. Dropout randomly deactivates certain pathways in the network, which helps the model avoid relying too much on any specific neuron; this simulates having a simpler model and promotes robustness. Batch normalization stabilizes the learning process and can reduce the number of epochs needed to train the model effectively. A short sketch after this list shows how Keras surfaces the L2 penalty that gets added to the training loss.
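
As a small illustration of the L2 mechanism described in the list above, Keras tracks each layer’s regularization penalty in model.losses and adds it to the training loss; a minimal sketch, assuming the model_regularized built earlier:

# Regularization penalties Keras adds to the loss (one tensor per regularized layer)
print(model_regularized.losses)
# Total L2 penalty currently contributed by the weights: 0.01 * sum of squared kernel weights
print("Total L2 penalty:", float(tf.add_n(model_regularized.losses)))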


These results underscore the effectiveness of incorporating regularization strategies in neural network models, particularly in tasks where overfitting is a concern. The techniques used in the second model help ensure that it learns in a more balanced and generalizable way, leading to better performance on test data.


