NPTEL Journey For Deep Learning Course Certification

Hey Geeks!

I recently completed the NPTEL course “Deep Learning” by IIT Ropar, and I am thrilled to share my experience with you all. This 12-week journey has been incredibly rewarding and has deepened my understanding of one of the most transformative technologies in the modern world. I am proud to say that I cleared the final exam with an overall score of 65, earning an Elite Certificate from NPTEL.

Weekly Course Layout & Learning Journey
The course was meticulously structured into weekly modules, each focusing on a different aspect of deep learning. Here’s a brief overview of what I learned each week; a small code sketch for each week follows the list.
- Week 1: I began with a fascinating overview of the history of deep learning, exploring its success stories and fundamental concepts like the McCulloch-Pitts neuron and the Perceptron Learning Algorithm. This introduction set a solid foundation for the weeks to come. I practised coding simple neural networks to solidify these concepts.
- Week 2: This week, I delved into Multilayer Perceptrons (MLPs), understanding their representation power and learning about sigmoid neurons and gradient descent. The practical exercises on feedforward neural networks were particularly enlightening. I implemented MLPs in Python and experimented with different activation functions.
- Week 3: This week’s focus was on feedforward neural networks and backpropagation. I spent a lot of time practising the backpropagation algorithm, which is crucial for training deep networks effectively. I wrote custom backpropagation code to gain a deeper understanding of the learning process.
- Week 4: I explored various optimization techniques, including Gradient Descent (GD), Momentum GD, Nesterov Accelerated GD, Stochastic GD, AdaGrad, RMSProp, and Adam. Additionally, learning about eigenvalues, eigenvectors, and eigenvalue decomposition was quite enriching. I compared the performance of different optimization algorithms on a set of neural network models.
- Week 5: This week, I covered Principal Component Analysis (PCA) and Singular Value Decomposition (SVD). The interpretation of PCA and its applications in reducing dimensionality were particularly useful. I applied PCA to a dataset to visualize the reduction in dimensions and its impact on model performance.
- Week 6: I learned about different types of autoencoders and their relation to PCA. The hands-on sessions with regularized autoencoder variants, such as denoising and sparse autoencoders, were very practical. I built and trained autoencoders on image datasets to observe their reconstruction capabilities.
- Week 7: Regularization techniques were the focus this week. I learned about the bias-variance tradeoff, L2 regularization, early stopping, dataset augmentation, and dropout. Implementing these techniques helped me understand how to improve model generalization, and I applied dropout and data augmentation to my existing models to see the gains first-hand.
- Week 8: This week I covered advanced topics like greedy layer-wise pre-training, better activation functions, improved weight initialization methods, and batch normalization. These topics are essential for building deeper and more efficient neural networks. I experimented with different activation functions and initialization methods to enhance model performance.
- Week 9: The highlight this week was Learning Vectorial Representations of Words, which was pivotal in understanding how deep learning models handle natural language processing tasks. I implemented word embeddings and used them in simple text classification tasks to understand their impact.
- Week 10: I dove into Convolutional Neural Networks (CNNs), studying architectures like LeNet, AlexNet, ZF-Net, VGGNet, GoogLeNet, and ResNet. The practical insights into visualizing CNNs through guided backpropagation, Deep Dream, and Deep Art were particularly fascinating. I built and trained my own CNN models on image classification tasks, and I combined a CNN with YOLOv8 to build a vehicle license plate recognition system that reached 80% accuracy.
- Week 11: This week, I focused on Recurrent Neural Networks (RNNs) and backpropagation through time (BPTT). Understanding GRUs and LSTMs helped me grasp how deep learning models process sequential data. I created RNN and LSTM models to analyze time-series data and text sequences.
- Week 12: The final week covered Encoder-Decoder Models and the Attention Mechanism, including its application to images. These concepts are fundamental for advanced tasks like machine translation and image captioning. I implemented a simple attention mechanism in a sequence-to-sequence model to see how it improves translation accuracy.
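Code Sketches From My Weekly Practice

To make the list above concrete, here is one minimal sketch per week. These are toy illustrations written in Python with NumPy, using made-up data and hypothetical sizes; they are not my actual assignment code.

For Week 1, the Perceptron Learning Algorithm amounts to nudging the weight vector toward any misclassified point:

```python
import numpy as np

# Toy linearly separable data with labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)
b = 0.0

# Perceptron Learning Algorithm: update only on misclassified points.
for epoch in range(10):
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # wrong side of the boundary
            w += yi * xi
            b += yi

print("weights:", w, "bias:", b)
```

Because the data is separable, the updates stop once every point lands on the correct side.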
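Week 2’s sigmoid neuron trained by gradient descent fits in a few lines; the 1-D data and learning rate below are arbitrary choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data: negative inputs labelled 0, positive labelled 1.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b, lr = 0.0, 0.0, 0.5

for step in range(2000):
    pred = sigmoid(w * x + b)
    # Gradient of squared error, passed back through the sigmoid.
    delta = (pred - y) * pred * (1 - pred)
    w -= lr * np.dot(delta, x) / len(x)
    b -= lr * delta.mean()

print("predictions:", sigmoid(w * x + b).round(2))  # should approach 0, 0, 1, 1
```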
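For Week 3, this is the kind of hand-written backpropagation I practised: a one-hidden-layer sigmoid network trained on XOR with squared error. The layer width, seed, and learning rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy XOR data: not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (chain rule) for squared-error loss.
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at hidden pre-activation

    # Gradient descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print("predictions:", out.ravel().round(2))  # should approach 0, 1, 1, 0
```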
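Of the Week 4 optimizers, Adam is worth writing out by hand: it keeps a momentum-like average of gradients and an RMSProp-like average of squared gradients, with bias correction for both. A sketch minimizing a toy 1-D quadratic:

```python
import numpy as np

# Minimize f(w) = (w - 3)^2; the gradient is 2(w - 3).
grad = lambda w: 2.0 * (w - 3.0)

w = 0.0
m, v = 0.0, 0.0                          # first and second moment estimates
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 201):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g      # momentum-like gradient average
    v = beta2 * v + (1 - beta2) * g * g  # RMSProp-like squared-gradient average
    m_hat = m / (1 - beta1 ** t)         # bias correction for the zero init
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(f"w after Adam: {w:.4f}")          # should be close to 3.0
```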
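Week 5’s PCA reduces to an SVD of the centred data matrix; the dataset here is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))    # hypothetical dataset: 100 samples, 5 features

Xc = X - X.mean(axis=0)          # PCA needs centred data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
X_reduced = Xc @ Vt[:k].T        # project onto the top-k principal components
explained = S**2 / np.sum(S**2)  # fraction of variance per component
print("shape:", X_reduced.shape, "explained:", explained[:k].round(3))
```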
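For Week 6, a linear autoencoder makes the relation to PCA tangible: trained with squared error, its bottleneck learns the same subspace as the top principal components. Sizes and learning rate below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # hypothetical data: 200 samples, 8 features
X -= X.mean(axis=0)                      # centre, as with PCA

W_enc = rng.normal(size=(8, 2)) * 0.1    # encoder: 8 -> 2 bottleneck
W_dec = rng.normal(size=(2, 8)) * 0.1    # decoder: 2 -> 8 reconstruction
lr = 0.05

for step in range(3000):
    Z = X @ W_enc                        # codes
    X_hat = Z @ W_dec                    # reconstructions
    err = (X_hat - X) / len(X)           # gradient of mean squared error wrt X_hat
    grad_dec = Z.T @ err
    grad_enc = X.T @ (err @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print("reconstruction MSE:", round(float(mse), 4))
```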
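From Week 7’s regularization toolbox, dropout is the easiest to sketch. This is the common “inverted dropout” form, which rescales at training time so test-time code needs no change:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, training=True):
    """Inverted dropout: zero units at train time and rescale the survivors,
    so the forward pass at test time needs no change."""
    if not training:
        return h
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask

h = rng.normal(size=(4, 6))   # hypothetical hidden activations
print(dropout(h, p_drop=0.5).round(2))
```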
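Two Week 8 ideas fit in one sketch: batch normalization of a mini-batch, and Xavier/Glorot weight initialization. Batch size and layer widths are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then rescale and shift."""
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))  # mini-batch with shifted statistics
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print("feature means:", out.mean(axis=0).round(3))  # ~0 after normalization

# Xavier/Glorot initialization: scale weights so activation variance
# stays roughly constant from layer to layer.
n_in, n_out = 256, 128
W = rng.normal(size=(n_in, n_out)) * np.sqrt(2.0 / (n_in + n_out))
```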
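Week 9’s vectorial representations of words can be built the count-based way: form a word-word co-occurrence matrix over a toy corpus, then take an SVD to get dense vectors. (The prediction-based word2vec family works differently; this sketch shows only the counting route.)

```python
import numpy as np

corpus = ["deep learning is fun", "deep networks learn features",
          "learning features is fun"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts with a window of one word on each side.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                C[idx[w], idx[sent[j]]] += 1

# A low-rank SVD of the counts gives dense word vectors.
U, S, Vt = np.linalg.svd(C)
k = 2
embeddings = U[:, :k] * S[:k]
for w in ("deep", "learning"):
    print(w, embeddings[idx[w]].round(3))
```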
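Underneath every Week 10 architecture sits the convolution itself. Here is a naive “valid” 2-D convolution (strictly, cross-correlation, as in most deep learning libraries) applied with a hand-made vertical-edge kernel to a random image:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution: slide the kernel over the image."""
    kH, kW = kernel.shape
    H_out = image.shape[0] - kH + 1
    W_out = image.shape[1] - kW + 1
    out = np.zeros((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            # Each output pixel is a dot product of the kernel with one patch.
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = rng.random((6, 6))
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])  # responds to vertical edges
print(conv2d(image, edge_kernel).shape)     # (4, 4) feature map
```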
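For Week 11, a single LSTM step shows how the input, forget, and output gates manage the cell state. Dimensions and weights below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_hid = 3, 4                              # hypothetical sizes
Wx = rng.normal(size=(d_in, 4 * d_hid)) * 0.1   # input weights for all 4 gates
Wh = rng.normal(size=(d_hid, 4 * d_hid)) * 0.1  # recurrent weights
b = np.zeros(4 * d_hid)

def lstm_step(x, h, c):
    """One LSTM step: gates decide what to forget, what to write, what to expose."""
    z = x @ Wx + h @ Wh + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c + i * g                             # new cell state
    h = o * np.tanh(c)                            # new hidden state
    return h, c

h, c = np.zeros(d_hid), np.zeros(d_hid)
for x in rng.normal(size=(5, d_in)):              # a length-5 random sequence
    h, c = lstm_step(x, h, c)
print("final hidden state:", h.round(3))
```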
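Finally, Week 12’s attention mechanism is, at heart, a softmax-weighted average of value vectors. A sketch of scaled dot-product attention, with random queries, keys, and values standing in for decoder and encoder states:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(Q, K, V):
    """Scaled dot-product attention: each query takes a weighted average of
    the values, weighted by softmax of query-key similarity."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

# Hypothetical setup: 2 decoder queries attending over 6 encoder states.
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
context, attn = attention(Q, K, V)
print("attention weights:\n", attn.round(2))  # each row sums to 1
```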