Word Embeddings
Word embedding is a technique in Natural Language Processing (NLP) that converts raw text into numerical vectors. Since deep learning models accept only numerical input, this step is essential for working with raw text data. Embeddings capture both the semantic meaning and the context of words: each word is represented by a real-valued vector with a fixed number of dimensions.
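As a minimal sketch of the idea, the snippet below maps a few words to toy real-valued vectors. The vector values are invented purely for illustration; real embeddings are learned from data by models such as Word2Vec, GloVe, or BERT.

```python
import numpy as np

# Toy embedding table: each word -> a fixed-size real-valued vector.
# These numbers are made up for illustration only; trained embeddings
# would come from a pre-trained model.
embedding_dim = 4
toy_embeddings = {
    "king":  np.array([0.52, 0.13, -0.71, 0.33]),
    "queen": np.array([0.49, 0.18, -0.68, 0.41]),
    "apple": np.array([-0.22, 0.87, 0.05, -0.10]),
}

# A sentence becomes a matrix of shape (num_words, embedding_dim),
# which is the kind of numerical input a deep learning model expects.
sentence = ["king", "queen", "apple"]
matrix = np.stack([toy_embeddings[word] for word in sentence])
print(matrix.shape)  # (3, 4)
```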
There are several methods for generating word representations, such as Bag of Words (BoW), TF-IDF, GloVe, and BERT embeddings. The earlier methods (BoW, TF-IDF) simply convert words into counts or weights without capturing semantic relationships or context. More recent approaches, such as BERT embeddings from a pre-trained language model, capture the full context of a word as well as its semantic relationships within the sentence.
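For example, the sketch below (assuming scikit-learn is installed) builds Bag of Words and TF-IDF vectors for two short sentences; these representations are based purely on word counts and weights, so they carry no information about word order or context.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the movie was good",
    "the movie was not good",
]

# Bag of Words: each sentence becomes a vector of raw word counts.
bow = CountVectorizer()
print(bow.fit_transform(corpus).toarray())
print(bow.get_feature_names_out())

# TF-IDF: counts are re-weighted by how informative each word is
# across the corpus.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(corpus).toarray().round(2))
```

Note that the two sentences above have opposite meanings yet receive very similar count-based vectors, which is exactly the limitation that contextual, pre-trained embeddings address.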
Pre-Trained Word Embedding in NLP
Word embedding is a key concept in Natural Language Processing and a significant breakthrough for deep learning on text. In this article, we look at what pre-trained word embeddings in NLP are.
Table of Contents
- Word Embeddings
- Challenges in Building Word Embeddings from Scratch
- Pre-Trained Word Embeddings
- Word2Vec
- GloVe
- BERT Embeddings