How does BERT differ from traditional embeddings in NLP?
- BERT is not suitable for text classification.
- BERT uses pre-trained word vectors, while traditional embeddings do not.
- Traditional embeddings are context-agnostic, while BERT captures contextual information.
- Traditional embeddings are more accurate for NLP tasks.
BERT (Bidirectional Encoder Representations from Transformers) differs from traditional embeddings by capturing contextual information. Traditional embeddings such as Word2Vec or GloVe assign each word a single static vector regardless of its surroundings, so "bank" in "river bank" and "bank account" receives the same representation. BERT, by contrast, attends to both the preceding and following words, producing a different vector for the same word depending on the sentence it appears in.
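A minimal sketch of this difference, assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint: it extracts BERT's vector for "bank" in two different sentences and compares them. A static embedding would yield a cosine similarity of exactly 1.0, since the word maps to one fixed vector; BERT's contextual vectors differ.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint for illustration; any BERT-style model would behave similarly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return BERT's contextual vector for `word` inside `sentence`.

    Assumes `word` survives tokenization as a single WordPiece token,
    which holds for a common word like "bank" in this vocabulary.
    """
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # shape: (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v_river = word_vector("i sat on the bank of the river.", "bank")
v_money = word_vector("i deposited cash at the bank.", "bank")

# Static embeddings (Word2Vec, GloVe) would give identical vectors here;
# BERT's similarity is high but noticeably below 1.0.
cos = torch.nn.functional.cosine_similarity(v_river, v_money, dim=0)
print(f"cosine similarity: {cos.item():.3f}")
```

The printed similarity below 1.0 is the whole point: the same surface word gets a context-dependent representation, which is what lets BERT disambiguate word senses that static embeddings conflate.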