An overview of LLMs and their challenges

Large language models are powerful but come with some pitfalls

Phil Siarri
Oct 3, 2023 · 5 min read
Image: an eye in a wall (by Gerd Altmann from Pixabay)

Large language models (LLMs) are deep neural networks that can process and generate natural language at scale.

They have become increasingly popular and powerful in recent years, achieving spectacular results on a range of natural language processing (NLP) tasks, such as machine translation, text summarization, question answering, and sentiment analysis. Well-known examples include BERT, GPT, T5, and XLNet.
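As a quick illustration of one of these tasks, the snippet below runs an off-the-shelf sentiment classifier. It is a minimal sketch assuming the Hugging Face transformers library is installed; the model it downloads is simply the library's default, not one tied to this article.

```python
from transformers import pipeline

# Build a ready-made sentiment-analysis pipeline; this downloads
# a default pretrained model on first use.
classifier = pipeline("sentiment-analysis")

print(classifier("Large language models are remarkably capable."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```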

Transformer architecture

LLMs are based on the transformer architecture, which is a novel way of modeling sequential data using self-attention mechanisms. Self-attention allows the model to learn the dependencies and relationships between different parts of the input and output sequences, without relying on recurrent or convolutional layers.
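To make that concrete, here is a minimal NumPy sketch of single-head scaled dot-product self-attention; the shapes and weight matrices are illustrative assumptions rather than values from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the input into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Score every position against every other position,
    # scaled by sqrt(d_k) as in the original transformer paper.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Each output position is a weighted mixture of all values.
    return softmax(scores) @ V

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

A full transformer extends this to multiple heads and wraps it with residual connections and layer normalization, but the core mixing step is the same.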

Transformer models can be divided into three types: encoder-only, decoder-only, and encoder-decoder models. Encoder-only models, such as BERT and XLNet, take an input sequence and produce a contextualized representation of it, which can be used for downstream tasks such as classification or extraction. Decoder-only models, such as GPT, generate text autoregressively, predicting one token at a time from the tokens before it. Encoder-decoder models, such as T5, take an input sequence and generate an output sequence, which makes them a natural fit for tasks such as translation and summarization.
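The difference shows up directly in how these families are used. Here is a hedged sketch with the Hugging Face transformers library (the checkpoint names are just common public models, chosen for illustration): the encoder-only model returns hidden states, while the encoder-decoder model generates new text.

```python
from transformers import AutoModel, AutoTokenizer, T5ForConditionalGeneration

# Encoder-only (BERT): encode text into contextualized vectors.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
enc = bert_tok("LLMs are powerful.", return_tensors="pt")
hidden = bert(**enc).last_hidden_state  # shape: (batch, tokens, 768)

# Encoder-decoder (T5): map an input sequence to an output sequence.
t5_tok = AutoTokenizer.from_pretrained("t5-small")
t5 = T5ForConditionalGeneration.from_pretrained("t5-small")
ids = t5_tok("translate English to German: The house is small.",
             return_tensors="pt")
out = t5.generate(**ids, max_new_tokens=20)
print(t5_tok.decode(out[0], skip_special_tokens=True))
```

The hidden states from the encoder-only model feed a task head (a classifier, for instance), whereas the encoder-decoder model produces text directly.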

Written by Phil Siarri

Founder of Nuadox | Tech & Innovation Commentator | Digital Strategist | MTL | More about me > linktr.ee/philsiarri
