Core Concepts

LLM

Large Language Model — a deep learning model trained on vast text data that can understand and generate human-like language.

Full Definition

A Large Language Model (LLM) is a type of deep learning model, typically with billions of parameters, trained on massive text datasets to understand and generate human language. LLMs (such as GPT-4, Claude, Gemini, and Llama) form the cognitive core of modern AI agents, providing the reasoning, language understanding, and generation capabilities that enable autonomous operation. LLMs work by predicting the next token in a sequence based on patterns learned during training, which makes them powerful but also susceptible to hallucinations, biases encoded in training data, and manipulation through prompt injection. In enterprise deployments, LLMs are typically accessed through APIs and wrapped with governance layers that monitor their inputs, outputs, and behavior to ensure safe, compliant, and reliable operation.
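The next-token prediction loop described above can be illustrated with a toy sketch. This is not how a real LLM is implemented (real models learn billions of parameters with neural networks over huge vocabularies); the hand-written bigram table and token names below are purely illustrative, showing only the sample-one-token-at-a-time decode loop.

```python
import random

# Toy "language model": each token maps to possible next tokens with counts.
# A real LLM learns these probabilities from training data; this table is
# hand-written solely to demonstrate the sampling loop.
BIGRAMS = {
    "<s>":      {"the": 3, "a": 1},
    "the":      {"agent": 2, "model": 2},
    "a":        {"model": 1},
    "agent":    {"acts": 1, "</s>": 1},
    "model":    {"predicts": 2, "</s>": 1},
    "acts":     {"</s>": 1},
    "predicts": {"tokens": 1},
    "tokens":   {"</s>": 1},
}

def sample_next(token, rng):
    """Sample the next token in proportion to its learned count."""
    candidates = BIGRAMS[token]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(rng, max_len=10):
    """Generate text one token at a time, like an LLM's decode loop."""
    token, out = "<s>", []
    for _ in range(max_len):
        token = sample_next(token, rng)
        if token == "</s>":  # end-of-sequence token stops generation
            break
        out.append(token)
    return " ".join(out)

print(generate(random.Random(0)))
```

Because generation is sampling from a probability distribution rather than lookup of verified facts, plausible-but-wrong continuations (hallucinations) are a natural failure mode, which is why the paragraph above stresses output monitoring.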