AI Glossary
The definitive guide to the terminology powering the Artificial Intelligence revolution.
A
Agent Swarms
A system of multiple autonomous AI agents collaborating to solve complex problems, inspired by the collective behavior of biological swarms.
Agentic Workflow
A process where AI models autonomously plan, execute, and iterate on tasks using tools rather than just responding to prompts.
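A minimal Python sketch of the plan-act-iterate loop; the `call_llm` helper and the tool set are hypothetical stand-ins, not a specific vendor API:

```python
# Minimal agentic loop sketch: the model picks a tool, we run it, and feed the
# observation back until it decides it is done.
def call_llm(messages):
    # Toy stand-in for a real chat-model call: always finishes immediately.
    return {"action": "finish", "answer": "(model answer would go here)"}

TOOLS = {
    "search": lambda query: f"(search results for {query!r})",
    "calculator": lambda expr: str(eval(expr)),  # toy example only, never eval untrusted input
}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_llm(messages)                            # model plans the next action
        if step["action"] == "finish":
            return step["answer"]
        observation = TOOLS[step["action"]](step["input"])   # execute the chosen tool
        messages.append({"role": "tool", "content": observation})  # iterate with the result
    return "stopped: step limit reached"

print(run_agent("What is 2 + 2?"))
```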
AGI (Artificial General Intelligence)
A theoretical AI system capable of accomplishing any intellectual task that a human being can do.
Alignment
The field of AI safety focused on ensuring AI systems' goals and behaviors match human values and intent.
C
Chain of Thought (CoT)
A prompting technique encouraging LLMs to break down reasoning into intermediate steps to improve logic accuracy.
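An illustration of the idea, showing the same question with and without an instruction to reason step by step (the wording is just one common pattern, not a fixed standard):

```python
question = ("A bat and a ball cost $1.10 together. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")

# Direct prompt: the model answers immediately.
direct_prompt = question

# Chain-of-thought prompt: ask for intermediate reasoning before the final answer.
cot_prompt = question + "\nLet's think step by step, then state the final answer."
```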
Context Window
The limit on the amount of text (tokens) an AI model can process and remember in a single conversation.
E
Embeddings
Numerical representations (vectors) of text that capture semantic meaning, allowing computers to understand relationships between words.
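A small sketch of how embeddings are compared, using cosine similarity over made-up vectors; real embeddings come from a model and have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar meaning."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings; a real embedding model produces these from text.
king  = np.array([0.9, 0.1, 0.8, 0.2])
queen = np.array([0.8, 0.2, 0.9, 0.3])
pizza = np.array([0.1, 0.9, 0.0, 0.7])

print(cosine_similarity(king, queen))  # high: related meanings
print(cosine_similarity(king, pizza))  # low: unrelated meanings
```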
F
Fine-tuning
The process of training a pre-trained model on a smaller, specific dataset to specialize it for a particular task.
Foundation Model
A large-scale model trained on vast data that can be adapted (e.g., via fine-tuning) to a wide range of downstream tasks.
H
Hallucination
When an AI model generates incorrect, nonsensical, or unverifiable information but presents it as fact.
I
Inference
The stage where a trained model processes live data to make predictions or generate content.
L
LLM (Large Language Model)
A deep learning model that can recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive datasets.
LoRA (Low-Rank Adaptation)
A technique for fine-tuning large models efficiently by updating only a small subset of parameters.
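A rough numerical sketch of the core idea: rather than updating a full weight matrix W, train two small low-rank matrices whose product is added to W (shapes, rank, and initialization here are illustrative):

```python
import numpy as np

d, r = 1024, 8                      # model dimension and LoRA rank (r << d)
W = np.random.randn(d, d)           # frozen pretrained weight matrix
A = np.random.randn(d, r) * 0.01    # trainable low-rank factor
B = np.zeros((r, d))                # trainable low-rank factor (starts at zero, so W is unchanged at first)

# Effective weight during fine-tuning; only A and B receive gradient updates.
W_adapted = W + A @ B

# Parameter savings: 2*d*r trainable values instead of d*d.
print(d * d, "full parameters vs", 2 * d * r, "LoRA parameters")
```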
M
Model Collapse
A degenerative process where AI models trained on AI-generated data progressively lose quality and diversity.
MoE (Mixture of Experts)
An architecture that uses multiple specialized sub-models (‘experts’) and activates only the relevant ones for each query to save compute.
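A toy sketch of top-k routing: a gating function scores the experts for each input and only the best-scoring ones run (real MoE layers do this per token inside the network):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

num_experts, d = 4, 8
experts = [np.random.randn(d, d) for _ in range(num_experts)]  # toy expert weights
gate = np.random.randn(d, num_experts)                         # toy router weights

def moe_layer(x, top_k=2):
    scores = softmax(x @ gate)              # router scores every expert for this input
    chosen = np.argsort(scores)[-top_k:]    # indices of the top-k experts
    # Only the chosen experts run; their outputs are mixed by the router scores.
    return sum(scores[i] * (x @ experts[i]) for i in chosen)

print(moe_layer(np.random.randn(d)).shape)
```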
Multimodal AI
AI capable of processing and generating multiple media types simultaneously (text, images, audio, video).
P
Parameters
The internal variables learned by the model during training; loosely analogous to the strength of connections in a brain.
Prompt Engineering
The art of crafting inputs (prompts) to guide Generative AI models to produce optimal outputs.
R
RAG (Retrieval-Augmented Generation)
Enhancing LLM responses by retrieving relevant data from external sources before generating an answer.
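A compact sketch of the retrieve-then-generate flow using brute-force similarity over toy embeddings; the `embed` and `generate` helpers are hypothetical stand-ins for a real embedding model and LLM:

```python
import numpy as np

documents = [
    "The Eiffel Tower is in Paris.",
    "Transformers were introduced in 2017.",
    "Tokyo is the capital of Japan.",
]

def embed(text):
    # Hypothetical stand-in for an embedding model: hash words into a fixed-size vector.
    vec = np.zeros(32)
    for word in text.lower().split():
        vec[hash(word) % 32] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query, k=1):
    scores = [float(embed(query) @ embed(doc)) for doc in documents]
    return [documents[i] for i in np.argsort(scores)[-k:]]

def generate(prompt):
    # Hypothetical stand-in for an LLM call.
    return f"(answer based on: {prompt!r})"

query = "Where is the Eiffel Tower?"
context = "\n".join(retrieve(query))
print(generate(f"Use this context:\n{context}\n\nQuestion: {query}"))
```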
RLHF (Reinforcement Learning from Human Feedback)
Fine-tuning models using human preference feedback: a reward model learned from human rankings rewards desired behaviors and discourages undesired ones.
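One way to illustrate the reward-model half of the pipeline: given a human-preferred response and a rejected one, a common pairwise loss pushes the reward for the preferred response above the rejected one (the scores below are made up):

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    # Pairwise (Bradley-Terry style) loss: small when the chosen response scores higher.
    return -np.log(1.0 / (1.0 + np.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(2.0, -1.0))  # small loss: reward model agrees with the human ranking
print(preference_loss(-1.0, 2.0))  # large loss: reward model disagrees
```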
S
Semantic Search
Searching based on the meaning and intent of phrases rather than just keyword matching.
Synthetic Data
Data artificially generated by AI models rather than collected from real-world events.
T
Temperature
A parameter controlling the randomness of an AI's output: higher temperatures produce more varied, creative text; lower temperatures make it more deterministic.
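A small sketch of where temperature enters: the model's next-token logits are divided by the temperature before the softmax, which sharpens or flattens the probability distribution it samples from:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.array(logits) / temperature
    e = np.exp(scaled - scaled.max())
    return e / e.sum()

logits = [2.0, 1.0, 0.1]                      # toy next-token scores
print(softmax_with_temperature(logits, 0.2))  # sharp: almost always the top token
print(softmax_with_temperature(logits, 1.5))  # flat: more varied sampling
```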
Token
The basic unit of text for an LLM (roughly 0.75 words). Costs and limits are often measured in tokens.
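For example, OpenAI's open-source `tiktoken` library can show how a sentence splits into tokens; exact counts vary by tokenizer and model:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
tokens = enc.encode("Tokenization splits text into sub-word pieces.")
print(len(tokens), tokens)                  # token count and their integer IDs
```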
Transformer
The neural network architecture introduced in Google's 2017 paper "Attention Is All You Need" that serves as the backbone for modern LLMs like GPT and Claude.
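The core operation inside a Transformer is scaled dot-product attention; a minimal single-head sketch with toy sizes:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

seq_len, d_k = 4, 8                     # toy sequence length and head size
Q = np.random.randn(seq_len, d_k)
K = np.random.randn(seq_len, d_k)
V = np.random.randn(seq_len, d_k)
print(attention(Q, K, V).shape)         # (4, 8): one output vector per position
```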
V
Vector Database
A database optimized for storing and querying high-dimensional vectors (embeddings), essential for RAG applications.
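At its simplest, a vector database answers "which stored vectors are closest to this query vector?"; a brute-force sketch of that query (real systems use approximate nearest-neighbor indexes to scale):

```python
import numpy as np

# Toy "database": each row is a stored embedding, with an ID per row.
ids = ["doc-1", "doc-2", "doc-3"]
vectors = np.random.randn(3, 64)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # normalize for cosine similarity

def query(q, top_k=2):
    q = q / np.linalg.norm(q)
    scores = vectors @ q                    # cosine similarity against every stored row
    best = np.argsort(scores)[::-1][:top_k]
    return [(ids[i], float(scores[i])) for i in best]

print(query(np.random.randn(64)))
```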
Z
Zero-shot Learning
The ability of a model to perform a task without being given any specific examples (shots) in the prompt.
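For contrast, a zero-shot prompt gives only the instruction, while a few-shot prompt includes worked examples; the wording here is illustrative:

```python
# Zero-shot: no examples, just the task.
zero_shot = ("Classify the sentiment of this review as positive or negative:\n"
             "'I loved it.'")

# Few-shot (for contrast): the same task with a couple of worked examples first.
few_shot = (
    "Review: 'Terrible service.' -> negative\n"
    "Review: 'Great food!' -> positive\n"
    "Review: 'I loved it.' ->"
)
```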