AI Glossary
The definitive guide to the terminology powering the Artificial Intelligence revolution.
A
Agent Swarms
A system of multiple autonomous AI agents collaborating to solve complex problems by mimicking biological hive minds.
Agentic Reasoning
A higher-order capability where AI models don't just predict the next token, but use internal loops to verify, correct, and plan their logic before outputting a final answer.
Agentic Workflow
A process where AI models autonomously plan, execute, and iterate on tasks using tools rather than just responding to prompts.
AGI (Artificial General Intelligence)
A theoretical AI system capable of accomplishing any intellectual task that a human being can do.
Alignment
The field of AI safety focused on ensuring AI systems' goals and behaviors match human values and intent.
B
Bayesian Inference
A statistical method for updating the probability of a hypothesis as more evidence becomes available, widely used in machine learning for reasoning under uncertainty.
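A minimal worked example of a Bayesian update, using Bayes' rule P(H|E) = P(E|H)·P(H)/P(E). The spam-filter numbers are illustrative assumptions, not measured data.

```python
def bayes_update(prior, likelihood, evidence):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

prior = 0.20              # P(spam) before seeing any words
p_free_given_spam = 0.60  # P(word "free" appears | spam)
p_free = 0.20             # P(word "free" appears) across all email

posterior = bayes_update(prior, p_free_given_spam, p_free)
# posterior ≈ 0.6 — one piece of evidence tripled our belief
```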
C
Chain of Thought (CoT)
A prompting technique encouraging LLMs to break down reasoning into intermediate steps to improve logic accuracy.
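A sketch of how such a prompt is typically assembled: appending a cue like "Let's think step by step" invites the model to emit intermediate reasoning before its final answer. The question text is a hypothetical example.

```python
# Build a chain-of-thought prompt by appending a reasoning cue.
question = "A train travels 60 km in 45 minutes. What is its speed in km/h?"

cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step."
)
```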
Constitutional AI
A method pioneered by Anthropic where a model is trained to follow a set of high-level principles (a 'constitution') to self-correct and maintain safety without constant human supervision.
Context Window
The limit on the amount of text (tokens) an AI model can process and remember in a single conversation.
E
Embeddings
Numerical representations (vectors) of text that capture semantic meaning, allowing computers to understand relationships between words.
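A toy illustration of how embeddings encode similarity: vectors pointing in similar directions have high cosine similarity. Real embeddings have hundreds or thousands of dimensions; these 3-dimensional vectors are made-up stand-ins.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat    = [0.90, 0.10, 0.00]   # illustrative toy vectors
kitten = [0.85, 0.15, 0.05]
car    = [0.00, 0.20, 0.95]

# "cat" is closer in meaning to "kitten" than to "car".
assert cosine(cat, kitten) > cosine(cat, car)
```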
Embodied AI
AI systems that are integrated into physical bodies (like robots or drones) and can interact with and learn from the physical world in real-time.
F
Fine-tuning
The process of training a pre-trained model on a smaller, specific dataset to specialize it for a particular task.
Foundation Model
A large-scale model trained on vast data that can be adapted (e.g., via fine-tuning) to a wide range of downstream tasks.
H
Hallucination
When an AI model generates incorrect, nonsensical, or unverifiable information but presents it as fact.
I
In-Context Learning (ICL)
The ability of an AI model to learn to perform a task simply by seeing a few examples within its prompt, without any permanent changes to its weights.
Inference
The stage where a trained model processes live data to make predictions or generate content.
L
Liquid Neural Networks
A new type of neural network architecture that can adapt its parameters continuously over time, making it highly efficient for time-series and real-time sensor data.
LLM (Large Language Model)
A deep learning model that can recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive datasets.
LoRA (Low-Rank Adaptation)
A technique for fine-tuning large models efficiently by updating only a small subset of parameters.
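A dependency-free sketch of the core LoRA idea: instead of updating a full d×d weight matrix W, train two thin matrices A (d×r) and B (r×d) with rank r much smaller than d, so the effective weight is W + A·B. The numbers here are arbitrary illustrations.

```python
d, r = 4, 1  # model dimension and low rank (r << d in practice)

# Frozen pre-trained weight: the identity matrix as a stand-in.
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
A = [[0.1] for _ in range(d)]   # d x r, trainable
B = [[0.2, 0.0, 0.0, 0.0]]      # r x d, trainable

# Effective weight = W + A @ B.
delta = [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(d)]
         for i in range(d)]
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

# Trainable parameters: d*r + r*d = 8, versus d*d = 16 for full fine-tuning.
```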
M
Model Collapse
A degenerative process where AI models trained on AI-generated data progressively lose quality and diversity.
Model Merging
The technique of combining the weights of two or more fine-tuned models to create a 'hybrid' model that retains the capabilities of all its predecessors.
MoE (Mixture of Experts)
An architecture that uses multiple specialized sub-models (‘experts’) and activates only the relevant ones for each query to save compute.
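A sketch of sparse top-k gating, the routing step at the heart of MoE: each token activates only the k highest-scoring experts, leaving the rest idle. The scores below are hard-coded stand-ins for a learned gating network.

```python
def top_k_experts(gate_scores, k=2):
    """Return the indices of the k highest-scoring experts."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return sorted(ranked[:k])

scores = [0.10, 0.70, 0.05, 0.15]  # one gate score per expert, for one token
active = top_k_experts(scores, k=2)
# active == [1, 3]: experts 1 and 3 process this token; 0 and 2 are skipped.
```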
Multimodal AI
AI capable of processing and generating multiple media types simultaneously (text, images, audio, video).
P
Parameters
The internal variables learned by the model during training; roughly equivalent to the ‘brain cells’ of the AI.
Prompt Engineering
The art of crafting inputs (prompts) to guide Generative AI models to produce optimal outputs.
R
RAG (Retrieval-Augmented Generation)
Enhancing LLM responses by retrieving relevant data from external sources before generating an answer.
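A minimal RAG sketch: retrieve the most relevant document, then prepend it to the prompt so the model answers from that context. For simplicity, retrieval here is naive keyword overlap; a real system would use embeddings and a vector database. The corpus is hypothetical.

```python
corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
]

def retrieve(query, docs):
    """Pick the doc sharing the most words with the query (toy retriever)."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(docs, key=overlap)

query = "How tall is the Eiffel Tower?"
context = retrieve(query, corpus)
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
```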
RLHF (Reinforcement Learning from Human Feedback)
Training models by using human feedback to reward desired behaviors and punish undesired ones.
S
Semantic Search
Searching based on the meaning and intent of phrases rather than just keyword matching.
Sparse Autoencoders (SAEs)
A technical tool used in mechanistic interpretability to identify and extract the 'features' or concepts being used inside a neural network's hidden layers.
State Space Models (SSM)
An alternative to the Transformer architecture (like Mamba) that can process extremely long sequences of data with linear complexity, solving the 'quadratic memory' problem.
Synthetic Data
Data artificially generated by AI models rather than collected from real-world events.
T
Temperature
A parameter controlling the randomness of an AI's output: higher temperature = more creative; lower = more deterministic.
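A sketch of how temperature works under the hood: logits are divided by T before the softmax, so T < 1 sharpens the distribution (more deterministic) and T > 1 flattens it (more random). The logits are arbitrary example values.

```python
import math

def softmax_with_temperature(logits, T):
    """Convert logits to probabilities, scaled by temperature T."""
    scaled = [x / T for x in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, T=0.5)  # sharp, near-greedy
hot  = softmax_with_temperature(logits, T=2.0)  # flat, more random
assert cold[0] > hot[0]  # low T concentrates mass on the top token
```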
Token
The basic unit of text for an LLM (roughly 0.75 words). Costs and limits are often measured in tokens.
Transformer
The neural network architecture introduced by Google in 2017 that serves as the backbone for modern LLMs like GPT and Claude.
V
Vector Database
A database optimized for storing and querying high-dimensional vectors (embeddings), essential for RAG applications.
W
World Models
AI systems trained to understand and simulate physical reality, allowing them to predict how objects will move and interact in 3D space (e.g. OpenAI Sora or Google Genie).
Z
Zero-shot Learning
The ability of a model to perform a task without being given any specific examples (shots) in the prompt.