Zero-Shot Learning
Prompt Engineering: An LLM's ability to perform a task described in natural language without any prior examples or demonstrations in the prompt. Zero-shot capability emerges from large-scale pre-training: the model generalizes from patterns learned during training to new, unseen tasks. Modern frontier models (GPT-4, Claude 3.5) are highly capable zero-shot performers across diverse tasks, though few-shot examples still improve performance on specialized domains.
Zero-Day AI Exploits
Ethics & Safety: Novel, previously unknown attack vectors targeting AI systems: prompt injections that bypass guardrails, adversarial inputs that cause misclassification, data poisoning attacks that corrupt model behavior, or jailbreaks that haven't been patched by safety training. As AI deployment scales, AI-specific zero-day vulnerabilities represent a growing cybersecurity frontier, prompting AI red-teaming programs at major labs (Anthropic, OpenAI, Google DeepMind) and emerging AI security startups.
Zero Temperature Decoding
LLM: A generation setting where temperature is set to 0, so the model always chooses the highest-probability token at each step (greedy decoding). This yields largely deterministic, repeatable outputs, which is useful for structured extraction, code generation, and regression-style evaluation tasks, though minor nondeterminism can remain from floating-point and batching effects.
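A minimal sketch of the greedy choice that temperature 0 collapses to; `toy_logits` is a stand-in for a real model's next-token scores, not any actual API.

```python
# Zero-temperature (greedy) decoding over a toy next-token
# distribution: sampling collapses to a deterministic argmax.
def greedy_next_token(logits: dict[str, float]) -> str:
    # The token with the highest logit also has the highest
    # softmax probability, so no sampling is needed.
    return max(logits, key=logits.get)

toy_logits = {"Paris": 9.1, "London": 4.2, "Berlin": 3.7}
print(greedy_next_token(toy_logits))  # -> Paris
```

Running the same call repeatedly always returns the same token, which is why temperature 0 is favored for regression-style evaluation.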
Zero-Latency Inference (Target)
Generative AI: A practical engineering target in AI product design where responses feel instant to users through aggressive optimization, token streaming, caching, and lightweight routing. While literal zero latency is impossible, reducing perceived latency is critical for conversational UX quality.
Zero Trust (AI Security)
Infrastructure: A security model that assumes no implicit trust between users, services, models, or data stores. In AI systems, zero-trust architecture enforces strict identity verification, least-privilege access, and continuous authorization checks for model endpoints, vector stores, and tool integrations.
Z-Score Normalization
ML Fundamentals: A feature scaling method that transforms values to have mean 0 and standard deviation 1. Standardization improves optimization stability and convergence for many machine learning models, especially when input features are on different numeric scales.
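The transformation is z = (x - mean) / std; a minimal stdlib sketch:

```python
from statistics import mean, pstdev

def z_score_normalize(values):
    """Rescale values to mean 0 and (population) std 1."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

scaled = z_score_normalize([10, 20, 30, 40, 50])
# `scaled` now has mean 0 and population standard deviation 1
```

In practice the mean and std are computed on the training split only and reused for validation and test data, to avoid leakage.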
Zipf's Law (Language Data)
ML Fundamentals: An empirical law stating that word frequency in natural language is inversely proportional to rank. Zipfian distributions shape tokenizer design, long-tail vocabulary behavior, and sampling efficiency in language model training corpora.
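The rank-frequency pairs the law describes are easy to compute; this toy corpus is far too small to be genuinely Zipfian, but it shows the bookkeeping (on a real corpus, frequency times rank stays roughly constant):

```python
from collections import Counter

# Tiny illustrative corpus; real Zipfian behavior needs large text.
text = ("the cat sat on the mat the cat ran " * 50).split()
ranked = Counter(text).most_common()  # sorted by descending frequency
for rank, (word, count) in enumerate(ranked, start=1):
    print(rank, word, count)
```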
Zebra Prompting
Prompt Engineering: An informal prompting pattern where contrasting examples are alternated to force clearer model boundaries (e.g., correct vs. incorrect outputs). This style can improve consistency for classification and policy-constrained generation tasks.
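A hypothetical helper illustrating the alternating structure; the function name and example pair are illustrative, not a standard API:

```python
# Interleave correct/incorrect example pairs into a few-shot prompt.
def zebra_prompt(task: str, pairs: list[tuple[str, str]]) -> str:
    lines = [task]
    for good, bad in pairs:
        lines.append(f"CORRECT: {good}")
        lines.append(f"INCORRECT: {bad}")
    lines.append("Now answer correctly:")
    return "\n".join(lines)

prompt = zebra_prompt(
    "Label the sentiment of each review.",
    [("'Great phone' -> positive", "'Great phone' -> negative")],
)
print(prompt)
```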
Zonal Agent Routing
Agents: A deployment strategy that routes agent requests by region, tenant, or compliance boundary to specific infrastructure zones. Zonal routing reduces latency and helps satisfy data residency requirements in enterprise AI systems.
Zero-Retention Mode
Infrastructure: A provider configuration where prompts and outputs are not stored for long-term model training or analytics. Zero-retention modes are used by regulated teams handling sensitive workloads and strict internal privacy requirements.
Zero-Knowledge Proofs (AI Provenance)
Ethics & Safety: Cryptographic methods that allow one party to prove a claim without revealing the underlying secret. In AI ecosystems, zero-knowledge proofs are explored for model provenance, secure identity, and verifiable claims about training or inference without exposing sensitive data.
Zoom-Out Prompting
Generative AI: A prompt technique that first asks the model for higher-level strategy before details. By expanding context and goals up front, zoom-out prompting can produce more coherent plans and reduce local optimization errors in complex tasks.
Z-Buffer (Vision/Graphics)
Computer Vision: A depth-buffering technique used in graphics pipelines to determine visible surfaces by storing depth values per pixel. In vision-adjacent workflows, depth maps and z-buffer concepts support 3D scene reconstruction and synthetic data generation.
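The core rule can be sketched in a few lines: per pixel, keep the fragment with the smallest depth (closest to the camera). The fragment tuple layout here is an assumption for illustration.

```python
# Toy z-buffer: fragments are (x, y, depth, color) tuples.
def rasterize(fragments, width, height):
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:          # nearer surface wins the pixel
            depth[y][x] = z
            color[y][x] = c
    return color

frame = rasterize([(0, 0, 5.0, "red"), (0, 0, 2.0, "blue")], 1, 1)
print(frame[0][0])  # the nearer "blue" fragment occludes "red"
```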
Zig-Zag Optimization
ML Fundamentals: A colloquial description of unstable gradient updates that oscillate around minima instead of converging smoothly. Common fixes include adaptive optimizers, momentum, better feature scaling, and improved learning-rate schedules.
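The oscillation is easy to reproduce on an ill-conditioned quadratic: a step size tuned for the shallow direction overshoots along the steep one, so that coordinate flips sign every step.

```python
# Gradient descent on f(x, y) = 0.5 * (x**2 + 25 * y**2).
# With lr = 0.07, each update scales y by (1 - 25*lr) = -0.75,
# so y alternates sign while slowly shrinking: the "zig-zag".
def grad(x, y):
    return x, 25.0 * y

x, y, lr = 5.0, 1.0, 0.07
ys = []
for _ in range(6):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy
    ys.append(y)
print(ys)  # alternating signs along the steep y direction
```

Momentum damps exactly this alternation, which is one reason it speeds convergence on poorly scaled problems.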
Zstandard (Zstd) Compression
Infrastructure: A high-performance compression algorithm widely used to package model artifacts, logs, and datasets. Zstd can reduce transfer times and storage costs for checkpoints, evaluation traces, and intermediate preprocessing outputs.
Zettelkasten-Style Agent Memory
Agents: A memory strategy inspired by linked-note systems, where an agent stores short atomic notes connected by references. This improves recall, traceability, and long-horizon planning compared to monolithic conversation histories.
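A minimal sketch of the idea under illustrative names (`Note`, `MemoryStore` are assumptions, not a known library): atomic notes keyed by id, linked by references, with breadth-first traversal for recall.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    note_id: str
    text: str
    links: list[str] = field(default_factory=list)

class MemoryStore:
    def __init__(self):
        self.notes: dict[str, Note] = {}

    def add(self, note: Note):
        self.notes[note.note_id] = note

    def recall(self, start: str, depth: int = 2) -> list[str]:
        """Follow links breadth-first to gather related notes."""
        seen, frontier = [], [start]
        for _ in range(depth):
            nxt = []
            for nid in frontier:
                if nid in self.notes and nid not in seen:
                    seen.append(nid)
                    nxt.extend(self.notes[nid].links)
            frontier = nxt
        return [self.notes[n].text for n in seen]

mem = MemoryStore()
mem.add(Note("a", "User prefers concise answers", links=["b"]))
mem.add(Note("b", "User works in fintech"))
print(mem.recall("a"))  # both linked notes surface together
```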
"Zero Bias" Claim (AI)
Ethics & Safety: A marketing claim that should be treated skeptically. Because all models inherit assumptions from data and objectives, practical fairness work focuses on measurable bias reduction, transparency, and continuous evaluation rather than absolute claims.
Zap-Based AI Automation
AI Tools & Apps: Workflow automation patterns where AI steps are embedded in trigger-action chains (often called zaps). These patterns connect LLM summarization, classification, and extraction to operational tools like CRMs, docs, and communication channels.
Zero-Shot Classification
LLM: A classification approach where a model predicts labels it was not explicitly trained on by leveraging semantic understanding and natural-language label descriptions. Useful for fast taxonomy creation when labeled datasets are limited.
"Zoom and Enhance" (AI Reality)
Generative AI: A phrase from media fiction often misapplied to AI imaging. Super-resolution models can improve perceptual detail, but they cannot reliably recover ground-truth information absent from the original signal.
Z-Pattern Prompt Layout
Prompt Engineering: A prompt structuring heuristic that places objective, constraints, context, and output format in a deliberate reading order to reduce ambiguity. Clear layout can materially improve output consistency in multi-requirement prompts.
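A hypothetical builder showing one such fixed reading order; the section names and example values are illustrative only:

```python
# Assemble a prompt with objective, constraints, context, and
# output format in a deliberate, labeled order.
def build_prompt(objective, constraints, context, output_format):
    return "\n\n".join([
        f"OBJECTIVE:\n{objective}",
        f"CONSTRAINTS:\n{constraints}",
        f"CONTEXT:\n{context}",
        f"OUTPUT FORMAT:\n{output_format}",
    ])

prompt = build_prompt(
    "Summarize the incident report.",
    "Max 3 bullet points; no speculation.",
    "Report text: <pasted report>",
    "Markdown bullet list.",
)
print(prompt)
```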
Z-Test (Model Evaluation)
ML Fundamentals: A statistical test used to evaluate whether observed differences in model metrics are likely due to chance under assumptions about variance and sample size. It helps teams avoid over-interpreting small benchmark improvements.
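A common instance for model evaluation is the two-proportion z-test on accuracies; a stdlib sketch with made-up benchmark numbers:

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in accuracy between models."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical benchmark: 87.0% vs 85.0% accuracy on 1000 items each.
z, p = two_proportion_z(870, 1000, 850, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Here a 2-point accuracy gap on 1000 examples per model is not significant at the usual 0.05 level, illustrating how easily small benchmark deltas can be noise.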