RESOURCES

AI Glossary

The terms and acronyms you'll actually encounter—organized from foundational concepts to specialized terms, with explanations of why each matters.

Reference guide

This glossary covers the terminology you'll see in product documentation, news articles, and conversations about AI. Each entry explains not just what something is, but why it matters for practical use.


Foundational Concepts

AI (Artificial Intelligence)
The broad field of creating systems that can perform tasks typically requiring human intelligence—learning, reasoning, problem-solving, perception. When people say "AI" today, they usually mean generative AI or LLMs specifically, but the term encompasses everything from chess engines to self-driving cars.
ML (Machine Learning)
A subset of AI where systems learn patterns from data rather than following explicit programming rules. Instead of telling a computer "if X, then Y," you show it thousands of examples and let it figure out the patterns. Most modern AI is built on ML techniques.
DL (Deep Learning)
A subset of machine learning using neural networks with multiple layers (hence "deep"). These architectures excel at finding complex patterns in large datasets—the breakthrough that enabled modern AI capabilities like image recognition and natural language understanding.
GenAI (Generative AI)
AI systems that create new content—text, images, audio, video, code—rather than just analyzing or classifying existing data. ChatGPT, Claude, Midjourney, and DALL-E are all generative AI tools. This is the category driving most current AI adoption.
AGI (Artificial General Intelligence)
A hypothetical AI system with human-level reasoning across all domains—not just narrow expertise in specific tasks. Current AI systems are "narrow AI" (excellent at specific things, unable to generalize). AGI remains a research goal, not a current reality. Claims about AGI timelines vary wildly and should be viewed skeptically.
Reasoning Models
A new category of AI models designed to "think" through problems step-by-step before responding. OpenAI's o1 and o3 models, Claude's extended thinking mode, and Google's Gemini 2.0 Flash Thinking exemplify this approach. They trade speed for accuracy on complex tasks by using more compute at inference time.
Inference-Time Compute (Test-Time Compute)
The computational resources used when a model generates a response, as opposed to during training. Reasoning models use significantly more inference-time compute to "think longer" about difficult problems. A major 2025 research direction—some argue it's as important as scaling training data.

Language Models and Architecture

LLM (Large Language Model)
Neural networks trained on massive text datasets to understand and generate human language. "Large" refers to billions of parameters (adjustable values the model learned during training). GPT-4, Claude, Gemini, and Llama are all LLMs. These models power most text-based AI tools you'll encounter.
SLM (Small Language Model)
Smaller, more efficient language models designed to run on limited hardware or for specific tasks. Typically a few billion parameters or fewer, versus hundreds of billions for frontier models. Useful when you need speed, privacy (local processing), or can't afford cloud API costs.
VLM (Vision Language Model)
Models that process both images and text, understanding relationships between visual and linguistic information. GPT-4 with vision, Claude's image analysis, and Gemini are VLMs. Enable features like describing images, analyzing charts, or discussing visual content.
Transformer
The neural network architecture underlying virtually all modern LLMs. Introduced in 2017, it uses "self-attention" to process relationships between all parts of input text simultaneously rather than sequentially. The "T" in GPT, BERT, and many other model names stands for Transformer.
GPT (Generative Pre-trained Transformer)
OpenAI's family of language models—GPT-3, GPT-4, etc. The name describes the architecture: generative (creates text), pre-trained (learned from massive data before fine-tuning), transformer (the underlying architecture). Often used generically to mean any LLM, though technically it's OpenAI's product line.
BERT (Bidirectional Encoder Representations from Transformers)
A Google model architecture that reads text in both directions simultaneously—understanding context from what comes before and after each word. Primarily used for understanding tasks (classification, search) rather than generation. The architecture behind much of Google Search's language understanding.
Parameters
The adjustable numerical values within a neural network that the model learns during training. More parameters generally mean more capacity to learn patterns, but also more computational cost. GPT-4 reportedly has over a trillion parameters; Claude and other frontier models are believed to be of similar scale.
Weights
Essentially synonymous with parameters in common usage—the numerical values that define how a model processes information. "Model weights" refers to the complete set of learned parameters that make up a trained model.

How You Interact with AI

Prompt
Any input you give to an AI model—a question, instruction, document, or combination. The quality and structure of your prompt significantly affects output quality. "Prompting" has become a skill category unto itself.
System Prompt
Instructions set by developers that establish an AI's baseline behavior, persona, or constraints—typically invisible to end users. When you use ChatGPT, a system prompt tells it to be helpful and follow safety guidelines. System prompts shape the AI's "personality" and capabilities within a given application.
Context Window
The maximum amount of text an LLM can process at once—both your input and its output combined. Measured in tokens (roughly 3/4 of a word). GPT-4 Turbo has a 128K context window (~100,000 words); Claude's standard context is 200K tokens, with some models supporting up to 1M tokens. Larger context windows enable working with longer documents and conversations.
Token
The basic unit LLMs use to process text—typically word fragments rather than whole words. "Tokenization" converts text into these chunks. Understanding tokens matters for: context window limits, API pricing (often per-token), and why character counts sometimes behave strangely (the model doesn't "see" individual characters).
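To see tokenization concretely, here's a minimal sketch using OpenAI's open-source tiktoken library. Token boundaries vary by model, so treat the output as illustrative:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
tokens = enc.encode("Tokenization splits text into subword chunks.")

print(len(tokens))                          # token count, not word count
print([enc.decode([t]) for t in tokens])    # the individual chunks the model "sees"
```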
Temperature
A setting controlling how "creative" or "random" model outputs are. Low temperature (0-0.3) produces more predictable, conservative responses; high temperature (0.7-1.0) produces more varied, creative outputs. Useful for different tasks—low for factual retrieval, higher for creative writing.
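To show where this setting lives in practice, here's a hedged sketch using Anthropic's Python SDK; the model name is a placeholder, so check current documentation:

```python
# pip install anthropic — assumes ANTHROPIC_API_KEY is set in your environment
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use a current model name
    max_tokens=300,
    temperature=0.2,  # low: predictable; raise toward 1.0 for creative work
    messages=[{"role": "user", "content": "Summarize RLHF in two sentences."}],
)
print(response.content[0].text)
```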
Hallucination
When an AI generates plausible-sounding but factually incorrect information with apparent confidence. Named because the model "sees" patterns that aren't there. A fundamental limitation of current LLMs—they optimize for coherent language, not truth. Always verify important facts.
Tool Use (Function Calling)
The ability for AI models to interact with external tools, APIs, and services. Instead of just generating text, the model can search the web, run code, query databases, or control other software. Enables AI to take actions, not just provide information. Foundation for AI agents.
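Providers differ in the details, but a tool definition is typically a JSON Schema description of a function's inputs. A sketch in the Anthropic API's format—get_weather is a hypothetical tool:

```python
# The model never runs get_weather itself: it returns a structured request,
# your code executes the function, and you send the result back.
weather_tool = {
    "name": "get_weather",            # hypothetical tool
    "description": "Get current weather for a city.",
    "input_schema": {                 # JSON Schema for the arguments
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
# Passed as: client.messages.create(..., tools=[weather_tool])
```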
MCP (Model Context Protocol)
Anthropic's open protocol that standardizes how AI applications connect to external data sources and tools. Think of it as a universal adapter—instead of building custom integrations for each tool, MCP provides a common interface. Gaining adoption across AI tools in 2025.
Artifacts
Interactive outputs that Claude can create—code, documents, diagrams, or applications—that appear in a separate panel and can be edited, copied, or downloaded. Enables Claude to produce working prototypes, not just descriptions. A key feature for vibe coding and rapid prototyping.
Extended Thinking
Claude's reasoning mode where the model shows its step-by-step thinking process before providing a final answer. Uses more inference-time compute to work through complex problems. Particularly useful for math, logic, and multi-step reasoning tasks.

Prompting Techniques

Zero-shot Prompting
Asking an AI to perform a task without providing examples—just instructions. "Translate this to French: Hello, how are you?" The model applies its training directly without additional context.
Few-shot Prompting (In-Context Learning)
Providing several examples of the desired input/output format before your actual request. The model learns the pattern from your examples and applies it. Often dramatically improves performance on structured tasks.
Chain-of-Thought (CoT) Prompting
Asking the model to show its reasoning step-by-step rather than jumping to conclusions. Adding "Let's think through this step by step" can significantly improve performance on complex reasoning tasks. The model's explicit reasoning also makes errors easier to catch.
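To make the three techniques above concrete, here are illustrative prompt strings (the contents are invented examples):

```python
zero_shot = "Translate this to French: Hello, how are you?"

few_shot = """Classify the sentiment as positive or negative.
Review: The staff was friendly and efficient. -> positive
Review: Waited two hours and left without answers. -> negative
Review: The new portal made scheduling painless. ->"""

chain_of_thought = (
    "A clinic sees 24 patients a day, 5 days a week. "
    "How many patient visits in a 4-week month? Let's think through this step by step."
)
```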
Prompt Engineering
The practice of systematically designing and refining prompts to get better AI outputs. Involves understanding how models interpret instructions, what context helps, and how to structure requests for optimal results. More art than science, but with learnable patterns.
Prompt Chaining
Breaking complex tasks into sequential steps, where each prompt builds on previous outputs. Instead of one massive prompt, you guide the model through a workflow. Reduces errors and gives you checkpoints to verify quality.
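A minimal chaining sketch—call_model() is a hypothetical stand-in for whichever API or local model you use:

```python
def call_model(prompt: str) -> str:
    raise NotImplementedError  # swap in your provider's API call

transcript = "...a long meeting transcript..."

# Each step builds on the previous output, with checkpoints you can inspect.
decisions = call_model(f"List the key decisions in this transcript:\n{transcript}")
draft = call_model(f"Write a one-paragraph summary email from these decisions:\n{decisions}")
review = call_model(
    f"Does this email misstate any decision? Answer yes/no with reasons.\n"
    f"Email:\n{draft}\n\nDecisions:\n{decisions}"
)
```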

Model Training and Customization

Pre-training
The initial, massive training phase where a model learns general language patterns from vast text datasets. Pre-training gives models their foundational capabilities. OpenAI, Anthropic, Google, and others invest millions in pre-training runs that take months on thousands of GPUs.
Fine-tuning
Additional training on a smaller, task-specific dataset to adapt a pre-trained model for particular uses. A hospital might fine-tune a model on medical records; a company might fine-tune on their internal documentation. Less expensive than pre-training from scratch.
RLHF (Reinforcement Learning from Human Feedback)
A training technique where humans rate model outputs, and these ratings guide further learning. Used to align models with human preferences—making them more helpful, less harmful, and more honest. A key technique that made ChatGPT dramatically more usable than its predecessors.
Alignment
The goal of making AI systems behave according to human values and intentions. "Misaligned" AI might technically follow instructions while violating their spirit, or optimize for metrics that don't match what humans actually want. Central concern in AI safety research.
LoRA (Low-Rank Adaptation)
A technique for fine-tuning large models efficiently by training a small set of adapter weights rather than modifying the entire model. Makes customization feasible for organizations without massive computing budgets.
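A sketch using Hugging Face's peft library, which implements LoRA. The model ID is just an example, and the target module names are typical for Llama-style models but vary by architecture:

```python
# pip install peft transformers
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # example model
config = LoraConfig(
    r=8,                                  # rank of the low-rank adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections; varies by architecture
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # a small fraction of the full model
```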
RAG (Retrieval-Augmented Generation)
A technique where the model retrieves relevant information from a knowledge base before generating responses. Rather than relying solely on what the model "memorized" during training, RAG lets it access current, specific data. Powers many enterprise AI applications and reduces hallucinations.
Embeddings
Numerical representations of text that capture semantic meaning—similar concepts have similar embeddings. Enables semantic search (finding relevant content by meaning, not just keywords) and are fundamental to RAG systems.
Vector Database
Databases optimized for storing and searching embeddings. When you ask an AI about your company's documents, a vector database helps find relevant passages quickly. Pinecone, Weaviate, and Chroma are common examples.
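The three entries above fit together into a retrieval pipeline. A toy sketch with NumPy—embed() is a hypothetical stand-in for any embedding model, and the two in-memory "documents" replace what a real vector database would store:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # swap in a real embedding model or API

docs = ["Visiting hours are 9am to 8pm.", "Parking is validated at the front desk."]
doc_vecs = np.array([embed(d) for d in docs])   # computed and stored ahead of time

query = "When can family visit?"
q = embed(query)

# Cosine similarity: similar meanings yield similar vectors, hence high scores.
scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
context = docs[int(np.argmax(scores))]

prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
# ...then send the prompt to the LLM as usual.
```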
RLAIF (Reinforcement Learning from AI Feedback)
Similar to RLHF, but using AI-generated feedback instead of human ratings to guide training. More scalable than human feedback. Anthropic's Constitutional AI uses this approach—Claude evaluates its own outputs against principles.
DPO (Direct Preference Optimization)
A simpler alternative to RLHF that directly optimizes models on preference data without needing a separate reward model. Increasingly popular for fine-tuning because it's more stable and computationally efficient.
Distillation
Transferring knowledge from a large "teacher" model to a smaller "student" model. The smaller model learns to mimic the larger model's outputs, achieving similar performance at lower cost. How many efficient local models are created.

AI Applications and Agents

NLP (Natural Language Processing)
The field of AI focused on enabling computers to understand, interpret, and generate human language. LLMs represent the current state-of-the-art in NLP. Encompasses tasks like translation, summarization, sentiment analysis, and question-answering.
NLU (Natural Language Understanding)
A subset of NLP specifically focused on comprehension—extracting meaning, intent, and entities from text. Distinguished from generation; NLU is about understanding what someone means.
Chatbot
An AI system designed for conversational interaction. Modern chatbots (ChatGPT, Claude, Gemini) use LLMs for open-ended conversation; older chatbots used simpler rules or intent-matching. The interface through which most people experience AI.
AI Agent
An AI system that can take actions autonomously—browsing the web, executing code, calling APIs, managing files—rather than just generating text. The frontier of current AI development. Agents chain together planning, tool use, and execution to accomplish complex goals.
Agentic AI
AI systems designed for autonomous, multi-step task completion with minimal human intervention. More proactive than traditional assistants—they break down goals, plan approaches, and adapt to obstacles. Claude's computer use feature and similar capabilities represent early agentic AI.
MoE (Mixture of Experts)
An architecture where multiple specialized "expert" sub-networks activate selectively based on the input. Allows models to be larger and more capable while keeping computation costs manageable—not every expert activates for every query.
Multimodal
AI systems that process multiple types of input—text, images, audio, video. GPT-4V, Claude's image understanding, and Gemini are multimodal. The trend is toward models that understand the world through multiple "senses" like humans do.
Vibe Coding
Building software by describing what you want in natural language and letting AI generate the code—without necessarily reading or understanding the code yourself. Coined by Andrej Karpathy in February 2025 and named Collins Dictionary's 2025 Word of the Year. Tools like Replit, Lovable, and Claude Code enable this approach. See our full module on vibe coding.
Custom GPTs / Gems / Projects
Customizable AI assistants created by users for specific purposes. OpenAI calls them "Custom GPTs," Google calls them "Gems," and Anthropic uses "Projects." You define instructions, upload reference documents, and create a specialized assistant without coding. Great for creating patient-facing tools or specialty-specific assistants.
Computer Use
AI capability to control desktop interfaces—clicking, typing, navigating applications like a human would. Claude's computer use feature and similar tools enable AI to interact with any software, not just API-connected services. An early but rapidly developing capability.

Evaluation and Safety

Eval (Evaluation)
Systematic testing of AI model performance on defined benchmarks or tasks. Evals help compare models, track improvements, and identify weaknesses. "Running evals" is standard practice in AI development.
Benchmark
Standardized tests for comparing AI model capabilities—MMLU (general knowledge), HumanEval (coding), HellaSwag (common sense reasoning). Useful for rough comparisons but can be gamed and don't always reflect real-world performance.
Guardrails
Technical controls that prevent AI systems from generating harmful, inappropriate, or off-topic content. Can include content filters, output validation, and behavioral constraints embedded in system prompts.
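As a toy illustration of output validation—real guardrails layer classifiers, moderation APIs, and schema checks, not just keyword lists:

```python
BLOCKED_TERMS = ["social security number", "account password"]  # illustrative only

def validate_output(text: str) -> str:
    """Replace a model response that mentions a blocked term."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't share that information."
    return text
```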
Red Teaming
Deliberately trying to break AI systems to find vulnerabilities before malicious actors do. Includes prompt injection attacks, jailbreaking attempts, and stress-testing safety measures. Important for responsible AI deployment.
Prompt Injection
Attacks where malicious instructions hidden in input data attempt to override an AI's intended behavior. A security concern for AI applications—an email might contain hidden instructions that manipulate an AI email assistant.
Jailbreaking
Techniques to bypass an AI model's safety measures and get it to produce content it would normally refuse. Often involves clever prompt manipulation or roleplay scenarios. A constant cat-and-mouse game between AI developers and those attempting to circumvent safeguards.
Refusal
When an AI model declines to respond to a request, typically because it violates safety guidelines. "I can't help with that" responses. Models are trained to refuse harmful, illegal, or inappropriate requests. Sometimes models refuse too cautiously (false positives) or not enough (false negatives).
Common Benchmarks
MMLU (Massive Multitask Language Understanding): Tests knowledge across 57 subjects from elementary to professional level.
GPQA (Graduate-Level Google-Proof Q&A): PhD-level science questions that can't be easily Googled.
HumanEval / SWE-bench: Coding ability tests—writing functions and solving real GitHub issues.
MedQA: Medical licensing exam questions, relevant for healthcare AI.

Companies and Models You'll Encounter

OpenAI
Created GPT-4, ChatGPT, DALL-E, and the o1/o3 reasoning models. The company most associated with the current AI boom. Partnered with Microsoft.
Anthropic
Created Claude. Founded by former OpenAI researchers focused on AI safety. Notable for Constitutional AI, extended thinking, and computer use capabilities.
Google/DeepMind
Created Gemini, PaLM, and BERT. Deep research heritage: DeepMind created AlphaFold (protein structure prediction) and AlphaGo. Gemini models are known for strong multimodal capabilities.
Meta
Created Llama (open-weight LLMs). Major contributor to open AI research. Llama models power many local and fine-tuned applications.
xAI
Elon Musk's AI company. Created Grok, integrated into X (Twitter). Known for fewer content restrictions than competitors.
Mistral
French AI company with strong open-source models. Known for efficient architectures that compete with larger proprietary models.
Cohere
Enterprise-focused AI company. Strong in RAG, embeddings, and business applications. Popular with companies building AI into their products.
Perplexity
AI-powered search engine that answers questions with citations. Combines LLMs with real-time web search. A growing alternative to Google search.


Infrastructure and Operations

API (Application Programming Interface)
How developers integrate AI models into their applications. OpenAI, Anthropic, and others offer APIs where you send prompts and receive completions programmatically. API access is usually priced per token.
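Because pricing is per token, rough cost math is straightforward. The rates below are placeholders—check your provider's current pricing page:

```python
INPUT_PER_M, OUTPUT_PER_M = 3.00, 15.00    # USD per million tokens (hypothetical rates)

input_tokens, output_tokens = 2_000, 500   # one typical request
cost = input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M
print(f"${cost:.4f} per request")          # ≈ $0.0135 at these example rates
```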
GPU (Graphics Processing Unit)
The specialized processors that power AI training and inference. Originally designed for graphics rendering, their parallel processing architecture is ideal for neural networks. NVIDIA dominates the AI GPU market.
Inference
Running a trained model to generate outputs—what happens when you chat with an AI. Distinguished from training. Inference is much cheaper than training but still computationally intensive at scale.
Latency
The delay between sending a request and receiving a response. Lower latency means faster, more responsive AI applications. Measured in milliseconds.
Knowledge Cutoff
The date beyond which a model has no training data. If a model's knowledge cutoff is January 2024, it doesn't "know" about events after that date. Web search and RAG help address this limitation.
TPU (Tensor Processing Unit)
Google's custom AI chips, designed specifically for machine learning workloads. An alternative to NVIDIA GPUs. Google uses TPUs to train and run Gemini models. Amazon (Trainium) and others are developing similar custom AI chips.
Quantization
Reducing the precision of model weights (e.g., from 32-bit to 4-bit numbers) to shrink model size and speed up inference. Makes large models runnable on consumer hardware. Essential for local AI—a quantized Llama model can run on a laptop that couldn't handle the full-precision version.
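The arithmetic behind why this matters is simple—weights dominate memory, so halving precision roughly halves the footprint:

```python
params = 7e9                      # a 7-billion-parameter model
for bits in (32, 16, 8, 4):
    gb = params * bits / 8 / 1e9  # bytes per weight × parameter count
    print(f"{bits:>2}-bit: ~{gb:.1f} GB for weights alone")
# 32-bit ≈ 28 GB, 16-bit ≈ 14 GB, 8-bit ≈ 7 GB, 4-bit ≈ 3.5 GB
```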
GGUF
A file format for storing quantized language models, commonly used for local AI applications. If you see a model file ending in .gguf, it's designed to run locally with tools like Ollama or LM Studio. Successor to the older GGML format.
Ollama
Popular open-source tool for running LLMs locally on your computer. Handles model downloading, quantization, and provides a simple interface. The easiest way to experiment with local AI. See our module on running local models.
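Once the Ollama server is running locally (it listens on port 11434 by default), you can talk to it over HTTP. A sketch assuming you've already pulled a model such as llama3.2:

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # any model you've pulled
        "prompt": "Explain quantization in one sentence.",
        "stream": False,       # return one JSON object instead of a stream
    },
)
print(resp.json()["response"])
```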

Emerging Terms Worth Knowing

Foundation Model
Large models trained on broad data that can be adapted for many downstream tasks. GPT-4 and Claude are foundation models.
Synthetic Data
AI-generated data used to train other AI systems. Increasingly used when real data is scarce, expensive, or privacy-sensitive.
Model Collapse
A theoretical concern where models trained on AI-generated data progressively degrade in quality. An argument for maintaining human-generated training data.
Constitutional AI
Anthropic's approach to alignment where models are trained with explicit principles rather than just human ratings.
Context Engineering
The emerging practice of systematically managing everything that goes into a model's context window—prompts, examples, retrieved documents, conversation history.

Healthcare AI Terms

Terms specific to AI applications in clinical settings—the vocabulary you'll encounter in health system AI initiatives, vendor pitches, and medical literature.

AI Scribe (Ambient AI)
AI systems that listen to patient encounters and automatically generate clinical documentation. Products like DAX Copilot, Abridge, and Freed transcribe conversations and draft notes in EHR format. Aims to reduce documentation burden and restore face-to-face patient time. See our module on ambient AI tools.
CDS (Clinical Decision Support)
AI systems that provide diagnostic suggestions, treatment recommendations, or alerts based on patient data. Ranges from simple rule-based alerts to AI-powered tools like UpToDate, OpenEvidence, and Glass Health that synthesize evidence for specific clinical questions. See our CDS module.
FDA Clearance (510(k) / De Novo)
The regulatory pathway for medical AI devices in the US. 510(k) clearance means the device is "substantially equivalent" to an existing approved device. De Novo is for novel low-to-moderate risk devices. PMA (Premarket Approval) is for high-risk devices. Over 900 AI/ML medical devices have received FDA clearance as of 2025.
CDSS (Clinical Decision Support System)
The broader category of software providing clinician-facing decision support. Includes both AI-powered and traditional rule-based systems. Often integrated into EHRs with alerts, order sets, and diagnostic suggestions.
CADe / CADx
CADe (Computer-Aided Detection): AI that identifies potential findings in medical images (e.g., flagging a possible nodule on chest CT).
CADx (Computer-Aided Diagnosis): AI that characterizes findings (e.g., suggesting a nodule is likely malignant). Radiology and pathology have the most FDA-cleared CAD tools.
LLM-as-Judge
Using one AI model to evaluate outputs from another—increasingly common in medical AI evaluation. Instead of only human review, an LLM assesses whether AI-generated clinical content is accurate, complete, and appropriate. Useful for scaling quality assessment but requires validation.
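A hedged sketch of the pattern—grade_with_llm() stands in for a real API call, and the rubric is illustrative, not a validated instrument:

```python
RUBRIC = (
    "Rate this draft clinical summary 1-5 for factual accuracy against the "
    "source note. Reply with the number and one sentence of justification."
)

def grade_with_llm(source_note: str, draft: str) -> str:
    """Ask a judge model to score another model's draft against the source."""
    prompt = f"{RUBRIC}\n\nSource note:\n{source_note}\n\nDraft:\n{draft}"
    raise NotImplementedError  # swap in your model API; validate judges against human review
```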
Medical LLMs
Language models fine-tuned or designed specifically for healthcare applications. Examples include Med-PaLM (Google), BioMistral, and various clinical GPT variants. Often trained on medical literature and clinical notes to improve healthcare-specific performance over general-purpose models.
De-identification
Removing or obscuring protected health information (PHI) from data before AI processing. HIPAA defines 18 identifiers that must be addressed. AI tools can help automate de-identification, but verification remains important. See our PHI and HIPAA module.
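As a toy illustration of the idea—this handles only two pattern types, and real de-identification requires validated tools plus human verification, not a few regexes:

```python
import re

PATTERNS = {
    r"\b\d{3}-\d{3}-\d{4}\b": "[PHONE]",  # US-style phone numbers
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",    # SSN-shaped strings
}

def scrub(text: str) -> str:
    """Replace matching identifiers with placeholder tags."""
    for pattern, replacement in PATTERNS.items():
        text = re.sub(pattern, replacement, text)
    return text

print(scrub("Call 555-123-4567 regarding SSN 123-45-6789."))
# -> "Call [PHONE] regarding SSN [SSN]."
```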
BAA (Business Associate Agreement)
Legal agreement required under HIPAA when sharing PHI with third parties, including AI vendors. A BAA establishes the vendor's obligations to protect health data. Using AI tools for patient data without a BAA violates HIPAA. Check whether your AI tools have BAA options.
Algorithmic Bias in Healthcare
When AI systems produce systematically different (often worse) outcomes for certain patient populations. Can arise from training data that underrepresents groups, features that correlate with race/ethnicity, or historical disparities embedded in the data. A significant concern for healthcare AI equity.

Multi-Format Learning Resources

Online Courses

DeepLearning.AI: ChatGPT Prompt Engineering for Developers

Free

Taught by Andrew Ng and OpenAI's Isa Fulford. The gold-standard introduction—1.5 hours covering LLM fundamentals, prompting best practices, and practical applications. Hands-on Jupyter notebooks included. Appropriate for anyone with basic Python familiarity, with useful insights for non-coders too.

deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers

Google Prompting Essentials

Free

Google's structured 5-step prompting framework. Covers multimodal prompting, prompt chaining, and building reusable prompt libraries. Under 6 hours. No technical background required—designed for practical workplace application.

grow.google/prompting-essentials

IBM: Generative AI - Prompt Engineering Basics

Audit Free (Coursera)

Solid overview from IBM covering zero-shot, few-shot, and chain-of-thought techniques. Includes hands-on labs and graded assessments. Good for those wanting a certificate to demonstrate competency.

coursera.org/learn/generative-ai-prompt-engineering-for-everyone

Learn Prompting: ChatGPT for Everyone

Free

Open-source guide created in collaboration with OpenAI. Comprehensive and research-backed. Covers basics through advanced techniques. Over 3 million learners. The written documentation is excellent for reference.

learnprompting.org

Udemy: Complete Prompt Engineering for AI Bootcamp

Paid

More comprehensive paid option covering GPT models, image generation (Midjourney, DALL-E), and coding assistants. Regularly updated. Good if you want one exhaustive resource.

Podcasts

Practical AI (Changelog)

Weekly discussions on AI applications in the real world. Hosted by Daniel Whitenack and Chris Benson. Balances technical depth with accessibility. Good for staying current on practical developments.

TWIML AI Podcast (This Week in Machine Learning & AI)

Hosted by Sam Charrington. In-depth interviews with AI researchers and practitioners. More technical but explained well. Strong back catalog for topic-specific deep dives.

The AI Podcast (NVIDIA)

Bi-weekly episodes featuring AI innovators across industries—healthcare, climate, entertainment. Production quality is high. Good for understanding AI applications beyond pure tech.

Latent Space

Aimed at AI engineers and builders. Hosts Alessio Fanelli and Swyx cover the latest developments, new research, and practical implementation. More technical but current and well-informed.

The Cognitive Revolution

Hosted by Nathan Labenz. Focuses on the transformative potential of AI—interviews with researchers, founders, and investors. Good for understanding where the field is heading.

Hard Fork (New York Times)

Not AI-specific but covers AI developments extensively. Kevin Roose and Casey Newton offer accessible, journalist-perspective coverage of tech and AI news. Good for staying broadly informed.

DeepMind: The Podcast

Hosted by mathematician Hannah Fry. Explores AI research and its implications. Award-winning production. Less frequent but high quality. Good for deeper conceptual understanding.

YouTube Channels and Videos

freeCodeCamp: Learn Prompt Engineering – Full Course

Ania Kubow's comprehensive crash course covering fundamentals through advanced techniques. Free, thorough, beginner-friendly. Good for visual learners who want structured progression.

DeepLearning.AI YouTube Channel

Andrew Ng's organization posts short course previews and standalone AI education content. Authoritative and well-produced.

AI Explained

Thoughtful analysis of AI developments, model comparisons, and capability assessments. Good for understanding what new releases actually mean rather than hype cycles.

Reading and Reference

Prompt Engineering Guide (DAIR.AI)

Open-source comprehensive guide with 3+ million users. Covers techniques, applications, and research papers. Constantly updated. The definitive written reference.

promptingguide.ai

OpenAI's Prompt Engineering Guide

Direct from OpenAI—best practices for their models specifically. Authoritative for GPT usage patterns.

Anthropic's Prompt Engineering Documentation

Claude-specific guidance from Anthropic. Detailed best practices for Claude's particular strengths and patterns.

docs.anthropic.com

IBM Think: The 2025 Guide to Prompt Engineering

Comprehensive enterprise-oriented guide with practical frameworks and examples.

ibm.com/think/prompt-engineering

Practice Environments

ChatGPT (OpenAI)
Most widely used interface. Free tier available.
Claude (Anthropic)
Strong for long documents, analysis, and coding.
Google AI Studio
Access to Gemini models with generous free tier.
Hugging Face
Open-source model library with free inference API.

How to Use These Resources

Time-Based Recommendations
  • 15 minutes: Read the foundational sections of Learn Prompting or scan OpenAI's prompt engineering guide.
  • 2 hours: Complete DeepLearning.AI's ChatGPT Prompt Engineering course.
  • A week: Work through Google Prompting Essentials, supplement with podcast episodes during commutes.
  • Building AI applications: DAIR.AI's Prompt Engineering Guide plus the technical documentation from your chosen model provider.
  • Ongoing learning: Subscribe to 2-3 podcasts and follow AI Explained on YouTube. The field moves fast—regular touchpoints prevent knowledge decay.

Last updated: November 2025. AI terminology evolves rapidly—new terms emerge and meanings shift. When in doubt, check primary sources from model providers.