AI Glossary - Yenra

Curated glossary of core artificial intelligence terms

Yenra AI Glossary
Yenra AI Glossary

This curated glossary focuses on the concepts most likely to help readers understand modern AI, machine learning, and generative systems without the long tail of highly specialized hardware, statistical, and research-only jargon.

A

Activation Function: A mathematical function that adds nonlinearity to a neural network so it can learn more than simple linear patterns.

Active Learning: A training strategy in which the model asks for labels on the most informative examples instead of labeling everything at once.

Active Noise Control (ANC): Reducing unwanted sound by generating anti-noise through microphones, speakers, and control algorithms.

ADMET: The absorption, distribution, metabolism, excretion, and toxicity filters that shape whether a molecule is likely to become a viable drug.

Account Reconciliation: Matching balances, transactions, or supporting records across systems so discrepancies and unresolved exceptions can be found and cleared.

Account Takeover: When an attacker gains control of a legitimate user's account and begins acting as that user.

Additive Construction: Using digitally controlled layer-by-layer fabrication to create construction components or structures directly from model data.

Adversarial Attack: A deliberate attempt to fool a model with input designed to make it fail.

Adversarial Example: A specially crafted input that looks normal to people but causes a model to make a mistake.

Adversarial Machine Learning: The field that studies how AI systems are attacked, manipulated, and defended.

AI (Artificial Intelligence): The broad field of building systems that can perform tasks associated with perception, language, reasoning, and decision-making.

AI Agent: A software system that can interpret goals, use tools, and take actions with some autonomy.

AI Assurance: The testing, evidence, and review work used to show that an AI system is behaving as claimed and controlled well enough for its context.

After-Call Work (ACW): The wrap-up work an agent completes after an interaction, such as notes, summaries, disposition codes, and CRM updates.

Age Assurance: Estimating or verifying whether someone is above, below, or within an age range so a service can apply the right access and safety rules.

Agent Assist: Real-time AI support that helps a human agent with knowledge, prompts, summaries, and next-best actions during a live interaction.

Advanced Process Control (APC): A model-based control layer that adjusts process settings using measurements, predictions, and feedback.

Advanced Driver Assistance Systems (ADAS): Vehicle technologies that help a human driver with warnings, braking, steering, and workload reduction without making every car fully autonomous.

Advanced Metering Infrastructure (AMI): The metering, communications, and data systems that make interval electricity data and smarter grid interaction possible.

Affective Computing: AI systems that estimate, model, or respond to human affect and emotion from signals such as text, voice, facial expression, or behavior.

Augmentative and Alternative Communication (AAC): Communication methods and tools that support or replace speech when a person cannot rely on spoken language alone.

AI Content Moderation: Using AI to review, filter, rank, or escalate content that may violate rules or safety standards.

AI Data Labeling: The process of tagging data so supervised models can learn from it.

AI Fairness: The effort to make AI systems behave equitably across people, groups, and contexts.

AI Firewall: A security layer that inspects AI inputs, actions, and outputs for threats, misuse, or policy violations.

AI-Generated Content (AIGC): Text, images, audio, video, or code created by AI systems.

Algorithm: A set of instructions or rules used to solve a problem or perform a computation.

Algorithmic Trading: Using software rules and models to generate, route, or manage orders in financial markets.

Algorithmic Bias: Systematic skew in a system that leads to unfair or distorted outcomes.

Agent-Based Modeling: Simulating a system through many interacting agents whose behavior and feedback loops shape the larger outcome.

Alignment (AI Alignment): The effort to make AI systems follow human goals, instructions, and safety expectations.

Ambient Computing: Computing woven into devices and environments so assistance can appear in context instead of always requiring an explicit app session.

Anomaly Detection: Finding unusual data points or events that differ sharply from normal patterns.

Architecture (AI Model Architecture): The overall design and structure of a model, including its layers and connections.

Archives: Organized collections of records and materials preserved because they have long-term historical, legal, cultural, or operational value.

Artificial General Intelligence (AGI): A hypothetical AI with broad human-like ability across many tasks rather than strength in one narrow domain.

Attention Mechanism: A way for a model to focus on the most relevant parts of its input when producing an output.

Attribution: Assigning authorship, origin, source, or responsibility to a work, record, object, or other output.

Audience Segmentation: Grouping people into useful audience buckets or modeled cohorts for targeting, personalization, or measurement.

Audio Restoration: Using AI to denoise, declip, inpaint, and rebuild damaged recordings so they become more usable again.

Authentication: Confirming that a person, document, object, or piece of content is genuine and really what it claims to be.

Automatic Defect Classification (ADC): Using AI to sort detected defect candidates into actionable classes, nuisance, or unknown categories so review teams can respond faster.

Autoencoder: A neural network trained to compress data into a compact representation and reconstruct it again.

Automatic Speech Recognition (ASR): Technology that converts spoken language into text.

Automated Machine Learning (AutoML): Using software to automate parts of model training, tuning, and evaluation.

Automated Valuation Model (AVM): A computerized property valuation system that estimates real-estate value from comparable sales, property attributes, and market signals.

B

Backpropagation: The training process that moves error information backward through a neural network so its weights can be updated.

Benchmarks: Standard tests used to compare models on common tasks or datasets.

BACnet: A widely used communications standard that helps building automation devices exchange data and commands across vendors.

Baggage Reconciliation: Confirming that a checked bag stays correctly matched to the right passenger, flight, custody state, and handling decision as airport operations evolve.

Battery Management System (BMS): The control system that monitors an EV battery pack and manages charging, power limits, safety, and battery-health estimation.

Beamforming: Using an array of microphones or other sensors to focus on sound from one direction or location while reducing interference from others.

Beyond Visual Line of Sight (BVLOS): Flying a drone beyond the range where the remote pilot can directly see it, which raises the need for stronger sensing, procedures, and airspace integration.

Berth Allocation: Assigning vessels to berth positions and time windows so crane plans, yard readiness, and port-side workflows stay coordinated.

Behavioral Biometrics: Authentication and fraud-detection methods that identify people by how they type, swipe, move, or otherwise behave.

BERT (Bidirectional Encoder Representations from Transformers): An influential transformer-based language model designed for understanding text with bidirectional context.

Borescope Inspection: Inspecting the inside of an engine or other hard-to-reach component with a camera probe so condition can be assessed without full disassembly.

Bias: Systematic skew or error in data, modeling, or decisions that can distort results or create unfair outcomes.

Bias Mitigation: Methods for identifying, reducing, and monitoring unfair bias in AI systems.

Bias-Variance Tradeoff: The balance between a model that is too simple to capture patterns and one that is too sensitive to the training data.

Binary Classification: A task in which each example must be assigned to one of two classes.

Black Box Model: A model whose internal reasoning is hard for humans to inspect or explain directly.

Brand Lift: Measuring whether advertising changed awareness, recall, favorability, consideration, or related brand outcomes compared with a control group.

Brand Safety: Keeping ads and generated creative away from harmful, unsuitable, misleading, or policy-violating content and contexts.

Building Information Modeling (BIM): A structured digital representation of a building and its components that supports coordination, simulation, estimating, construction, and operations.

C

Calibration: The degree to which a model's confidence matches what really happens.

Call Deflection: Resolving routine issues through self-service or alternate channels so avoidable live-agent calls never have to enter the voice queue.

Candidate Generation: The retrieval stage that narrows a huge pool of possible items into a smaller set worth ranking in detail.

Causal Inference: Estimating what changed because of an intervention rather than what merely moved alongside it.

Cataloging: The structured process of describing an item so it can be identified, found, and managed later.

Chain of Thought (CoT): A prompting style that encourages a model to work through intermediate reasoning steps.

Change Detection: Comparing observations across time to determine what changed, where it changed, and how much it changed.

Chatbot: A system that interacts with users in natural language through text or voice.

Classification: The task of assigning an input to one of several categories.

Clinical Decision Support: Software that uses patient data, rules, models, or retrieved evidence to help clinicians make safer and better-informed decisions.

Collaborative Robot (Cobot): A robot designed to work more safely near people and to make more mixed, flexible automation tasks practical.

Clustering: Grouping similar items together without using predefined labels.

Cognitive Accessibility: Designing content and interfaces so people can understand, remember, navigate, and complete tasks with less mental effort.

Cognitive Radar: A radar approach that uses feedback and adaptive sensing so waveform, beam, or scheduling choices can change based on what the system just observed.

Cold Start: The recommendation problem that appears when a new user, item, or context has too little history for confident prediction.

Collections Management: The ongoing work of organizing, tracking, preserving, and governing collections over time.

COLREGs: The international maritime rules of the road that govern how vessels determine right of way, avoid collisions, and signal intent at sea.

Combined Heat and Power (CHP): Generating electricity and useful heat from the same fuel source so less energy is wasted overall.

Conservation: The active care and protection of objects, records, and heritage materials to slow deterioration and maintain their integrity.

Computer Vision: The branch of AI that helps systems interpret images and video.

Computational Aesthetics: Using AI to estimate formal visual qualities such as composition, style, balance, and perceived appeal in images or artworks.

Computational Fluid Dynamics (CFD): Using numerical simulation to estimate how fluids move, interact with surfaces, and create forces, heat, pressure, or mixing effects.

Confidence: The system's stated or implied degree of certainty about an output, prediction, or match.

Continuous Controls Monitoring (CCM): Checking live controls and current evidence on an ongoing basis so drift, exceptions, and broken compliance workflows are found before the next audit cycle.

Continuous Authentication: Reassessing identity during a session instead of trusting one successful login forever.

Confusion Matrix: A table that shows how often a classifier makes each kind of right and wrong prediction.

Convolutional Neural Network (CNN): A neural network architecture especially useful for images and other grid-like data.

Context Window: The amount of prior input a model can consider in a single interaction.

Contextual Targeting: Showing ads or recommendations based on the surrounding content or moment rather than mainly on long-term user identity.

Contract Lifecycle Management (CLM): Managing the full contract workflow from request and drafting through negotiation, approval, execution, obligations, renewals, and amendments.

Conversation Intelligence: Using AI to turn calls, meetings, and other conversations into searchable structure, topics, sentiment, and workflow signals.

Conversational Commerce: Shopping flows that use natural language, recommendations, and live product data to guide product discovery and buying.

Cover Crops: Crops grown mainly to protect and improve the soil system between or alongside cash crops rather than only for direct harvest.

Creative Fatigue: The decline in ad performance that happens when audiences see the same or too-similar creative too often.

Cross-Validation: A method of testing a model by training and validating it across multiple different splits of the data.

Cross-Lingual Information Retrieval (CLIR): Finding relevant documents in one language when the search query is written in another.

Cycle Counting: Regularly verifying selected inventory locations or SKUs so stock records stay accurate without shutting down for a full physical inventory.

D

Data Augmentation: Expanding a dataset by creating modified versions of existing examples.

Data Clean Room: A controlled environment where multiple parties can analyze combined data with privacy rules and aggregation safeguards.

Data Drift: A change in the input data over time that can hurt model performance.

Data Governance: The policies and controls that determine how data is collected, managed, protected, and used.

Demand Response: Reducing or shifting electricity use in response to grid conditions, prices, or utility signals.

De-Identification: Removing or transforming identifying details so data can be analyzed or shared with lower privacy risk.

Device Fingerprinting: Using device, browser, network, and environment clues to estimate whether an access event fits a known pattern.

Data Labeling: Adding tags or annotations to data so a model can learn the target output.

Data Preprocessing: Cleaning and transforming raw data before training or inference.

Deep Learning: A branch of machine learning based on multi-layer neural networks.

Deepfake: AI-generated or AI-altered media designed to convincingly imitate a real person's appearance or voice.

Decoder: The part of a model that generates output from an internal representation or from encoded input.

Differential Privacy: A formal way to reduce how much any one person's data can be inferred from a result or model.

Diffusion Models: Generative models that start from noise and gradually turn it into structured output.

Direct Indexing: Owning the underlying securities of an index so a portfolio can be customized, rebalanced, and tax-managed more precisely.

Digital Thread: A connected flow of lifecycle data that links design, production, operation, and service information.

Digital Accessibility: Designing software and digital content so people with different abilities, devices, and assistive needs can use it effectively.

Digital Identity: The reusable credentials, proofs, and trust layer that lets a person or organization prove who they are across online services.

Digital Product Passport (DPP): A machine-readable lifecycle record tied to a specific product so provenance, materials, repairs, resale, and compliance data can travel with it.

Digital Twin: A live digital representation of a physical asset or process that stays connected to operational data.

Dissolved Oxygen: The amount of oxygen mixed into water and available to aquatic life, which strongly shapes fish stress, respiration, and closed-system stability.

Digitization: Converting physical or analog material into digital form so it can be stored, searched, and reused more effectively.

Driver Monitoring System: A vehicle system that checks whether the human driver is attentive, alert, and ready to supervise or take over.

Document AI: AI systems that read, classify, extract, and route information from documents; often called intelligent document processing (IDP).

Dynamic Pricing: Updating prices as demand, inventory, competition, or context changes instead of relying only on static price lists.

Dynamic Creative Optimization (DCO): Automatically combining, testing, and promoting ad assets so different viewers and placements get more effective creative variants.

E

Embedding: A numeric vector representation that captures semantic similarity between items such as words, images, or documents.

Edge Computing: Processing data and running software close to where it is generated instead of sending every decision to a faraway cloud.

Early Intervention System (EIS): A supervisory workflow that flags when an officer may need coaching, support, training, or closer review before risk escalates.

Electronic Health Record (EHR): A digital longitudinal record of patient care data such as notes, medications, orders, labs, and encounter history.

Earthquake Early Warning (EEW): Detecting an earthquake just after it begins and issuing alerts before the strongest shaking arrives.

E-Discovery: Using AI and review workflows to collect, prioritize, analyze, and validate large populations of electronic documents for litigation or investigations.

Encoder: The part of a model that transforms input into a useful internal representation.

Entity Extraction and Linking: Identifying important entities in text and connecting each mention to the correct real-world or database entry.

Entity Resolution: Deciding when different records really refer to the same person, company, account, or other real-world entity.

Epoch: One full pass through the training dataset during model training.

Ethical AI: Building and using AI in ways that respect fairness, accountability, privacy, and human values.

Evidence: The records, measurements, retrieved sources, or other signals that support a claim, decision, or model output.

Explainability: The broader practice of making a system's outputs, reasoning, evidence, and limits understandable to people.

Explainable AI (XAI): The field of making model behavior and outputs easier for people to understand.

F

F1 Score: A metric that combines precision and recall into one balanced measure.

Factor Investing: Building or analyzing portfolios through systematic exposures such as value, momentum, quality, size, or low volatility.

Face Identification: Searching one face against many enrolled identities to find likely candidates or a match above threshold.

Face Verification: Checking whether one presented face matches one claimed or enrolled identity.

Facings: The number of visible units of a product shown to the shopper on the shelf.

Feed Ranking: The process of ordering eligible posts, videos, or updates so a personalized feed shows the most relevant items first.

Feature Engineering: Creating or refining input variables so a model can learn more effectively.

Federated Learning: Training a shared model across many devices or organizations without pooling all raw data in one place.

Fault Detection and Diagnostics (FDD): Using rules, models, and live operational data to find faults in equipment or controls and explain what is likely going wrong.

FHIR: Fast Healthcare Interoperability Resources, a standard for structuring and exchanging healthcare data through modern APIs.

Fine-Tuning: Adapting a pretrained model to a narrower task or domain with additional training.

Flue Gas Cleaning: The systems that remove particulates, acid gases, metals, and other pollutants from combustion exhaust before release.

Forgery: A deceptive imitation or alteration meant to pass as a genuine object, document, identity, or piece of media.

Fraud Detection: Using analytics and AI to identify suspicious behavior, transactions, or impersonation.

Function Calling: A way for a model to produce structured arguments for a tool or function instead of free text alone.

G

Generative Adversarial Network (GAN): A generative architecture in which one network creates outputs and another tries to detect whether they are fake.

Generative Artificial Intelligence (GenAI): AI systems that create new content such as text, images, audio, video, or code.

Geofencing: Using virtual boundaries around places, assets, or moving work zones so software can trigger alerts, rules, or actions when something enters, exits, or gets too close.

Geographic Information System (GIS): Software and data systems for storing, visualizing, analyzing, and managing spatial information.

Geoparsing: Extracting place names from text and linking them to the correct real-world locations.

Gesture Recognition: Using AI and sensors to interpret hand, body, or motion signals as interface commands.

Gaze Tracking: Using eye position and eye movement to estimate where a person is looking so an interface can respond.

GPT (Generative Pre-trained Transformer): A family of transformer-based language models trained to predict the next token in text.

Graph Neural Network (GNN): A neural network built to learn from graph-structured data such as molecules, transaction networks, and knowledge graphs.

GraphRAG: A retrieval-augmented generation pattern that uses graph structure to retrieve, connect, and organize evidence more coherently for grounded answers.

Grounding: Connecting a model's output to trusted sources, retrieved evidence, tools, or real-world state.

Ground Truth: The verified real-world state or trusted label set used as the reference point for training, evaluation, or operational measurement.

Guardrails: Filters, rules, and runtime checks that keep an AI system within desired safety or workflow boundaries.

H

Hazard Analysis and Critical Control Point (HACCP): A preventive food-safety system that identifies hazards, defines critical controls, and verifies that food stays safe as it moves through production and handling.

Hallucination: When an AI system produces plausible-sounding information that is false, unsupported, or ungrounded.

Handwriting Recognition: Using AI to read handwritten notes, forms, and manuscripts and convert them into machine-readable text.

Hydraulic Model Calibration: Tuning a water network model so simulated flows, pressures, and states stay aligned with field measurements.

Hydroponics: Growing plants in nutrient solution, often with inert media, so water, chemistry, and root-zone conditions can be controlled more precisely.

Hyperspectral Imaging: Capturing many narrow wavelength bands so materials can be identified by their spectral signatures rather than by ordinary color alone.

Human in the Loop: A workflow in which people remain involved in review, correction, approval, or decision-making instead of leaving everything to the model.

Hyperparameter: A setting chosen by the developer, such as learning rate or batch size, rather than learned by the model itself.

Hyperparameter Tuning: The process of searching for better hyperparameter settings to improve model performance.

I

Identity Proofing: Establishing that a person is who they claim to be during enrollment before later authentication begins.

Image Classification: The task of assigning an image to one or more categories based on what it contains.

Image Generation: Creating new images with AI, often from text prompts or other visual input.

Inference: Using a trained model to produce output on new input data.

Infrasound: Low-frequency sound below normal human hearing that can reveal eruptions, explosions, storms, and other energetic events over long distances.

Inpainting: Filling, repairing, replacing, or extending missing and masked parts of an image or other media so they match the surrounding context.

In-Situ Monitoring: Watching a process while it is happening so deviations can be detected, interpreted, and sometimes corrected before the job is finished.

InSAR: Using repeated radar images to measure subtle ground deformation across earthquakes, volcanoes, subsidence, and other Earth-surface change.

Inventory Visibility: Knowing what inventory exists, where it is, and how available it really is across stores, warehouses, and channels.

Incrementality: Measuring what effect a campaign or intervention caused beyond what would likely have happened anyway.

Intent Recognition: Identifying what a user is trying to accomplish so a conversational system can route the interaction toward the right next step.

Inverse Design: Working backward from a target behavior to propose structures, parameters, or recipes that can produce it.

Itinerary Optimization: Turning destinations, timing, routing, budget, and traveler constraints into a feasible trip plan that can adapt when conditions change.

Intelligent Tutoring System (ITS): Educational software that models a learner's progress and provides hints, explanations, and practice that adapt to the student over time.

Instruction Tuning: Post-training that teaches a model to follow user requests more reliably.

Interoperability: The ability of different systems to exchange data and use it consistently without losing meaning.

Interpretability: The degree to which humans can understand how and why a model behaves the way it does.

J

Jailbreaking: Manipulating a language model so it bypasses intended restrictions or safety rules.

K

Knowledge Distillation: Training a smaller model to imitate a larger one so it becomes cheaper to deploy.

Knowledge Graph: A structured network of entities and relationships that makes connected facts easier to query and reason over.

Knowledge Tracing: Estimating how a learner's mastery changes across sequences of answers, hints, retries, and practice events over time.

L

Large Language Model (LLM): A neural network trained on large amounts of text to generate and work with language.

Language Model (LM): A model that learns patterns in language and predicts likely token sequences.

Layout Analysis: Identifying the structural parts of a page so AI can preserve reading order, tables, fields, and other document context.

Latent Space: A compressed internal representation in which models capture the underlying structure of data.

Learning Rate: A training setting that controls how large each update step is during optimization.

Link Prediction: Estimating which relationships are likely missing from a graph based on the structure already present.

Liveness Detection: Anti-spoofing checks that try to confirm a real person is present instead of a photo, replay, mask, or deepfake.

Logical Qubit: A qubit encoded across multiple physical qubits so errors can be detected and corrected before the stored quantum information is lost.

LoRA (Low-Rank Adaptation): A parameter-efficient way to adapt a large model without retraining all of its weights.

Loss Function: A formula that measures how wrong a model's predictions are during training.

Loudness Normalization: Measuring and adjusting audio so episodes, tracks, ads, or clips play back at a more consistent perceived volume.

Live-Agent Handoff: Transferring a conversation from a bot to a human agent with enough context that the customer does not have to start over.

M

ML (Machine Learning): The branch of AI in which systems learn patterns from data instead of relying only on fixed hand-written rules.

Machine Translation: Using AI to translate meaning from one language into another.

Marine Energy: Generating usable power from tides, waves, currents, thermal gradients, or salinity differences in oceans and waterways.

Market Microstructure: How order books, spreads, queues, and venue rules shape the way real trades are executed.

Marketing Mix Modeling (MMM): Using aggregated data and statistical models to estimate how channels and external factors affect marketing outcomes over time.

Matchmaking: Pairing people, teams, or other entities using mutual fit, constraints, and context rather than one-sided recommendation alone.

Matter: A smart-home interoperability standard that helps accessories from different brands work together more cleanly inside the same home.

Material Recovery Facility (MRF): A recycling facility that receives mixed material, separates it into usable commodities, and prepares those streams for sale or further processing.

Materials Informatics: Using data, models, and experiments together to discover and optimize materials faster.

Medication Verification: Checking that the medication selected, labeled, or dispensed truly matches the intended drug, dose, form, patient, and order context before it moves forward.

Metagenomics: Sequencing mixed microbial communities directly and using computation to infer which organisms, genes, and functions may be present.

Multilevel Regression and Poststratification (MRP): A survey-estimation method that combines multilevel modeling with population weighting to produce more reliable local or subgroup estimates from thin samples.

Metadata Enrichment: Adding useful tags, descriptions, relationships, and context to content so it is easier to search, organize, connect, and reuse.

Microgrid: A local energy system that can coordinate generation, storage, and loads together and sometimes operate independently from the wider grid.

Model Card: A document that explains a model's purpose, evaluation, limitations, and intended use.

Model Compression: Techniques that make models smaller, faster, or cheaper to run.

Model Predictive Control (MPC): A control approach that uses a model of the system to predict future behavior and choose actions that work best under constraints.

Model Drift: A decline or change in model behavior over time as conditions shift.

Model Evaluation: Testing a model to understand how well it performs and where it fails.

Model Explainability: The ability to communicate why a model produced a particular result.

Model Fairness: Whether a specific model behaves equitably across different groups or contexts.

Model Monitoring: Tracking a deployed model so drift, degradation, and unusual behavior are caught after launch.

Model Parameters: The values a model learns during training, such as neural network weights.

Multimodal Large Language Models (MLLMs): Models that combine language with images, audio, video, or other input types.

Multimodal Learning: Learning from or generating across more than one kind of data, such as text plus images.

Multiscale Modeling: Connecting material, process, part, and system behavior so decisions can reflect more than one physical scale at once.

Myoelectric Control: Using electrical activity from muscles to control a prosthetic or assistive device in a way that reflects user intent.

N

Named Entity Recognition (NER): Identifying references to people, organizations, places, dates, and other key entities in text.

Nanofabrication: Building micro- and nano-scale structures through patterning, deposition, etch, imprint, assembly, and measurement-aware process control.

Natural Language Processing (NLP): The branch of AI focused on understanding and generating human language.

Neural Networks: Computing systems inspired by layered neuron-like structures that learn patterns from data.

Nondestructive Testing (NDT): Inspecting a weld, structure, or component for flaws without cutting it apart or destroying it.

Non-Manual Signals: The facial expressions, head movements, mouth shapes, and upper-body cues that carry grammar and meaning in signed languages.

Nowcasting: Estimating the current state of the economy before the official statistics are fully available.

O

Object Detection: Identifying both what objects appear in an image and where they are located.

On-Device AI: Running AI features locally on a phone, laptop, vehicle, or other device instead of sending every request to a remote server.

Onboard Autonomy: Letting a spacecraft, rover, or other remote system make limited decisions locally when waiting for human commands is too slow.

Open Banking: User-permissioned access to bank and financial account data so apps and services can help with budgeting, payments, and account aggregation.

Optical Character Recognition (OCR): Converting text in scans, images, or PDFs into machine-readable text.

Optical Sorting: Using cameras and other sensors to identify materials on a moving line and separate them automatically.

Orthomosaic: A stitched and georeferenced aerial image corrected so it can be measured and compared like a map.

Operational Design Domain (ODD): The specific conditions under which an autonomous or assisted system is designed and validated to operate.

Ontology: A formal description of the important concepts, categories, and relationships in a domain.

Overfitting: When a model learns the training data too specifically and performs poorly on new examples.

P

Personally Identifiable Information (PII): Information that can identify a specific person and therefore requires careful handling and protection.

Pharmacogenomics: Studying how genetic variation changes drug response so treatment choice and dose can be tailored more safely and effectively.

Photobioreactor: A controlled vessel or tubing system for growing light-dependent organisms such as microalgae under managed illumination, gas exchange, mixing, and water chemistry.

Phenotyping: Identifying meaningful patient traits, disease patterns, or clinical subgroups from health data.

Physical AI: AI systems that act in the physical world through robots, vehicles, sensors, and control systems rather than only through software interfaces.

Pillar Two: The global minimum-tax framework that pushes large multinationals toward jurisdiction-level effective-tax-rate analysis and top-up tax calculations.

Path Planning: Choosing a route or movement sequence that helps a robot reach its goal safely and efficiently.

Parametric Design: Defining geometry and behavior through editable parameters, rules, and relationships so many design variations can be explored systematically.

Passkey: A phishing-resistant sign-in credential that often uses device biometrics or a PIN for local user verification.

Planogram: A structured retail shelf layout that specifies where products should go, how much space they get, and how the shelf should be merchandised.

Planogram Compliance: Checking whether products, facings, prices, and placement match the intended retail merchandising plan.

Plant Phenotyping: Measuring crop traits such as canopy structure, vigor, fruit load, and stress from imagery, spectra, geometry, and field sensors.

Player Modeling: Estimating how a player behaves, learns, prefers, or struggles so content, pacing, and challenge can adapt more intelligently.

Precision: The share of positive predictions that are actually correct.

Precision Beekeeping: Using sensors, imaging, acoustics, and models to manage bee colonies with earlier warnings and more targeted interventions.

Precision Aquaculture: Using sensors, imaging, telemetry, and predictive models to manage fish or shellfish production with earlier warnings and more targeted health interventions.

Predictive Analytics: Using data and models to estimate likely future outcomes, risks, or trends.

Predictive Maintenance: Using data and models to estimate when equipment is likely to degrade or fail so maintenance can happen before an outage.

Price Elasticity: How strongly demand changes when price changes, often varying by product, shopper, channel, and context rather than staying fixed.

Privacy-Enhancing Technologies (PETs): Technical methods that reduce exposure of personal data while still allowing some useful processing, sharing, or analysis.

Product Lifecycle Management (PLM): Managing product data, changes, releases, and traceability from design through manufacturing, service, and retirement.

Product Tagging: Using AI to assign structured categories, attributes, and descriptors to products at catalog scale.

Process Mining: Using event logs to reconstruct how a business process actually runs so teams can find bottlenecks, rework, and automation opportunities.

Post-Quantum Cryptography: Cryptographic methods designed to remain secure even if large-scale quantum computers become practical.

Preservation: The long-term work of keeping information, media, artifacts, or records safe, usable, and accessible over time.

Pre-trained Model: A model that has already been trained on a large dataset before being adapted to a narrower task.

Pretraining: The large-scale initial training stage that teaches a model broad patterns before task-specific adaptation.

Presence-Based Automation: Automations that respond to whether people are home, away, arriving, or moving through a space instead of relying only on fixed schedules.

Pronunciation Assessment: Using structured scoring to judge how closely a spoken sound, word, or phrase matches a target production.

Prosody: The rhythm, pitch, stress, pacing, and intonation patterns in speech that carry meaning beyond the transcript alone.

Prompt: The text, instructions, or input given to a model to guide its output.

Prompt Engineering: Designing prompts so a model produces more useful, accurate, or structured responses.

Prompt Injection: A security attack in which malicious instructions hidden in content try to override a model's intended behavior.

Provenance: The documented origin and ownership or custody history of an object, record, or other item of value.

Q

Quantization: Reducing numeric precision in a model to save memory and speed up inference.

R

Random Forest: A machine learning method that combines many decision trees to improve prediction stability.

Recall: The share of real positive cases that the model successfully catches.

Real-World Evidence (RWE): Clinical evidence developed from routine-care data such as EHRs, claims, registries, and other non-trial sources.

Recommender System: An AI system that ranks and suggests items a particular user is likely to value.

Red Teaming: Structured adversarial testing used to uncover safety, security, bias, and reliability failures.

Regression: A task in which a model predicts a numeric value rather than a category.

Regulatory Impact Assessment (RIA): A structured review of how a proposed law or regulation may affect people, markets, government operations, and compliance before adoption.

Retail Media: Advertising inside retailer-owned shopping environments and data ecosystems using commerce signals close to purchase.

RFID (Radio-Frequency Identification): Using radio tags and readers to identify physical items automatically as they move through stores, stockrooms, and fitting rooms.

Retro-Commissioning: A systematic process of testing and tuning an existing building so its systems perform closer to intended design and operational goals.

Responsive Search Ads (RSA): Search ads that mix and match multiple headlines and descriptions to find stronger combinations for different queries and contexts.

Robo-Adviser: An automated digital investment service that uses questionnaires, algorithms, and portfolio rules to guide or manage investing.

Reinforcement Learning (RL): A learning approach in which an agent improves through rewards and penalties from its environment.

Reinforcement Learning from Human Feedback (RLHF): A method that uses human preferences to shape model behavior after pretraining.

Regularization: Training techniques that reduce overfitting and help a model generalize better.

Replenishment: Restocking inventory so products are available where and when they are needed without carrying too much excess stock.

Remote ID: The broadcast identity and location layer that helps drones become more visible to airspace managers, nearby operators, and authorized responders.

Remote Sensing: Collecting imagery or other measurements from a distance so AI systems can analyze the Earth, oceans, atmosphere, or planetary surfaces.

Restoration: Repairing or reconstructing damaged, degraded, faded, or incomplete material so it becomes more legible, usable, or understandable.

Risk-Based Authentication: Adjusting authentication requirements based on how risky the current sign-in or action appears to be.

Risk-Based Monitoring (RBM): Targeting oversight toward the trial sites, subjects, data streams, and processes most likely to affect participant safety or study reliability.

Responsible AI: Building AI systems that are useful, safe, fair, accountable, and governable.

Reranking: Rescoring an initial result set with richer signals so the most useful retrieved items rise to the top.

Retrieval Augmented Generation (RAG): A pattern that combines a model with retrieved external information so answers stay fresher and better grounded.

Revenue Management: Matching price, availability, restrictions, and channel mix to demand so a business can earn more from limited capacity.

Redlining: Marking proposed edits in a contract so negotiating parties can see exactly what changed and why.

Reduced-Order Modeling: Compressing a complex physical system into a smaller model that still preserves the dominant behavior needed for fast analysis or control.

Retrosynthesis: Planning how to make a target molecule by reasoning backward from the product to simpler precursors.

Robustness: The ability of a system to keep working under noise, shift, unusual input, or active attack.

S

Sanctions Screening: Checking customers, counterparties, and transactions against sanctions and watchlists with enough precision to be useful and enough evidence to be auditable.

Self-Supervised Learning: Learning from data that provides its own training signal instead of relying only on manual labels.

Semantic Search: Search that finds results by meaning and intent, not only exact keyword matches.

Sensor Fusion: Combining signals from multiple sensors into one more reliable estimate of what is happening.

Send-Time Optimization: Using data and models to decide when a message is most likely to be opened, read, or acted on.

Sentiment Analysis: Using AI to identify whether language expresses positive, negative, neutral, or mixed attitude.

Shared Autonomy: Dividing control between a person and an intelligent system so automation can assist while the human remains responsible for supervision and takeover.

Skill-Based Matchmaking (SBMM): Matching players or teams using estimated skill so multiplayer games stay competitive, less lopsided, and healthier over time.

SLAM (Simultaneous Localization and Mapping): A robotics method for figuring out where the robot is while building a map of the space around it.

Smart Charging: Using software to decide when, where, and how fast an electric vehicle should charge.

Smart Grid: An electricity system that uses sensing, communication, software, and automation to manage supply, demand, and reliability more intelligently.

Slippage: The gap between the price a trader expected and the price the market actually delivered.

Slotting Optimization: Assigning products to warehouse locations so travel, replenishment effort, and space use stay efficient together.

SOAR: Security orchestration, automation, and response systems that connect tools and run playbooks faster.

Social Listening: Using AI to monitor, organize, and interpret social posts, comments, reviews, and other public audience signals at scale.

Source Separation: Splitting a mixed recording into more isolated components such as vocals, drums, speech, or background sound.

Spatial Computing: Computing that understands physical space, surfaces, and position so digital content can behave as if it belongs in the real world.

Space Planning: Organizing rooms, furnishings, storage, and circulation so the space works for real use instead of only looking good in a concept image.

Spaced Repetition: Scheduling review so information returns at intervals that improve long-term memory instead of fading away between study sessions.

Speaker Diarization: Figuring out who spoke when in a recording so transcripts preserve conversational structure instead of collapsing everyone together.

Speech Biofeedback: Using visual, acoustic, or sensor-based feedback to help a speaker see or hear aspects of speech production that are otherwise hard to monitor directly.

Shelf Intelligence: Using AI and computer vision to understand stock levels, placement, pricing, and other real shelf conditions in stores.

Stable Diffusion: A widely known family of diffusion-based image generation models.

Stress Testing: Testing how a system, market, or portfolio behaves under difficult but plausible conditions.

Structural Break: A change in the underlying relationships that a model relied on before.

Structural Health Monitoring: Using inspection data, sensors, and models to track whether a structure is staying sound or drifting toward damage.

Supervised Learning: Learning from labeled examples where the correct target is already known.

Support Vector Machine (SVM): A classical machine learning method often used for classification and sometimes regression.

Surrogate Model: A simplified fast-running model that approximates a more complex simulation or physical process.

Surface Code: A topological quantum error-correcting code that protects logical qubits by measuring local parity checks on a two-dimensional lattice of physical qubits.

Swarm Intelligence: Coordinating many agents through local rules, shared signals, and bounded autonomy so the group can do more collectively than any one unit could alone.

Synthetic Identity Fraud: Creating a fake but plausible identity by mixing real personal data with fabricated details so weak onboarding and fraud controls treat it as legitimate.

Synthetic Data: Artificially generated data used to train, test, or evaluate models.

Syndromic Surveillance: Monitoring symptom and encounter patterns in near real time so public-health teams can detect unusual change before confirmed diagnoses fully accumulate.

System Prompt: A high-priority instruction that sets a model's role, behavior, or constraints for an interaction.

T

Tax-Loss Harvesting: Selling positions at a loss to offset taxable gains while keeping the portfolio aligned to its long-term strategy.

Transaction Cost Analysis (TCA): Measuring how much value is lost or preserved as orders move from investment intent to real market execution.

Transaction Monitoring: Reviewing payments and account activity for patterns that may indicate suspicious, fraudulent, or non-compliant behavior.

Test Set: A held-out dataset used to measure how well a model generalizes after training.

TEFCA: The Trusted Exchange Framework and Common Agreement, a U.S. framework for nationwide health information exchange.

Telemetry: Operational signals such as events, metrics, logs, traces, and state changes that show what a device or system is doing over time.

Telematics: Using connected-vehicle data such as location, diagnostics, battery state, and driver signals to manage fleets and mobility operations more intelligently.

Trajectory Prediction: Estimating where an aircraft, vehicle, drone, or other moving system is likely to go next based on current state, intent, and environment.

Translation Memory: Reusing previously approved translated segments so multilingual content can stay more consistent across projects, releases, and recurring text.

Text Summarization: Condensing a longer document or conversation into a shorter version that preserves the main ideas.

Teleoperation: Remote control or supervision of a robot by a human operator, often blended with autonomy.

Time Series Forecasting: Predicting future values from time-ordered data such as demand, occupancy, emissions, or sensor readings.

Trend Forecasting: Using AI to detect weak signals and estimate which styles, products, topics, or behaviors are likely to rise next.

Trust and Safety: The people, policies, systems, and workflows used to keep digital services safe, lawful, and usable in the face of abuse and other harms.

Token: A unit of text a language model processes, such as a word piece, symbol, or short sequence of characters.

Tokenization: Breaking text into the tokens a model can process.

Tool Use: Letting a model call external tools, APIs, or services as part of solving a task.

Toxicity: Harmful, abusive, or hostile content that AI systems may generate, amplify, or help detect.

Training Set: The portion of data used to fit a model's parameters during learning.

Transfer Learning: Reusing knowledge from one task or dataset to improve performance on another.

Transfer Pricing: Pricing related-party transactions so profits, documentation, and tax outcomes stay supportable across jurisdictions.

Transformer: A neural network architecture built around attention that powers many modern language and multimodal models.

U

Uncertainty: The degree to which a model, dataset, or decision remains ambiguous, incomplete, or not fully known.

Underfitting: When a model is too simple to capture meaningful structure in the data.

Underwriting: Evaluating a risk to decide whether to offer coverage, on what terms, and at what price, with evidence, rules, and human review where needed.

Unsupervised Learning: Learning patterns from unlabeled data without predefined target labels.

V

Validation Set: A dataset used during model development to compare choices and tune settings before final testing.

Variational Quantum Algorithm (VQA): A hybrid quantum-classical method that repeatedly tunes quantum circuit parameters using classical optimization around measured quantum results.

Vehicle-to-Grid (V2G): Using bidirectional EV charging so vehicles can supply power back to a building or the electric grid when it makes sense.

Vector Database: A database optimized for storing and searching embeddings.

Vector Search: Finding items by semantic similarity in embedding space rather than exact keyword matching.

Verification: Checking whether a claim, identity, document, output, or piece of media is correct, genuine, or supported by evidence.

Virtual Commissioning: Testing automation and control logic in simulation before real equipment goes live.

Virtual Power Plant (VPP): A coordinated network of distributed energy resources that can be managed as a flexible power system even though the assets are spread across many locations.

Virtual Metrology: Estimating measurement results from process and equipment data instead of physically measuring every unit.

Virtual Try-On: Using AI and visual overlays to preview how makeup, accessories, or apparel may look before buying.

Visual Search: Searching with an image or camera input so the system finds visually similar objects, scenes, or products.

Voice Biometrics: Using voice characteristics as an identity signal for personalization, verification, or low-friction access control.

W

Wake-Word Detection: The lightweight speech model that listens for an activation phrase so a voice device knows when to start paying closer attention.

Waste-to-Energy: Converting residual waste into usable energy through controlled thermal, biological, or other recovery processes.

Weak Supervision: Using rules, heuristics, prompts, or other noisy signals to create useful draft labels faster than full manual annotation.

Workflow Orchestration: Coordinating the sequence of models, rules, tools, approvals, and human review steps around an AI-driven process.

Z

Zero-Shot Learning: The ability to handle a task or class the model was not explicitly trained on.

Zero-Knowledge Proof (ZKP): Proving that a statement is true without revealing the private data or secret behind it.

Zero Trust: A security model that assumes no user, device, or network location should be trusted by default.