A prompt is the input given to an AI model. In simple examples it may look like a single question, but in real systems a prompt often includes much more: system instructions, examples, constraints, retrieved documents, tool outputs, formatting rules, and the user's actual request. Thinking of a prompt as "everything the model sees" is usually more accurate than thinking of it as one sentence.
Why Prompts Matter
Models respond to the information and structure they are given. A vague prompt can produce a vague answer. A well-scoped prompt can make the same model more useful by clarifying the task, the audience, the desired format, and the boundaries of what counts as a good answer.
This is also why prompts are tied to context and cost. Every instruction, example, and retrieved passage takes up space in the model's context window. Good prompt design is therefore partly about deciding what should be included and what should be left out.
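The trade-off above can be sketched as a simple budgeting pass. This is an illustrative sketch only: it uses word counts as a crude stand-in for tokens (real systems would use the model's tokenizer), and the `fit_to_budget` function and section names are hypothetical.

```python
# Illustrative only: fit prompt sections into a context budget, keeping
# the most important material and dropping the rest.

def fit_to_budget(sections, budget_words):
    """Keep sections in priority order until the word budget runs out.

    `sections` is a list of (name, text) pairs ordered from most to
    least important; word count is a crude proxy for token count.
    """
    kept, used = [], 0
    for name, text in sections:
        words = len(text.split())
        if used + words <= budget_words:
            kept.append((name, text))
            used += words
    return kept

sections = [
    ("instructions", "Answer briefly and cite the provided passage."),
    ("evidence", "Retrieved passage: prompts are layered inputs to a model."),
    ("examples", "Example question and answer pairs, droppable under pressure."),
]
print([name for name, _ in fit_to_budget(sections, 16)])
# → ['instructions', 'evidence']
```

The point is the ordering decision, not the arithmetic: deciding which sections are droppable is the design work that happens before any truncation code runs.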
Prompts in Real Systems
In production applications, prompts are often layered. A system prompt may define role and policy. Retrieved evidence may provide grounding. Tool results may be inserted mid-workflow. The final response is shaped by the whole bundle, not only by the user's last message.
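The layering above can be made concrete with a minimal sketch. The `render` function, the bracketed section labels, and the sample data are all hypothetical, not from any specific framework; the point is only that the model receives one assembled bundle.

```python
# Hypothetical prompt assembly: system policy, retrieved evidence, and
# tool results are concatenated ahead of the user's final message.

def render(system, evidence, tool_results, user_message):
    """Join prompt layers in a fixed order; the model sees the whole string."""
    parts = [f"[system]\n{system}"]
    for doc in evidence:
        parts.append(f"[evidence]\n{doc}")
    for name, result in tool_results:
        parts.append(f"[tool:{name}]\n{result}")
    parts.append(f"[user]\n{user_message}")
    return "\n\n".join(parts)

prompt = render(
    system="You are a support assistant. Answer only from the evidence.",
    evidence=["Refund policy: purchases may be returned within 30 days."],
    tool_results=[("order_lookup", "Order 1042 was placed 12 days ago.")],
    user_message="Can I still return order 1042?",
)
print(prompt.splitlines()[0])
# → [system]
```

Even in this toy form, the user's question is only the last layer; everything before it shapes what counts as a valid answer.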
That is why prompts are both simple and deep. Anyone can type one, but designing prompts for reliable, safe, and repeatable work is a serious systems problem.
Related concepts: Prompt Engineering, System Prompt, Context Window, Grounding, and Large Language Model (LLM).