
System Prompt

Definition

A system prompt is a set of special instructions provided to an LLM that defines its behavior, persona, capabilities, and constraints. It is typically set by developers and persists across a conversation.

Why It Matters

System prompts define your AI’s personality and capabilities. They’re the difference between a generic chatbot and a specialized assistant. Without clear system prompts, you get inconsistent behavior. The model might be helpful one moment and refuse the same request the next.

In production applications, system prompts handle safety guardrails, output formatting, and domain constraints. They persist across user messages, so users can’t easily override critical instructions. This separation between “developer instructions” and “user input” is fundamental to building reliable AI products.

For AI engineers, system prompts are where you encode business logic. What should the assistant know? What should it refuse? What format should responses take? These decisions live in your system prompt.

Implementation Basics

Core Components

  • Role/persona: “You are a customer support agent for…”
  • Knowledge boundaries: “You only answer questions about our product…”
  • Output format: “Always respond in JSON with fields…”
  • Safety rules: “Never provide medical advice…”
  • Tone guidance: “Be concise and professional…”
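The components above can be assembled into a single prompt string. A minimal sketch, where the product name and the specific wording are hypothetical placeholders:

```python
# Assemble a system prompt from the core components.
# "Acme Analytics" and the exact rules are illustrative, not a real product.
SYSTEM_PROMPT = "\n".join([
    # Role/persona
    "You are a customer support agent for Acme Analytics.",
    # Knowledge boundaries
    "You only answer questions about the Acme Analytics product.",
    # Output format
    "Always respond in JSON with fields 'answer' and 'confidence'.",
    # Safety rules
    "Never provide medical, legal, or financial advice.",
    # Tone guidance
    "Be concise and professional.",
])
```

Keeping each component on its own line makes it easy to add, remove, or diff individual rules later.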

API Implementation

Most APIs support a "system" message type:

  • OpenAI: {"role": "system", "content": "..."}
  • Anthropic: Top-level system parameter, kept separate from the messages array
  • Local models: Varies by implementation
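The two hosted APIs place the system prompt differently. A sketch of the raw request payloads (model names are examples and may differ in your account):

```python
system_text = "You are a concise, professional support assistant."

# OpenAI Chat Completions: the system prompt is the first
# message in the messages list, with role "system".
openai_payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": system_text},
        {"role": "user", "content": "How do I reset my password?"},
    ],
}

# Anthropic Messages API: the system prompt is a top-level "system"
# parameter, kept separate from the user/assistant message list.
anthropic_payload = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "system": system_text,
    "messages": [
        {"role": "user", "content": "How do I reset my password?"},
    ],
}
```

Either payload would be sent as the JSON body of the respective chat endpoint; for local models, check your runtime's chat template documentation.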

Best Practices

  1. Keep it focused; don’t try to cover every edge case
  2. Test with adversarial inputs to verify constraints hold
  3. Version your system prompts like code
  4. Balance specificity with token efficiency
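Versioning prompts like code (practice 3) can be as simple as pinning each revision under an explicit key. A minimal sketch, with illustrative names and prompt text:

```python
# Treat system prompts as versioned artifacts: every revision gets a key,
# and production pins an explicit version so changes are deliberate.
PROMPT_VERSIONS = {
    "v1": "You are a support agent. Be helpful.",
    "v2": "You are a support agent. Be helpful, concise, and professional.",
}
CURRENT_VERSION = "v2"

def get_system_prompt(version: str = CURRENT_VERSION) -> str:
    """Return a pinned prompt version; raises KeyError on unknown versions."""
    return PROMPT_VERSIONS[version]
```

Storing versions this way also lets you A/B test two prompts or roll back a regression without a code change elsewhere.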

Common Patterns

  • Include current date/time if relevant to the task
  • Specify what to do when uncertain (ask clarifying questions vs. make assumptions)
  • Define how to handle out-of-scope requests
  • Include format examples inline for complex outputs
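Several of these patterns are dynamic, so the prompt is often built at request time. A sketch combining date injection, uncertainty handling, out-of-scope handling, and an inline format example (the JSON schema shown is a hypothetical placeholder):

```python
from datetime import date

def build_system_prompt() -> str:
    """Build the system prompt at request time so the date stays current."""
    return "\n".join([
        f"Today's date is {date.today().isoformat()}.",
        "When uncertain, ask one clarifying question before answering.",
        "If a request is out of scope, say so and suggest contacting support.",
        "Respond in JSON, for example:",
        '{"answer": "Resets are under Settings > Security.", "confidence": 0.9}',
    ])
```

Rebuilding the prompt per request keeps time-sensitive details accurate, at the cost of a few extra tokens per call.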

Security Note

System prompts aren’t secret. Users can often extract them through creative prompting. Never put API keys, passwords, or truly sensitive information in system prompts. Use them for behavior, not secrets.

Source

OpenAI's API uses a 'system' role for high-level instructions that guide the model's behavior throughout the conversation.

https://platform.openai.com/docs/guides/text-generation