Zero-shot Prompting
Definition
Zero-shot prompting is asking an LLM to perform a task without providing any examples, relying entirely on the model's pre-trained knowledge and the clarity of your instructions.
Why It Matters
Zero-shot prompting is the simplest and fastest approach: you describe what you want and nothing more. Modern LLMs handle many tasks well with no examples at all, which makes zero-shot the default starting point for most applications. It is also cheaper (fewer input tokens) and easier to maintain than few-shot prompting, since there are no examples to curate and keep up to date.
How It Works
You provide only instructions without examples:
- “Translate this text to French: [text]”
- “Summarize this article in 3 bullet points: [article]”
- “Classify this review as positive, negative, or neutral: [review]”
The model uses its training to understand the task and generate appropriate outputs.
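The pattern above can be sketched as a small helper that assembles instruction and input into a single prompt. This is a minimal illustration, not a provider API: the `zero_shot_prompt` function and its format are assumptions for demonstration, and the resulting string would be sent to whatever LLM client you use.

```python
def zero_shot_prompt(instruction: str, text: str) -> str:
    """Build a zero-shot prompt: instructions plus input, no examples."""
    return f"{instruction}\n\n{text}"

# Usage: the three sample tasks above all fit the same shape.
prompt = zero_shot_prompt(
    "Classify this review as positive, negative, or neutral:",
    "The battery died after two days.",
)
print(prompt)
```

Note that the entire "program" is the instruction itself; there is no task-specific training or example curation.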
When to Use
Start with zero-shot prompting for: straightforward tasks with clear instructions, simple classifications and extractions, tasks well represented in the model's training data, and rapid prototyping. If zero-shot performance is insufficient, graduate to few-shot prompting: add a handful of examples that demonstrate the desired input-output behavior.
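Graduating to few-shot can be as simple as prepending demonstrations to the same instruction. A minimal sketch, assuming a generic `Input:`/`Output:` demonstration format (the function name and layout are illustrative, not a standard):

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], text: str) -> str:
    """Build a few-shot prompt: instruction, worked examples, then the new input."""
    # Each (input, output) pair becomes one demonstration block.
    demos = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    # The trailing "Output:" cues the model to complete the final case.
    return f"{instruction}\n\n{demos}\n\nInput: {text}\nOutput:"

prompt = few_shot_prompt(
    "Classify the review as positive, negative, or neutral.",
    [("Love it, works perfectly.", "positive"),
     ("It broke in a week.", "negative")],
    "Arrived on time, does the job.",
)
print(prompt)
```

With zero examples the function degenerates back toward the zero-shot form, which is one reason to keep both paths behind the same prompt-building code.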