Few-shot Prompting
Definition
Few-shot prompting provides a small number of example input-output pairs (typically 2-5) before the actual task, helping the LLM infer the expected format, style, and behavior from the demonstrations.
Why It Matters
Few-shot prompting dramatically improves consistency and accuracy for many tasks. By showing the model exactly what you want through examples, you reduce ambiguity and get more predictable outputs. This is especially valuable for custom formats, domain-specific tasks, or nuanced classification.
How It Works
Structure your prompt with examples before the actual query:
Classify the sentiment:
Review: "Amazing product, exceeded expectations!" → Positive
Review: "Worst purchase ever, total waste." → Negative
Review: "It's okay, nothing special." → Neutral
Review: "[user's actual review]" →
The model learns the pattern from examples and applies it to new inputs.
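A minimal sketch of how you might assemble such a prompt programmatically. The function name build_few_shot_prompt and the sample reviews are illustrative, not from any particular library:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot sentiment prompt from (review, label) pairs."""
    lines = ["Classify the sentiment:"]
    for review, label in examples:
        lines.append(f'Review: "{review}" → {label}')
    # Leave the final label blank so the model completes the pattern
    lines.append(f'Review: "{query}" →')
    return "\n".join(lines)

examples = [
    ("Amazing product, exceeded expectations!", "Positive"),
    ("Worst purchase ever, total waste.", "Negative"),
    ("It's okay, nothing special.", "Neutral"),
]

prompt = build_few_shot_prompt(examples, "Arrived late, but works as described.")
print(prompt)  # Send this string to whatever LLM API you use
```

Keeping the examples and the final query in exactly the same layout matters: the model completes the pattern it sees, so any formatting drift between examples and query tends to degrade results.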
When to Use
Use few-shot prompting when: zero-shot produces inconsistent results, you need a specific output format, the task requires domain knowledge, or you’re doing nuanced classification. Choose examples that cover edge cases and represent the diversity of inputs you expect.
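One hedged sketch of that example-selection step, assuming you maintain a small labeled pool of candidate examples (the pool contents and the select_examples helper are hypothetical, shown only to illustrate covering every label before filling remaining slots):

```python
import random
from collections import defaultdict

def select_examples(pool, k=4, seed=0):
    """Pick up to k (review, label) examples, covering each label
    at least once before filling the remaining slots at random."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for review, label in pool:
        by_label[label].append((review, label))

    # One example per label first, so every class is demonstrated
    selected = [rng.choice(items) for items in by_label.values()]

    # Fill any remaining slots from the rest of the pool
    remaining = [ex for ex in pool if ex not in selected]
    rng.shuffle(remaining)
    selected.extend(remaining[: max(0, k - len(selected))])
    return selected[:k]

pool = [
    ("Amazing product, exceeded expectations!", "Positive"),
    ("Five stars, would buy again.", "Positive"),
    ("Worst purchase ever, total waste.", "Negative"),
    ("Broke after a week.", "Negative"),
    ("It's okay, nothing special.", "Neutral"),
]
print(select_examples(pool, k=4))
```

The fixed seed keeps the prompt stable across runs, which helps when you are comparing outputs while iterating on the examples themselves.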