Prompt
A prompt is the input or instruction given to a Large Language Model (LLM) to guide its response. It acts as the starting point for generation, defining the context, intent, and constraints of the desired output.
Purpose
Prompts shape the behavior of the model—what it says, how it says it, and how relevant or accurate the response will be. The more precise and goal-oriented the prompt, the better the model’s output tends to be.
Characteristics of an Effective Prompt
- Clear: Unambiguous language reduces confusion.
- Specific: Details help focus the response.
- Well-structured: Formatting (lists, instructions, delimiters) improves consistency.
- Goal-oriented: Guides the model toward a desired result (e.g., summarizing, classifying, or generating code), as in the example below.
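For example, compare a vague prompt with one that applies these characteristics. This is a minimal illustration in Python; the wording and the report scenario are invented for the sketch, not prescribed by any particular model or API.

```python
# Vague: no context, no constraints, no target format.
vague_prompt = "Tell me about this report."

# Effective: clear task, specific focus, explicit structure, and a stated goal.
effective_prompt = (
    "Summarize the attached quarterly report in exactly 3 bullet points, "
    "focusing on revenue trends and key risks. "
    "Use plain language suitable for a non-financial audience."
)
```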
Prompt Formats
- Instructional: “Summarize this article…”
- Conversational: “What do you think about…”
- Few-shot: Providing examples to guide behavior
- Zero-shot: No examples, just task description
- System: A system prompt defines model behavior at the start (e.g., “You are a helpful assistant.”)
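The sketch below shows how these formats might look when expressed as chat-style message lists. The role/content structure follows the common chat-completion convention, and the sentiment-classification task is purely illustrative; the actual API call is omitted.

```python
# Zero-shot: a task description only, with no examples.
zero_shot = [
    {"role": "user", "content": "Classify the sentiment of: 'The battery life is great.'"}
]

# Few-shot: worked examples precede the real input to anchor the expected format.
few_shot = [
    {"role": "user", "content": "Classify sentiment.\n"
                                "Review: 'Terrible service.' -> negative\n"
                                "Review: 'Loved the food.' -> positive\n"
                                "Review: 'The battery life is great.' ->"}
]

# System prompting: a system message sets behavior before the conversation begins.
system_prompted = [
    {"role": "system", "content": "You are a helpful assistant that answers concisely."},
    {"role": "user", "content": "What do you think about remote work?"},
]
```

Few-shot examples are especially useful when the desired output format is easier to demonstrate than to describe.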
Prompt Anatomy
- Role Definition (optional): “You are a financial analyst…”
- Task Description: “Analyze this earnings report…”
- Constraints: “Respond in under 100 words…”
- Input Content: “Here’s the text: …”
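Putting the four parts together, a prompt can be assembled as a plain string. The financial-analyst scenario mirrors the examples above; the variable names and the placeholder report text are only for illustration.

```python
# Assemble the anatomy parts into a single prompt.
role = "You are a financial analyst."                            # Role definition (optional)
task = "Analyze this earnings report and highlight key risks."   # Task description
constraints = "Respond in under 100 words, as a bulleted list."  # Constraints
input_content = "Here's the text: <report>...</report>"          # Input content (placeholder)

prompt = "\n".join([role, task, constraints, input_content])
print(prompt)
```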
Prompting Tips
- Use delimiters (triple quotes """, XML tags, etc.) to separate inputs.
- Ask for specific structure in the output (JSON, list, Markdown); see the sketch after this list.
- Break complex tasks into steps.
- Avoid ambiguity (e.g., vague instructions like “do this”).
- Iterate and test—prompting is part art, part engineering.
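A rough sketch combining several of these tips: stepwise instructions, triple-quote delimiters around the input, and a requested JSON structure. The article text and the "topic"/"summary" keys are hypothetical choices for this example.

```python
article = "Large language models are increasingly used for text summarization."  # placeholder input

prompt = f'''
Follow these steps:
1. Read the article between the triple quotes.
2. Identify its main topic.
3. Write a one-sentence summary.

Return the result as JSON with keys "topic" and "summary".

"""{article}"""
'''
```

Requesting JSON with named keys makes the response easier to parse programmatically, but models can still deviate from the requested structure, so validate the output before relying on it.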