# Prompt Engineering: A New Developer Skill

ChatGPT made everyone a prompt engineer overnight. But there’s actual technique behind the art. Here’s what works.
## The Basics

Good prompts are:

- Specific: Vague input → vague output
- Structured: Clear format → clear response
- Contextualized: Background helps the model
Bad prompt:

```
Write something about Python
```

Better prompt:

```
Write a 200-word explanation of Python list comprehensions
for developers who know JavaScript but are new to Python.
Include a comparison to Array.map().
```
## Core Techniques

### 1. Role Assignment

```
You are an experienced Django developer who specializes in API design.
Review this endpoint and suggest improvements:

[code here]
```

The model adopts the persona’s expertise.
### 2. Few-Shot Examples

```
Convert these sentences to formal language:

Input: Hey, can we meet tmrw?
Output: I would like to schedule a meeting with you tomorrow.

Input: The code's kinda broken lol
Output: The code appears to have some issues that need addressing.

Input: Can u fix that bug soon?
Output:
```

The model learns the pattern from the examples.
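When few-shot prompts recur, it helps to assemble them programmatically from (input, output) pairs. A minimal sketch; the `build_few_shot` helper is hypothetical, not a library function:

```python
def build_few_shot(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    # Leave the final Output: blank for the model to complete.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot(
    "Convert these sentences to formal language:",
    [("Hey, can we meet tmrw?",
      "I would like to schedule a meeting with you tomorrow.")],
    "Can u fix that bug soon?",
)
print(prompt)
```

The trailing `Output:` is the important part: it tells the model exactly where to continue the pattern.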
### 3. Chain of Thought

```
Solve this step by step:

A store has 25 apples. They sell 40% and receive a shipment of 15 more.
How many apples do they have?

Let's think through this:
1. First, calculate 40% of 25...
```

Forcing explicit reasoning improves accuracy.
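The arithmetic the model should walk through can be checked directly:

```python
# Verify the chain-of-thought arithmetic by hand.
apples = 25
sold = apples * 0.40        # 40% of 25 = 10
remaining = apples - sold   # 25 - 10 = 15
total = remaining + 15      # 15 + 15 = 30
print(total)  # 30.0
```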
### 4. Output Format Specification

```
Analyze this code for potential issues.
Respond in this JSON format:

{
  "issues": [
    {
      "line": <number>,
      "severity": "high|medium|low",
      "description": "<what's wrong>",
      "fix": "<suggested fix>"
    }
  ]
}
```

Structured output is easier to parse and use.
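Once the model replies in that shape, the response parses with the standard `json` module. A hedged sketch: model output sometimes arrives wrapped in markdown fences, so this strips them first.

```python
import json

def parse_issues(reply: str) -> list[dict]:
    """Parse a JSON issue list, tolerating surrounding markdown code fences."""
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line (and optional language tag) and the closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)["issues"]

reply = '{"issues": [{"line": 2, "severity": "high", "description": "SQL injection", "fix": "use parameters"}]}'
issues = parse_issues(reply)
print(issues[0]["severity"])  # high
```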
## Developer-Specific Patterns

### Code Review

```
Review this Python function for:
1. Security vulnerabilities
2. Performance issues
3. Readability problems
4. Edge cases not handled

For each issue, provide:
- Line number
- Problem description
- Suggested fix
```

```python
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return db.execute(query)
```
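A good review should flag the f-string interpolation as a SQL injection risk. The standard fix is a parameterized query, sketched here with `sqlite3`; the in-memory database and schema are illustrative stand-ins for the original `db`:

```python
import sqlite3

# Illustrative setup: an in-memory database standing in for `db`.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'ada')")

def get_user(user_id):
    # Placeholder binding: the driver escapes user_id, so injection payloads
    # are treated as literal values, never as SQL.
    query = "SELECT * FROM users WHERE id = ?"
    return db.execute(query, (user_id,)).fetchone()

print(get_user(1))  # (1, 'ada')
print(get_user("1 OR 1=1"))  # None: the payload is just a non-matching value
```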
### Documentation Generation

```
Generate a docstring for this function. Follow Google style docstrings. Include:
- Brief description
- Args with types
- Returns with type
- Raises (if applicable)
- Example usage
```

```python
def calculate_compound_interest(principal, rate, time, n=12):
    return principal * (1 + rate/n) ** (n * time)
```
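A plausible result, written here by hand to show the target shape rather than actual model output:

```python
def calculate_compound_interest(principal, rate, time, n=12):
    """Calculate the future value of an investment with compound interest.

    Args:
        principal (float): Initial amount invested.
        rate (float): Annual interest rate as a decimal (e.g. 0.05 for 5%).
        time (float): Investment duration in years.
        n (int): Compounding periods per year. Defaults to 12 (monthly).

    Returns:
        float: The principal plus accrued interest.

    Example:
        >>> round(calculate_compound_interest(1000, 0.05, 1), 2)
        1051.16
    """
    return principal * (1 + rate / n) ** (n * time)
```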
### Bug Fixing

```
This code has a bug. It should [expected behavior], but instead it [actual behavior].

Debug and fix:

[buggy code]

Explain what causes the bug and show the corrected code.
```
### Test Generation

```
Generate pytest test cases for this function. Include:
- Happy path tests
- Edge cases
- Error conditions
- Use parametrize where appropriate
```

```python
def divide(a: float, b: float) -> float:
    return a / b
```
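Hand-written tests in the shape that prompt asks for, assuming pytest is available:

```python
import pytest

def divide(a: float, b: float) -> float:
    return a / b

@pytest.mark.parametrize("a, b, expected", [
    (10, 2, 5),        # happy path
    (-9, 3, -3),       # negative operand
    (0, 5, 0),         # zero numerator
    (1, 3, 1 / 3),     # non-terminating result
])
def test_divide(a, b, expected):
    assert divide(a, b) == expected

def test_divide_by_zero():
    # Error condition: division by zero raises.
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)
```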
## Advanced Techniques
### Iterative Refinement

First prompt:

```
Generate a basic FastAPI endpoint for user registration.
```

After the response:

```
Now add:
- Input validation with Pydantic
- Password hashing
- Error handling
```

After the next response:

```
Add comprehensive docstrings and type hints.
```

Break complex tasks into steps.
### Self-Critique

```
Generate a solution, then immediately review it for:
- Potential bugs
- Performance issues
- Security vulnerabilities

Provide the improved version after your critique.
```
### Constraint Setting

```
Write a Python function that:
- Uses only the standard library (no external packages)
- Has O(n) time complexity
- Is under 50 lines
- Includes type hints
```
### Negative Prompting

```
Explain async/await in Python.

Do NOT:
- Use jargon without explanation
- Skip intermediate steps
- Assume prior knowledge of event loops
```
## Common Mistakes
### Being Too Vague
❌ "Make this better"
✅ "Refactor this to separate concerns, using dependency injection for the database connection"
### Ignoring Context
❌ Pasting code without explaining the project
✅ "This is a Django REST Framework viewset for a multi-tenant SaaS application..."
### Over-Prompting
❌ 500-word prompts with every possible instruction
✅ Focused prompts that iterate
### Not Verifying Output
❌ Copy-paste AI output directly to production
✅ Review, test, understand, then use
## Tools of the Trade
| Tool | Use Case |
|------|----------|
| ChatGPT | Interactive exploration |
| GitHub Copilot | In-editor completion |
| OpenAI Playground | Testing prompt variations |
| LangChain | Programmatic prompting |
## Prompt Templates
I keep a file of proven prompts:
```yaml
code_review:
  template: |
    Review this {language} code for:
    - Security issues
    - Performance problems
    - Best practice violations

    {code}

explain_code:
  template: |
    Explain this code as if to a {level} developer.
    Focus on: {aspects}

    {code}

debug:
  template: |
    Expected: {expected}
    Actual: {actual}

    {code}

    Find and fix the bug.
```
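Filling these at the call site is plain `str.format`. A stdlib-only sketch that inlines two of the templates as a dict; in practice they would be loaded from the YAML file, e.g. with PyYAML's `yaml.safe_load`:

```python
# Templates inlined for a dependency-free sketch; in practice, load them
# from the YAML file (e.g. with PyYAML's yaml.safe_load).
TEMPLATES = {
    "code_review": (
        "Review this {language} code for:\n"
        "- Security issues\n"
        "- Performance problems\n"
        "- Best practice violations\n\n"
        "{code}"
    ),
    "debug": (
        "Expected: {expected}\n"
        "Actual: {actual}\n\n"
        "{code}\n\n"
        "Find and fix the bug."
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a named template's {placeholders} with the given fields."""
    return TEMPLATES[name].format(**fields)

prompt = render("code_review", language="Python", code="print('hi')")
print(prompt.splitlines()[0])  # Review this Python code for:
```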
## The Reality

Prompt engineering isn’t wizardry. It’s:

- Clear communication
- Understanding model limitations
- Iterative refinement
- Verification of outputs
The best prompt engineers are good at explaining what they want—which makes them good developers anyway.
The model is only as good as the instructions you give it.