Prompt engineering sounds like a buzzword. It's actually just the skill of communicating clearly with AI systems to get useful output. For developers using AI coding tools daily, it's the difference between AI that saves you hours and AI that wastes them.
Here's what I've learned from a year of using Cursor, Claude, and other AI tools in real client projects.
The Core Principle: Treat AI Like a Junior Developer
The mental model that's made the biggest difference for me: imagine you're delegating to a junior developer who is extremely capable but knows nothing about your project.
If you say "fix the bug," a junior developer would need more context. Which bug? What's it doing wrong? What's it supposed to do? What have you already tried?
The same applies to AI. Vague instructions produce vague results. Specific instructions with context produce useful output.
Good delegation (to a human or an AI) includes:
What you want done
What context they need to do it well
What constraints to work within
What format you want the output in
The Anatomy of a Good Developer Prompt
Let's look at the difference in practice.
Bad prompt:
Make this function better
Better prompt:
Refactor this JavaScript function to handle the case where the user object is null. It should return an empty array instead of throwing an error. Keep the existing logic for the happy path unchanged. Add a comment explaining the null check.
The second prompt includes:
Specific task: Handle null user object
Expected behaviour: Return empty array, not throw
Constraint: Don't change the happy path
Output detail: Add a comment
Every element of context you add removes a guess the AI has to make. Fewer guesses = more accurate output.
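To make that concrete, here's a sketch of the kind of change the better prompt should produce. The function and field names (getUserTags, tags) are illustrative, not from the article:

```typescript
// Hypothetical shape of the user object; illustrative only.
interface User {
  tags?: string[];
}

function getUserTags(user: User | null): string[] {
  // Null check requested by the prompt: return an empty array
  // instead of throwing when no user is provided.
  if (user === null) {
    return [];
  }
  // Happy path unchanged: return the user's tags, defaulting to [].
  return user.tags ?? [];
}
```

Notice how each sentence of the prompt maps to a visible decision in the code: the null check, the empty-array return, and the untouched happy path.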
Prompt Patterns That Work for Developers
The Role Pattern
Telling the AI to take a specific role often improves output quality:
"You are a senior TypeScript developer reviewing code for a production Next.js application. Review this function for potential bugs, edge cases, and performance issues. Be specific about what you'd change and why."
Roles work because they prime the model to apply relevant knowledge and standards rather than give generic responses.
The Step-by-Step Pattern
For complex tasks, ask the AI to break down its thinking:
"Before writing any code, explain your plan for implementing this feature. List the steps you'll take and flag any decisions I should weigh in on before you start."
This prevents the AI from charging ahead and building something that doesn't fit your architecture. It also catches misunderstandings early.
The Constraint Pattern
Always include what you don't want:
"Refactor this component to use hooks instead of class syntax. Do NOT change the prop interface. Do NOT change the styling. Keep all existing test assertions passing."
Negative constraints are as important as positive ones. Without them, the AI will optimise for what it thinks is best — which may not match your requirements.
The Format Pattern
Specify output format explicitly:
"Review this code and give me your response in this format: Summary (one sentence), Issues Found (bulleted list), Recommended Changes (numbered list with code examples for each)."
Structured output is easier to act on than a wall of text.
Prompting in Cursor Specifically
Cursor has two main interfaces: the autocomplete (inline) and the Composer (Cmd+I). They need different prompting approaches.
For inline autocomplete: Context comes from your code. The better your code is named and organised, the better autocomplete works. Clear function names, typed parameters, and descriptive variable names give the model more signal.
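As an illustration (the names here are made up for the example), compare what a completion model sees in a well-typed, well-named signature versus something like `function calc(d)`:

```typescript
// Explicit types and descriptive names give the completion model
// strong signal about what the function body should do.
interface LineItem {
  unitPriceInPence: number;
  quantity: number;
}

// A signature this descriptive practically writes its own body;
// `calc(d)` gives the model almost nothing to go on.
function calculateOrderTotalInPence(items: LineItem[]): number {
  return items.reduce(
    (total, item) => total + item.unitPriceInPence * item.quantity,
    0,
  );
}
```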
For Composer prompts: Be as specific as you would in a Jira ticket. Include:
What file/component you're working in (Cursor often knows, but confirm it)
What the current behaviour is
What the desired behaviour is
Any relevant constraints (don't change the API, don't install new dependencies, etc.)
Example Composer prompt:
"In the /app/api/checkout/route.ts file, add error handling for when the Stripe webhook payload is malformed. Currently it throws an unhandled error. It should catch any JSON parse errors, log them with the request body, and return a 400 status with the message 'Invalid payload'. Do not change the success path."
That prompt will produce a precise change. "Add error handling to the checkout route" will produce something that may or may not fit what you actually need.
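A sketch of the change that prompt describes, assuming a standard Next.js route handler. The helper handleCheckoutEvent is a made-up stand-in for the route's existing success-path logic:

```typescript
// Illustrative stand-in for the route's existing success-path logic.
async function handleCheckoutEvent(payload: unknown): Promise<Response> {
  return new Response(JSON.stringify({ received: true }), { status: 200 });
}

export async function POST(request: Request): Promise<Response> {
  const rawBody = await request.text();

  let payload: unknown;
  try {
    payload = JSON.parse(rawBody);
  } catch (error) {
    // As the prompt asks: log the parse failure with the request body,
    // then return a 400 rather than letting the error go unhandled.
    console.error("Malformed Stripe webhook payload", { rawBody, error });
    return new Response("Invalid payload", { status: 400 });
  }

  // Success path unchanged: hand the parsed payload to existing logic.
  return handleCheckoutEvent(payload);
}
```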
Prompting Claude for Non-Code Tasks
When I'm using Claude for proposals, documentation, or communication, different patterns apply.
For proposals and documents:
Give Claude the raw material and let it do the structuring:
"Here are my notes from a discovery call with a potential client [paste notes]. Write a professional project proposal for a WordPress website build. Include: project overview, what's included, what's excluded, timeline (2 weeks), pricing (£750), and a single clear CTA to sign off via email. Tone should be professional but friendly, not formal."
For difficult emails:
Give context about relationship and desired outcome:
"I need to tell a client that the project will be delayed by one week. We're 3 weeks in, they've been pleasant to work with, and I want to maintain the relationship. Write an email that explains the delay honestly, takes responsibility, and proposes a revised timeline. Don't over-apologise. Be direct."
What Makes a Prompt Fail
Too vague: "Improve this code" → What does improve mean? Performance? Readability? Error handling?
Missing context: "Fix the bug" → What bug? What environment? What error message?
Conflicting requirements: "Make it faster and more readable and add more features" → These goals can conflict. Prioritise.
Assuming knowledge: "Use our standard pattern" → The AI doesn't know your standard pattern unless you show it.
No format guidance: Long responses that mix code and explanation can be hard to parse. Ask for a specific structure.
Building a Personal Prompt Library
I keep a notes file with prompts that consistently work well for my use cases. Some examples from mine:
Code review: "Review this code as a senior developer. Focus on: edge cases, error handling, and anything that would fail in production. Be specific."
Refactoring: "Refactor this to be more readable without changing behaviour. Prefer clarity over cleverness."
Documentation: "Write JSDoc comments for this function. Include: what it does, parameters with types, return value, and one usage example."
Debugging: "I'm getting [error]. Here's the code [paste]. Explain what's causing it and suggest a fix. If there are multiple possible causes, list them."
Having these ready saves time and produces more consistent results.
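For a sense of what the documentation prompt tends to produce, here's output in roughly the shape it asks for. The function itself is a made-up example:

```typescript
/**
 * Converts a price in pence to a formatted pound string.
 *
 * @param pence - The amount in pence, e.g. 75000.
 * @returns The formatted price, e.g. "£750.00".
 * @example
 * formatPence(75000); // "£750.00"
 */
function formatPence(pence: number): string {
  return `£${(pence / 100).toFixed(2)}`;
}
```

Because the prompt enumerates exactly what the comment must contain (description, typed parameters, return value, one usage example), the output needs little or no editing before it goes into the codebase.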
Prompt engineering isn't magic. It's just clear communication, applied to a new kind of tool. The developers who get consistently good output from AI tools aren't doing anything mysterious — they're being specific, providing context, and iterating when the first response isn't right.
If you want to see these prompting techniques used in real coding sessions, the @PromptToCode YouTube channel documents exactly that. Real projects, real prompts, real output — not curated demos.
