ChatGPT Prompts for Coding
AI coding assistants are only as good as the context you give them. Pasting raw code with no explanation gets you generic suggestions. These prompts show you how to frame your language, framework, team skill level, and constraints so you get responses that are actually usable — whether you're debugging production issues, reviewing a PR, or making an architecture call.
Example Prompts
Code review with security focus
You are a senior software engineer with a strong background in application security. I need you to review the following code with two goals: (1) identify any security vulnerabilities, and (2) flag code quality issues that could become maintainability problems at scale.
Context:
- Language: Python 3.11
- Framework: FastAPI
- This is a user authentication endpoint that handles login and issues JWTs
- This code will be reviewed by junior engineers afterwards, so please explain why each issue matters, not just what the issue is
- We use Postgres via SQLAlchemy ORM
Here is the code:
[PASTE YOUR CODE HERE]
For each issue you find, structure your response as:
1. Severity: Critical / High / Medium / Low
2. Location: function name or line reference
3. Issue: what's wrong
4. Why it matters: one sentence on the real-world risk
5. Fix: the corrected code snippet
After listing all issues, give me an overall assessment (2–3 sentences) of the code's security posture and one recommendation for a structural improvement.
Debugging session with full error context
You are a senior backend engineer helping me debug a production issue. I will give you the full context — please do not ask me clarifying questions if you can form a reasonable hypothesis from what I've provided.
Environment:
- Node.js 20, Express 4.18
- PostgreSQL 15 via pg (node-postgres)
- Deployed on Railway, Node environment: production
The problem:
Users are intermittently getting a 500 error when submitting a multi-step form. The error does not happen in development. It happens roughly 1 in 20 requests in production and is not reproducible with the same data locally.
Error from logs:
"Error: Connection terminated unexpectedly
at Connection.parseE (/app/node_modules/pg/lib/connection.js:612:11)"
What I've already tried:
- Checked that env vars are set correctly in Railway — they are
- Confirmed the query works in isolation via a database GUI
- Added connection pool logging — pool size is set to 10, rarely goes above 3 concurrent connections
Here is the relevant database query function:
[PASTE YOUR FUNCTION HERE]
Walk me through your diagnostic reasoning. Give me your top 2–3 hypotheses ranked by likelihood, and for each one: what evidence supports it, how to confirm it, and how to fix it if confirmed.
Architecture decision: microservices vs. monolith
You are a software architect who has led engineering teams at both early-stage startups and scaling companies. I need help thinking through an architecture decision — I want your honest recommendation, not an "it depends" non-answer.
My situation:
- B2B SaaS product: a compliance workflow tool for HR teams
- Current state: Rails monolith, 3 years old, ~95k lines of code
- Team: 6 engineers (4 backend, 2 frontend), no dedicated DevOps
- Scale: 800 customers, ~4M requests/day, no major performance issues yet
- Pain points with current architecture: a few God objects in the domain model, one critical background job processor that is fragile and coupled to the main app, slow test suite (14 minutes)
- Business context: we just closed a Series A and will hire 3 more engineers in Q3
The question: should we start extracting microservices, or double down on improving the monolith?
Structure your answer as:
1. Your recommendation (one clear sentence)
2. The 3 most important reasons for your recommendation
3. The strongest argument for the opposite choice — and why you're setting it aside
4. The highest-risk assumption in your recommendation
5. What I should do in the next 30 days regardless of which direction I choose
Explain complex code for junior developers
You are a senior engineer and technical mentor. Your goal is to explain the following code to a junior developer who understands basic JavaScript (variables, functions, loops) but has not worked with async/await patterns, closures, or higher-order functions before.
Code to explain:
[PASTE YOUR CODE HERE]
Guidelines for your explanation:
- Do not use the phrase "essentially" or "simply" — they are condescending
- Use a real-world analogy for any abstract concept (closures, callbacks, Promises)
- Walk through the code line by line where it's dense, but summarize straightforward sections
- After the explanation, add a short "what could go wrong" section: 2–3 common mistakes a junior dev might make when modifying this code
- End with one question the junior dev could try to answer themselves to test their understanding (include the answer below, hidden in a note)
The goal is genuine understanding, not just "it works." Assume this developer will need to modify this code in 3 months without you around.
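To make this concrete, here is a short hypothetical snippet of the kind worth pasting into that prompt. It combines a closure (a private cache) with async/await, the exact patterns the prompt asks the model to unpack:

```typescript
// A memoized async fetcher: `cache` lives in the closure, so callers
// can't touch it directly, and repeat lookups share one in-flight Promise.
function makeCachedFetcher(fetchFn: (id: string) => Promise<string>) {
  const cache = new Map<string, Promise<string>>();

  return async function get(id: string): Promise<string> {
    if (!cache.has(id)) {
      cache.set(id, fetchFn(id)); // store the Promise itself, not the resolved value
    }
    return cache.get(id)!;
  };
}

// Usage: `fakeFetch` is a stand-in for a real network call.
let calls = 0;
const fakeFetch = async (id: string): Promise<string> => {
  calls += 1;
  return `user:${id}`;
};

const getUser = makeCachedFetcher(fakeFetch);

async function demo(): Promise<void> {
  await getUser("42");
  await getUser("42"); // second call is served from the cache
  console.log(`fetch ran ${calls} time(s)`);
}

demo();
```

A junior developer who can answer "why does the cache survive between calls to `getUser`?" has understood the closure; that's the kind of check-your-understanding question the prompt asks the model to end with.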
Write unit tests for a function
You are a senior software engineer specializing in test-driven development. Write a comprehensive unit test suite for the following function.
Language and testing framework: TypeScript with Vitest
Function to test:
[PASTE YOUR FUNCTION HERE]
Requirements for the test suite:
1. Happy path tests: cover the expected behavior with representative inputs
2. Edge cases: empty inputs, boundary values, null/undefined where applicable
3. Error cases: invalid inputs that should throw, and what they should throw
4. At least one test that documents a non-obvious behavior of this function (something a future dev might not expect)
For each test:
- Use descriptive test names in the format: "should [expected behavior] when [condition]"
- Add a one-line comment above any non-obvious test explaining why it exists
- Do not mock what you don't have to — only mock external dependencies (API calls, DB, file system)
After the tests, write a brief coverage note (3–4 sentences) explaining what you chose not to test and why.
Tips for Coding Prompts
Always specify your language version, framework, and any libraries involved. "Python" and "Python 3.11 with FastAPI" produce code of very different quality. Models have training data across multiple versions of frameworks and will default to common or older patterns if you don't anchor them. For debugging especially, include your deployment environment — a bug that doesn't reproduce locally usually has a production-specific cause.
For architecture and design questions, explicitly ask for a concrete recommendation rather than a list of tradeoffs. Models will hedge toward "it depends" by default because that's the safe answer. Adding "give me your honest recommendation, not an 'it depends'" forces the model to commit to a position, which is far more useful for making an actual decision.
When asking for code explanations, specify who the target audience is and what they already know. A junior developer who understands loops but not closures needs a completely different explanation than a mid-level dev who's new to a specific pattern. Without this context, models tend to over-explain obvious things and under-explain the genuinely confusing parts.
Why use PromptBro for coding prompts?
PromptBro's voice-first flow is especially useful before a debugging session — speak the problem out loud, describe what you've already tried, and PromptBro structures it into a prompt with all the context an AI needs to give you a useful diagnostic response. No more realizing halfway through a chat that you forgot to mention your framework version or deployment environment.
Try PromptBro free — build your first prompt in 60 seconds →