Most developers approach AI coding with the wrong mental model. The tendency, exacerbated by the vibe coding trend, is to let AI take the wheel. “Build me a UI using Tailwind that lets users see their account info” is an example initial request, followed by more and more short (and curt) instructions.
The result? Hallucinations, slop, and edge cases missed. But by taking a more strategic approach to AI-assisted coding, you can achieve far better results and move from producing code to producing products.
The first way to do this is through better tactical prompting. By seeing prompting as a new developer discipline, you can significantly improve your results from AI coding. Just as developers follow principles such as DRY and SOLID, they should approach prompting with the same rigor they bring to traditional code: designing, refining, and optimizing for clear outputs.
The second is to understand that LLMs shine when treated like a senior teammate: someone who can think with you, given the right context and room to reason. This simple shift in mindset–framing AI as a senior developer rather than a junior one–enables better prompting, better outputs, and more robust systems.
This approach transforms AI from a code generator into a programming partner capable of exploring architectural decisions, suggesting testing strategies, and identifying potential pitfalls before they become production issues. You’ll find yourself spending less time fixing AI-generated mistakes and more time leveraging its capabilities to solve genuinely challenging problems.
So, what does it mean to treat AI like a senior developer?
Give Your AI A PRD
Good developers don’t start coding without understanding the problem. Senior engineers know that requirements gathering is the foundation of quality code. So why are we giving AI one-line prompts and expecting production-ready results?
Writing a clear, structured task description, like a PRD, fundamentally changes how AI responds to your requests. When you include context, examples, constraints, and expected outcomes, you transform vague ideas into actionable specifications. This approach mirrors how experienced developers communicate with each other–through comprehensive requirements that leave little room for misinterpretation.
Example
TASK: Build a Currency Conversion API Service
PROJECT CONTEXT:
- Part of our global payment processing platform
- Will be used by both our web and mobile applications
- Expected traffic: ~5,000 conversions per hour
- Must comply with financial regulations in EU, US, and APAC regions
USER STORIES:
- As a user making an international payment, I want to see the exact amount I’ll pay in my local currency before confirming.
- As a finance team member, I need accurate conversion records for reconciliation and auditing.
- As a product manager, I need conversion rates to update at least daily to ensure accuracy.
TECHNICAL REQUIREMENTS:
- Backend function for REST API endpoint
- Use the official European Central Bank (ECB) API as the data source for rates
- Cache exchange rates (max 1 hour TTL) to reduce external API calls
- Implement proper error handling with descriptive messages
- Include comprehensive logging for debugging and audit trails
INPUTS:
- amount: Decimal number (required)
- from_currency: ISO 4217 three-letter code (required)
- to_currency: ISO 4217 three-letter code (required)
- reference_date: ISO 8601 date format (optional, defaults to current date)
OUTPUTS:
- JSON response with:
- original_amount: The input amount
- converted_amount: The converted amount (rounded to 2 decimal places)
- exchange_rate: The rate used for conversion
- from_currency: Original currency code
- to_currency: Target currency code
- timestamp: ISO 8601 timestamp of when conversion was performed
- reference_date: The date for which the exchange rate was retrieved
ERROR HANDLING:
- Invalid currency codes: Return clear error identifying the invalid code
- ECB API unavailable: Fallback to cached rates if available, otherwise return service unavailable
- Invalid amount format: Return validation error with example of correct format
DEPLOYMENT ENVIRONMENT:
- AWS Lambda with Node.js 18 runtime
- Redis for caching layer
This level of detail gives the AI a comprehensive understanding of what you’re building, why you’re building it, and how it should behave. You can even use a service such as ChatPRD to create the initial design doc.
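Given a PRD like this, the model has enough to get the core logic right on the first pass. As a rough sketch of what the conversion step might look like (the function and type names here are illustrative, and the ECB fetch and Redis cache are left out; a real service would supply the rate from its caching layer):

```typescript
// Illustrative sketch of the conversion logic the PRD describes.
// The rate would come from the ECB API via the Redis cache in practice.

type ConversionResult = {
  original_amount: number;
  converted_amount: number;
  exchange_rate: number;
  from_currency: string;
  to_currency: string;
  timestamp: string;
};

// ISO 4217 codes are three uppercase letters, per the INPUTS section.
const ISO_4217 = /^[A-Z]{3}$/;

function convert(
  amount: number,
  fromCurrency: string,
  toCurrency: string,
  rate: number
): ConversionResult {
  // Invalid currency codes: identify the offending code, per ERROR HANDLING.
  if (!ISO_4217.test(fromCurrency)) {
    throw new Error(`Invalid currency code: ${fromCurrency}`);
  }
  if (!ISO_4217.test(toCurrency)) {
    throw new Error(`Invalid currency code: ${toCurrency}`);
  }
  // Invalid amount: validation error with an example of the correct format.
  if (!Number.isFinite(amount) || amount < 0) {
    throw new Error("Invalid amount: expected a non-negative number, e.g. 100.50");
  }
  // Round to 2 decimal places, as the OUTPUTS section requires.
  const converted = Math.round(amount * rate * 100) / 100;
  return {
    original_amount: amount,
    converted_amount: converted,
    exchange_rate: rate,
    from_currency: fromCurrency,
    to_currency: toCurrency,
    timestamp: new Date().toISOString(),
  };
}
```

Notice how much of this the PRD pins down directly: the field names, the rounding rule, and the error messages all trace back to a specific requirement, leaving the model little room to improvise.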
Allow the AI to Ask Questions
If something’s unclear, a senior developer will ask questions before writing broken code. They know that clarifying requirements upfront saves hours of rework later. Your AI assistant can do the same with the right prompt structure.
By adding something as simple as “You may ask clarifying questions before coding” to your prompt, the interaction goes from a one-way instruction to a collaborative conversation. The model can then debug the requirements, not just the code. This leverages an LLM’s analytical abilities at the most valuable stage of development.
Example
TASK: You are helping implement a new user authentication system. Before you begin coding, identify anything that’s unclear in these requirements. Ask specific questions that would help clarify the implementation details, dependencies, or potential issues.
REQUIREMENTS:
- Implement a user authentication system with password and 2FA support
- Should work with our existing PostgreSQL database
- Needs to be secure and follow best practices
- Should support social login options
- Must be scalable as we grow
- Implement password reset functionality
- Session management should be robust
Here’s what Claude asks if we put that task to it:
I’ll help you clarify the authentication system requirements. Here are specific questions that would help establish clearer implementation details:
Database integration:
- What user data schema do you currently have in PostgreSQL?
- Are there existing authentication tables we need to integrate with?
- Will we need to migrate existing user accounts?
2FA implementation:
- What types of 2FA are required? (SMS, email, authenticator apps, hardware keys?)
- Should 2FA be mandatory or optional for users?
- Are there compliance requirements (like NIST) we need to follow for 2FA?
Social login specifics:
- Which social login providers are required? (Google, Facebook, Apple, etc.)
- How should we handle account linking if a user signs up with email first, then later uses social login?
- What user data should we retrieve from social providers?
Password policies:
- What are the specific requirements for password complexity?
- What is the expected password reset flow? (Email link, security questions, etc.)
- Should we implement password expiration policies?
Session management:
- What is the expected session duration?
- Do you need support for concurrent sessions across devices?
- Should sessions persist across browser restarts?
- How should we handle session invalidation during password changes?
Scalability expectations:
- What is your current user base size and expected growth rate?
- Are there peak load times we should design for?
- Is geographical distribution of users a consideration?
Security requirements:
- Are there specific compliance standards we need to meet? (SOC2, GDPR, HIPAA, etc.)
- How should we handle rate limiting for login attempts?
- What logging requirements do you have for authentication events?
- Is there a need for IP-based restrictions or anomaly detection?
Application architecture:
- What is the front-end technology we’re integrating with?
- Will this be a stateless API or maintaining server-side sessions?
- Is this for a single application or should it serve as an SSO for multiple applications?
Clarifying these points would help create a more precise implementation plan with appropriate technology choices and development priorities.
Answering all those questions will probably take the entire context window, even before you get to code. But by encouraging AI to ask these detailed questions first, you get a much clearer picture of potential issues and edge cases. You can then work iteratively with the AI to hone the idea and ensure the most critical considerations aren’t missed.
Iterate With the AI
Senior engineers build iteratively. As above, this means back-and-forth with the AI, but you can also have the model iterate on its own work: ask it to start by thinking about architecture, then implement components, then refine.
Breaking work into chunks—first plan, then generate, then refactor—mirrors the development workflow of experienced engineers who wouldn’t dream of writing an entire application in one sitting. By requesting architecture diagrams, component breakdowns, or pseudocode first, you establish a solid foundation that guides subsequent development. Similarly, explicit refactoring requests after initial implementation help polish rough edges into production-quality code.
Example
TASK: We need to build a data processing pipeline that ingests CSV files of customer transactions, cleanses the data, identifies fraudulent patterns, and generates daily reports. Let’s work through this iteratively.
PHASE 1: ARCHITECTURE PLANNING
First, outline the high-level architecture for this pipeline. Include:
- Components and their responsibilities
- Data flow between components
- Technologies/libraries you would recommend
- Key considerations for scalability, reliability, and maintainability
PHASE 2: COMPONENT DESIGN
Now that we’ve agreed on the high-level architecture, let’s design the data ingestion component in more detail:
- Input validation approach
- Error handling strategy
- Parsing and normalization logic
- Performance considerations
- Potential failure modes and mitigations
PHASE 3: IMPLEMENTATION
Implement the data ingestion component based on our agreed design:
- File loading and validation
- CSV parsing with proper error handling
- Data normalization functions
- Unit tests for the component
PHASE 4: REFACTORING
Review the implementation and suggest improvements:
- Code organization and structure
- Performance optimizations
- Error handling enhancements
- Additional edge cases to handle
PHASE 5: EXTENSION
Now that we have a solid ingestion component, outline how we would implement the fraud detection component that will process the normalized data.
This staged approach transforms the AI from a code generator into a collaborative pair programmer that builds solutions methodically.
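To make Phase 3 concrete, here is a hypothetical sketch of the ingestion step agreed in the phases above: parse a CSV of transactions, validate each row, and separate good rows from errors rather than failing on the first bad line (the `Transaction` shape and column names are illustrative assumptions, not part of the original prompt):

```typescript
// Hypothetical Phase 3 sketch: CSV parsing with per-row error handling.
type Transaction = { customerId: string; amount: number; date: string };
type ParseResult = { rows: Transaction[]; errors: string[] };

function parseTransactions(csv: string): ParseResult {
  const rows: Transaction[] = [];
  const errors: string[] = [];
  const lines = csv.trim().split("\n");
  // Skip the header row; report errors with 1-based file line numbers.
  for (const [i, line] of lines.slice(1).entries()) {
    const [customerId, amountRaw, date] = line.split(",").map((s) => s.trim());
    const amount = Number(amountRaw);
    if (!customerId || !Number.isFinite(amount) || !date) {
      errors.push(`Row ${i + 2}: could not parse "${line}"`);
      continue; // collect the error and keep going, per the error strategy
    }
    rows.push({ customerId, amount, date });
  }
  return { rows, errors };
}
```

Because each phase produced an agreed design first, an implementation like this can be reviewed against that design instead of being reverse-engineered after the fact.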
AI Can Explain Its Decisions
Good engineers explain trade-offs, assumptions, and logic. They communicate the reasoning behind their choices. This transparency is key to effective collaboration and code quality.
Prompting LLMs to explain their approach creates a checkpoint in the development process. You gain insight into how they interpret requirements and plan to solve the problem. This pre-implementation explanation reveals the model’s mental model, allowing you to catch misunderstandings before they manifest as bugs.
Example
TASK: We need to implement a caching layer for our product catalog API to reduce database load and improve response times. Before writing any code, explain:
- Your recommended caching strategy and why it’s appropriate for this use case
- Key technical decisions and trade-offs you’re making
- Potential challenges and how you plan to address them
- Assumptions you’re making about the current system
- Metrics you would track to validate the solution’s effectiveness
After explaining your approach, implement the caching layer with detailed comments explaining your implementation choices.
CONTEXT:
- Product catalog contains ~50,000 items
- Items are updated infrequently (1-2% change daily)
- API serves ~2 million requests per day
- Peak traffic is 5x average
- Current response time averages 300ms, target is <100ms
- Some product data is personalized based on user location/preferences
- Running on Kubernetes with Redis available as a caching option
- Each product has ~20 attributes, average response size is 5KB
Here is the output:
Product Catalog API Caching Strategy
Recommended Caching Strategy: I recommend implementing a multi-level caching strategy with both Redis and application-level caching:
- Primary Cache: Redis for distributed caching with TTL (Time-To-Live)
- Secondary Cache: In-memory application cache for the most frequently accessed products
- Cache Invalidation: Event-based invalidation combined with TTL as a fallback
This hybrid approach is appropriate because:
- The product catalog has infrequent updates (only 1-2% daily change)
- The high request volume (2M/day) will benefit significantly from caching
- Having both distributed and local caches handles peak traffic (5x average) effectively
Key Technical Decisions & Trade-offs
- Cache Granularity: Individual products vs. collections
- Decision: Cache both individual products and common collection queries
- Trade-off: More memory usage but better performance for browse scenarios
- Cache Keys Design:
- Decision: Composite keys incorporating product ID, user location, and preference hash
- Trade-off: More complex key management but enables personalization
- Invalidation Strategy:
- Decision: Publish-subscribe model with Redis channels for real-time invalidation
- Trade-off: Added complexity but ensures consistency
- Cache Warming:
- Decision: Implement a proactive cache warming strategy for popular products
- Trade-off: Additional system overhead but prevents cold cache issues
- Compression:
- Decision: Apply compression for cached data
- Trade-off: Slight CPU overhead vs. significant memory savings
- …
It continues. If you are interested, the final implementation the LLM suggested is in this gist:
<script src="https://gist.github.com/ajtatey/df21eb278e32afd95123fbd2ca99c36c.js"></script>
By forcing the AI to think through the solution holistically before writing code, you make problems visible before implementation begins.
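The strategy the model explained above is easy to check against a sketch. Here is a minimal illustration of the in-memory layer it called the "secondary cache": a TTL cache in front of a loader, where Redis and the database would stand behind the injected `loadFromDb` function (all names here are illustrative, not from the gist):

```typescript
// Minimal sketch of an application-level TTL cache, the "secondary cache"
// in the model's multi-level strategy. Redis would sit behind loadFromDb.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Read-through pattern: serve from cache, fall back to the slow path once.
async function getProduct(
  id: string,
  cache: TtlCache<string>,
  loadFromDb: (id: string) => Promise<string>
): Promise<string> {
  const hit = cache.get(id);
  if (hit !== undefined) return hit; // fast path: no DB round-trip
  const value = await loadFromDb(id);
  cache.set(id, value);
  return value;
}
```

The TTL here is the fallback invalidation mechanism the model described; the pub/sub invalidation it also proposed would delete keys from this store when a product changes.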
AI Can Say “I Don’t Know”
Good engineers flag unknowns and admit limitations. They recognize that acknowledging uncertainty is a strength, not a weakness.
Prompting the LLM to annotate guesses or assumptions creates a layer of transparency in your AI code. Just as senior developers distinguish between verified facts and educated guesses in their work, this practice helps you separate reliable outputs from speculative ones. When the model clearly marks areas of uncertainty, you can focus additional verification efforts where they’re most needed. This targeted validation process mirrors how experienced teams manage risk by concentrating scrutiny on the least certain components.
Example
TASK: You’re designing a recommendation algorithm for an e-commerce platform. Based on the information provided:
- Propose a suitable recommendation approach
- Outline how you would implement it
- Identify any areas where you’re uncertain or would need more information
- Clearly mark any assumptions you’re making
CONTEXT:
- Platform sells fashion items (clothing, accessories)
- Has approximately 200,000 active monthly users
- Database contains 2 years of purchase history
- We have user browsing data and purchase history
- Currently no recommendation system in place
- Tech stack includes PostgreSQL, Python backend, React frontend
After your solution, explicitly list any parts where:
- You’re making educated guesses rather than confident assertions
- You would need additional information to make a better recommendation
- There are multiple valid approaches and you’re unsure which is optimal
The model returns the implementation proposal, but appends this to the end:
## Uncertain Areas and Information Needs
### Educated Guesses Rather Than Confident Assertions
- The optimal weighting between collaborative filtering, content-based filtering, and trend-based recommendations (currently proposed as 60/30/10)
- The ideal frequency for model retraining (suggested daily, but could be weekly)
- The impact of seasonality on this specific fashion platform’s recommendations
- The degree to which visual similarity (vs. metadata similarity) drives purchases
### Additional Information Needs
- Conversion rates and funnel metrics to establish a baseline
- Details about product taxonomy and attributes
- Information about customer segments and their purchasing behaviors
- Returns data and reasons for returns
- Marketing calendar and promotional events that might affect recommendation relevance
- Geographic distribution of users (for seasonal adjustments)
- User demographics to better understand preference patterns
- Inventory constraints that might affect recommendation strategy
### Multiple Valid Approaches
- Model Architecture:
- Matrix factorization vs. neural collaborative filtering vs. graph-based methods
- Explicit vs. implicit feedback emphasis
- Batch learning vs. online learning
- Cold Start Handling:
- Content-based only vs. popularity-based vs. quick questionnaire
- Category-based defaults vs. trending items
- Recommendation Diversity:
- How to balance similarity with discovery
- Whether to explicitly model diversity or let it emerge naturally
- Deployment Strategy:
- Full system at once vs. incremental rollout by page/feature
- Single model vs. ensemble of specialized models for different contexts
By encouraging the AI to identify uncertainties and assumptions, you get a much more honest assessment of its solution and can focus on resolving the most important open questions.
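The flagged assumptions are also directly actionable. The 60/30/10 weighting the model labeled as an educated guess, for instance, reduces to a small blending function you can A/B test rather than accept on faith (the type and weight names below are illustrative):

```typescript
// Sketch of the proposed score blend; the 60/30/10 split is the model's
// educated guess and should be tuned, not treated as a constant.
type Scores = { collaborative: number; content: number; trend: number };

const WEIGHTS = { collaborative: 0.6, content: 0.3, trend: 0.1 };

function blendedScore(s: Scores): number {
  return (
    WEIGHTS.collaborative * s.collaborative +
    WEIGHTS.content * s.content +
    WEIGHTS.trend * s.trend
  );
}

// Rank items by blended score, highest first.
function rank(items: Map<string, Scores>): string[] {
  return [...items.entries()]
    .sort((a, b) => blendedScore(b[1]) - blendedScore(a[1]))
    .map(([id]) => id);
}
```

Because the model marked the weighting as uncertain, you know to put your validation effort here first, exactly the risk-targeting behavior you'd expect from a senior teammate.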
AI Should Give You Maintainable, Idiomatic Code
It’s not just about code that works; it’s about code written the right way. Senior engineers write code that follows best practices, conforms to team standards, and can be maintained by others.
Just as experienced developers prioritize readability and maintainability over clever hacks, explicitly prompting for idiomatic code ensures you receive solutions that integrate with existing codebases. This approach yields code that your team can understand, extend, and maintain long after the initial implementation.
Example
Here, we ask not just for TypeScript, but a specific version, following a particular style guide, and with specific functionality:
TASK: Implement a service in TypeScript that fetches and processes user activity data. The service will be part of a larger application that tracks user engagement metrics.
REQUIREMENTS:
- The service should fetch user activity data from a REST API
- It should process the data to calculate engagement metrics
- It should cache results to minimize API calls
- It should handle errors gracefully with proper logging
CODING STANDARDS:
- Use modern TypeScript (v4.5+) with strict type checking
- Follow Airbnb style guide conventions
- Use functional programming patterns where appropriate
- Prefer immutability and pure functions
- Use async/await for asynchronous operations
- Include comprehensive error handling
- Add complete JSDoc documentation for public methods
- Write code that follows SOLID principles
- Include unit tests using Jest
TECHNICAL CONTEXT:
- The application uses Axios for HTTP requests
- We use Redis for caching through the ‘ioredis’ client
- Logging is done via Winston logger
- The application runs in a Node.js environment
- TypeScript configuration has strict mode enabled
Please provide a complete, production-ready implementation with proper structure, error handling, and tests. Include explanatory comments and documentation.
This simple prompt produced so much code that we can’t add it all here. Instead, this repo shows the service, types, factory function, tests, and docs. Claude produced everything from this single prompt (do you notice the obvious mistake?).
By specifying detailed coding standards and patterns, you get idiomatic code that follows team conventions and best practices, making it much more maintainable and reliable.
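To give a flavor of what such a prompt yields without reproducing the whole repo, here is a hypothetical fragment in the requested style: immutable types, a pure function, explicit error handling, and JSDoc (the names and metric shape are illustrative, not taken from the linked repo):

```typescript
// Illustrative fragment in the prompt's requested style: strict types,
// immutability, pure functions, JSDoc, explicit error handling.

interface EngagementMetrics {
  readonly totalEvents: number;
  readonly averagePerDay: number;
}

/**
 * Computes engagement metrics from daily activity counts.
 *
 * Pure and dependency-free so it is trivial to unit test with Jest,
 * per the coding standards in the prompt.
 *
 * @param dailyCounts - Events per day for the reporting window.
 * @returns Aggregate metrics for the window.
 * @throws {Error} If the reporting window is empty.
 */
const computeMetrics = (dailyCounts: readonly number[]): EngagementMetrics => {
  if (dailyCounts.length === 0) {
    throw new Error("Cannot compute metrics for an empty window");
  }
  const totalEvents = dailyCounts.reduce((sum, n) => sum + n, 0);
  return { totalEvents, averagePerDay: totalEvents / dailyCounts.length };
};
```

Every constraint in the fragment traces back to a line in the CODING STANDARDS section, which is why that section is worth writing even when it feels like boilerplate.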
Build a Better Mental Model
You get out what you put in. If you treat AI like a junior developer, you get rushed solutions that miss critical details, create technical debt, and require extensive rework. The AI will follow instructions literally without raising concerns, leading to exactly the hallucinations, slop, and edge cases described earlier.
But if you treat it like a senior teammate–setting context, reviewing work, iterating, and asking for deeper thinking–you’ll tap into LLMs’ incredible potential.
This is a little more work for you, but it will yield much better results. The key insight is that AI isn’t magic—it’s a tool that works best with structured input and collaborative iteration. Treating AI like a senior developer rather than a junior one sets it up for success and maximizes its value to your development process.
Neon is a serverless Postgres database used by v0 and Replit Agent. It also works like a charm with AI IDEs like Cursor via its MCP Server. Sign up for Neon (we have a Free Plan) and start building.