AI Development
These guidelines describe how to use AI-powered development tools in our workflow. The goal is to make every developer faster and more effective by integrating AI into daily development. AI handles the repetitive and mechanical parts so you can focus on architecture, design, and creative problem-solving.
Our principle: write as little code by hand as possible. Let AI agents do the writing. Your job is to think, design, review, and decide — not to type code. The less code you write manually, the more time you spend on what actually matters: solving the right problems the right way.
What the Agent Can Do
- Search the codebase - Find files, functions, patterns, and dependencies across the repo.
- Generate and refactor code - Write new code or improve existing code following project conventions.
- Run tests - Execute unit, integration, and end-to-end tests to validate changes.
- Generate documentation - Create and update docs based on code changes.
- Create pull requests - Open PRs with proper descriptions, linked tickets, and review assignments.
Workflows
Follow these playbooks to get consistent, high-quality results from AI agents.
Feature Development
- Provide the agent with the ticket, acceptance criteria, and relevant design docs.
- Let the agent explore the repo to understand existing patterns and architecture.
- Have the agent propose an implementation plan (files to modify, new files, testing strategy).
- Review and approve the plan before any code is written.
- Agent implements changes incrementally, running tests after each step.
- Agent writes unit and integration tests for the new feature.
- Agent creates a pull request with a clear description and ticket link.
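The plan the agent proposes at step 3 can be short. The sketch below is illustrative only; the ticket ID, file names, and scope are hypothetical placeholders, not project conventions:

```markdown
## Plan: user-preferences endpoint (TICKET-123 — hypothetical)
**Modify:** src/api/routes.ts — register GET/PUT /preferences
**Add:** src/api/preferences.ts — handler and input validation
**Tests:** unit tests for validation; one integration test for the round trip
**Risks:** none known; no schema changes
```

A plan at this level of detail is enough to catch architectural problems before any code exists.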
Bug Fix
- Provide the agent with the bug report, steps to reproduce, and expected vs actual behavior.
- Agent searches the codebase and identifies the root cause.
- Agent writes a failing test that reproduces the bug.
- Agent implements the minimal fix.
- Agent verifies the fix passes and runs the full test suite.
- Agent creates a pull request referencing the bug ticket.
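Steps 3 and 4 above are a classic red-green loop: the test fails against the buggy code, then passes after the minimal fix. A minimal pytest-style sketch, where `normalize_email` and the bug itself are hypothetical examples rather than real project code:

```python
# Hypothetical regression test the agent writes BEFORE applying the fix.
# Reported bug: normalize_email("User@Example.COM ") kept trailing whitespace
# because the old implementation only lowercased the input.

def normalize_email(raw: str) -> str:
    # Minimal fix: strip surrounding whitespace, then lowercase.
    return raw.strip().lower()

def test_normalize_email_strips_and_lowercases():
    # This assertion failed against the buggy version, documenting the bug;
    # it passes once the minimal fix above is in place.
    assert normalize_email("User@Example.COM ") == "user@example.com"

test_normalize_email_strips_and_lowercases()
```

Keeping the fix minimal makes the diff easy to review and keeps the regression test tightly coupled to the reported behavior.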
How to Write Good Agent Tasks
The key mindset shift: instead of writing code yourself, learn to design clear tasks for the agent.
Before (Manual)
1. Read the ticket
2. Search the codebase manually
3. Write code in your editor
4. Run tests manually
5. Fix issues, repeat
6. Create PR manually

After (Agent-Assisted)
1. Read the ticket
2. Give the agent the ticket + context
3. Review the agent's plan
4. Let the agent implement + test
5. Review the output
6. Agent creates the PR

Guidelines
- Be specific. “Add a REST endpoint for user preferences following our patterns in /src/api/” beats “Add user preferences.”
- Provide context. Point the agent to relevant files, docs, and examples.
- Break it down. Split large features into smaller, independent tasks.
- Review plans before code. Catch architectural issues early.
- You are still responsible. Always review the agent’s output before merging. The agent is a tool, not a replacement for your judgment.
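Putting “be specific” and “provide context” together, a task brief might look like the following sketch; the paths, ticket ID, and scope boundaries are illustrative placeholders:

```markdown
**Task:** Add GET/PUT REST endpoints for user preferences (TICKET-123 — hypothetical).
**Context:** Follow the handler pattern in /src/api/; see docs/api-patterns.md.
**Scope:** Endpoint, input validation, and unit tests. No schema migrations.
**Done when:** Tests pass and the PR links the ticket.
```

A brief like this front-loads exactly the context the agent would otherwise have to guess.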
Setting Up Your Repository
To get the best results from AI agents, your repo should include structured documentation the agent can reference.
Required Docs
```
/docs/
  architecture.md       # System architecture, design decisions, component relationships
  coding-standards.md   # Code style, naming conventions, patterns to follow/avoid
  api-patterns.md       # API design, request/response formats, error handling
  prd.md                # Product requirements, user stories, acceptance criteria
  domain.md             # Domain glossary, business logic, entity relationships
  tech-debt.md          # Known tech debt, workarounds, areas to improve
  testing-strategy.md   # Testing conventions, what to test, coverage expectations
  environment.md        # Environment setup, env vars, third-party service configs
  agent-rules.md        # Agent-specific rules: what to do, what to avoid, constraints
```
- architecture.md - System architecture, major components, data flow, and key design decisions.
- coding-standards.md - Code style, naming conventions, preferred patterns, and anti-patterns.
- api-patterns.md - API design patterns, authentication, request/response formats, and error handling.
- prd.md - Product requirements document: user stories, acceptance criteria, feature scope, and business goals. Helps the agent understand what to build and why.
- domain.md - Domain glossary and business logic. Defines key terms, entity relationships, and domain rules so the agent uses correct naming and understands business constraints.
- tech-debt.md - Known technical debt, workarounds, and legacy patterns. Warns the agent about fragile areas, deprecated approaches, and planned migrations to avoid compounding debt.
- testing-strategy.md - Testing conventions and expectations: what to unit test vs integration test, coverage targets, mocking guidelines, and test file structure.
- environment.md - Environment setup guide: required env vars, third-party service configurations, local development setup, and deployment targets.
- agent-rules.md - Project-specific rules for AI agents: constraints, things to avoid, preferred libraries.
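As a sketch of what agent-rules.md might contain, the rules below are illustrative examples only; the paths and tools named are placeholders, not actual project policy:

```markdown
# Agent Rules (illustrative excerpt)
- Never edit generated files under /src/generated/ (hypothetical path).
- Prefer the project's HTTP client wrapper over raw requests/fetch.
- Run the unit test suite before proposing a PR.
- Ask before adding any new third-party dependency.
```

Short, imperative rules like these are the easiest for an agent to follow consistently.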
Shared Prompt Library
We maintain reusable prompt templates in a shared /ai-prompts/ directory:
```
/ai-prompts/
  feature-development.md  # Building new features
  bug-fix.md              # Debugging and fixing bugs
  code-review.md          # Reviewing pull requests
  refactoring.md          # Refactoring existing code
  test-writing.md         # Writing tests
  documentation.md        # Generating documentation
```
Each template includes context requirements, step-by-step instructions, expected output format, and examples.
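A template skeleton might look like the following sketch for bug-fix.md; the section names are one reasonable structure, not a mandated format:

```markdown
<!-- Illustrative skeleton for /ai-prompts/bug-fix.md -->
## Context required
- Bug ticket link, steps to reproduce, expected vs actual behavior
## Steps
1. Locate and summarize the root cause before changing any code.
2. Write a failing test that reproduces the bug.
3. Apply the minimal fix; run the full test suite.
## Expected output
- A PR referencing the ticket, with the root-cause summary in the description.
```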
Treat the prompt library like shared code: review changes, keep templates current, and remove outdated ones. Everyone is encouraged to contribute.