# Agent Consolidation - Best Practices & Testing Strategy

Best practices from Empeers and recommendations for organizing testing agents.

**Date:** January 8, 2026

## Best Practices from Empeers
### 1. Universal Agent Contract (UAC)

**What it is:** Standardized output format for all agents

**Structure:**
```markdown
# Universal Agent Contract (UAC)

You are acting as <AGENT_NAME> for the Juniro project.

## Output Requirements
Return a structured response with these sections:
1) Assumptions (explicit, minimal)
2) Decisions (with rationale)
3) Interfaces / Data Structures (schemas, endpoints, UI contracts, etc.)
4) Edge Cases & Failure Modes
5) Acceptance Tests (clear, testable)
6) Open Questions (max 2; prefer 0)
```
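Because the six sections are fixed, UAC compliance is machine-checkable. As a hedged sketch (the section names come from the contract above; the function name and the simple substring matching are my own illustration, not an existing tool):

```typescript
// Hypothetical UAC compliance check: verifies an agent response mentions
// all six required sections. Sketch only; matching is a case-insensitive
// substring test, not a real markdown parser.
const UAC_SECTIONS = [
  "Assumptions",
  "Decisions",
  "Interfaces / Data Structures",
  "Edge Cases & Failure Modes",
  "Acceptance Tests",
  "Open Questions",
];

function missingUacSections(response: string): string[] {
  const lower = response.toLowerCase();
  return UAC_SECTIONS.filter((s) => !lower.includes(s.toLowerCase()));
}

// Example: a draft response missing two sections
const draft = `
1) Assumptions: none
2) Decisions: use cursor pagination
3) Interfaces / Data Structures: GET /v1/items
4) Acceptance Tests: returns 200 with items[]
`;
console.log(missingUacSections(draft).join(", "));
// → Edge Cases & Failure Modes, Open Questions
```

A reviewer (or the verifier agent itself) could run a check like this before reading the response in detail.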
**Benefits:**
- Consistent output format
- Easier to parse and review
- Clear expectations
- Better documentation
**Recommendation:** ✅ Adopt - Add UAC to all agent definitions
### 2. Clear Role Separation

**Empeers Model:**
- **Orchestrator**: Product Lead Agent (manages tasks, assigns work)
- **Worker**: Domain-specific agents (execute tasks)
- **Verifier**: QA/Reliability Agent (reviews and approves)

**Juniro Current State:**
- No explicit orchestrator (manual invocation)
- Workers exist (component-builder, endpoint-builder, etc.)
- Verifier exists but is generic

**Recommendation:** 🟡 Consider - Add an orchestration model if you want automated task management
### 3. Structured Agent Template

**Empeers Template:**
```markdown
# <Agent Name>

## Mission
<1-2 sentences>

## Owns (Hard Boundaries)
- ...
- ...

## Inputs
- System Canon: /docs/system-canon.md
- Current PRD: /docs/prd/<feature>.md
- UX Spec (if applicable): /docs/ux/<feature>.md

## Outputs (Required)
- Assumptions
- Decisions
- Interfaces / schemas
- Edge cases
- Acceptance tests
- Open questions (max 2)

## Quality Bar
- Must align to System Canon naming
- Must not invent new domain objects without Domain Model signoff
```
**Benefits:**
- Consistent structure
- Clear boundaries
- Explicit inputs/outputs
- Quality standards
**Recommendation:** ✅ Adopt - Use as the template for all new agents
### 4. Clear Ownership Boundaries

**Empeers Approach:**
- Each agent has an explicit "Owns" section
- Hard boundaries prevent overlap
- Clear conflict resolution rules
**Example:**
```markdown
## Owns (Hard Boundaries)
- Endpoint design and request/response schemas
- PostgreSQL migrations + data integrity
- AuthZ enforcement at API layer
- Error handling and status codes
```

**Recommendation:** ✅ Adopt - Add an "Owns" section to all agents
### 5. Input/Output Specifications

**Empeers Approach:**
- Explicit inputs (what agent needs)
- Required outputs (what agent produces)
- Clear dependencies
**Recommendation:** ✅ Adopt - Add Inputs/Outputs sections
## Testing Agents: Shared vs Domain-Specific

### The Question

How do we organize testing agents?
- API testing vs Frontend testing vs E2E testing
- Are these shared agents or domain agents?
- How do shared agents work with domain-specific needs?
### Analysis

#### Current Juniro State

| Agent | Location | Purpose | Type |
|---|---|---|---|
| `@verifier` | All repos | Generic verification | ⚠️ Too generic |
| `@qa-coverage` | juniro-design | Test/story coverage | Domain-specific |
| `@storybook-audit` | juniro-design | Storybook quality | Domain-specific |

**Gap:** No specialized testing agents for API vs Frontend vs E2E
#### Empeers Approach

| Agent | Purpose | Scope |
|---|---|---|
| `@qa-reliability` | General verifier | Build, lint, types, tests, monitoring |
| `@ux-testing` | UX/UI verifier | Usability, accessibility, visual consistency |

**Key Insight:** Empeers separates functional testing (QA/Reliability) from UX testing (UX Testing Agent)
### Recommended Structure for Juniro

#### Option A: Shared Base + Domain Variants (Recommended)

**Structure:**
```
shared-agents/
├── verifier.md          (base definition)
├── verifier-api.md      (API-specific variant)
├── verifier-frontend.md (Frontend-specific variant)
├── verifier-e2e.md      (E2E-specific variant)
└── verifier-design.md   (Design system variant)
```
**How it works:**
- **Base definition** (`verifier.md`) contains:
  - Common verification protocol
  - Universal checks (build, lint, types)
  - Standard report format
  - Shared mindset/principles
- **Domain variants** contain:
  - Domain-specific checks
  - Repo-specific context
  - Specialized test commands
  - Domain-specific quality bars
**Example: `verifier-api.md`**

````markdown
# Verifier Agent - API Variant

## Base Definition
See: [verifier.md](./verifier.md) for common protocol

## API-Specific Checks

### Automated Checks
- [ ] `npm run build` passes
- [ ] `npm run lint` passes
- [ ] `npm run typecheck` passes
- [ ] `npm test` passes (unit + integration)
- [ ] OpenAPI spec validates
- [ ] All endpoints have tests

### API-Specific Verification
- [ ] Request validation (Zod schemas)
- [ ] Response format matches OpenAPI spec
- [ ] Error handling consistent
- [ ] Status codes correct
- [ ] AuthZ tested (positive + negative)
- [ ] Database queries optimized
- [ ] Rate limiting works
- [ ] Multi-region rules enforced

### Test Commands
```bash
# API-specific verification
cd juniro-api
npm run test
npm run test:integration
npm run db:test  # Test migrations
```

### Quality Bar
- 80%+ code coverage on routes
- All endpoints have integration tests
- OpenAPI spec matches implementation
- Zero P0 security issues
````
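The "Request validation (Zod schemas)" check asserts that malformed requests are rejected before they reach handlers. As a hedged, dependency-free sketch of what the verifier would exercise (the endpoint and field names are hypothetical, and this hand-rolled validator stands in for the real Zod schema):

```typescript
// Hypothetical request-body validator for a POST /v1/tasks endpoint.
// Stand-in for a Zod schema; field names are illustrative only.
interface CreateTaskBody {
  title: string;
  priority: "low" | "medium" | "high";
}

type ValidationResult =
  | { ok: true; value: CreateTaskBody }
  | { ok: false; errors: string[] };

function validateCreateTask(body: unknown): ValidationResult {
  const errors: string[] = [];
  const b = body as Record<string, unknown>;
  if (typeof b !== "object" || b === null) {
    errors.push("body must be an object");
  } else {
    if (typeof b.title !== "string" || b.title.length === 0)
      errors.push("title: non-empty string required");
    if (!["low", "medium", "high"].includes(b.priority as string))
      errors.push("priority: must be low|medium|high");
  }
  return errors.length === 0
    ? { ok: true, value: b as unknown as CreateTaskBody }
    : { ok: false, errors };
}

// Positive and negative cases, mirroring the checklist's
// "tested (positive + negative)" approach.
console.log(validateCreateTask({ title: "Ship it", priority: "high" }).ok); // → true
console.log(validateCreateTask({ title: "", priority: "urgent" }).ok);      // → false
```

The verifier's job is the negative case: a passing test suite that never sends an invalid body proves nothing about validation.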
**Example: `verifier-frontend.md`**

````markdown
# Verifier Agent - Frontend Variant

## Base Definition
See: [verifier.md](./verifier.md) for common protocol

## Frontend-Specific Checks

### Automated Checks
- [ ] `npm run build` passes
- [ ] `npm run lint` passes
- [ ] `npm run typecheck` passes
- [ ] `npm test` passes
- [ ] `npm run e2e` passes (if applicable)
- [ ] No console errors

### Frontend-Specific Verification
- [ ] Pages render without errors
- [ ] API integration works
- [ ] Auth flows work
- [ ] Forms validate correctly
- [ ] Navigation works
- [ ] Responsive design verified
- [ ] Dark mode works
- [ ] Accessibility (keyboard nav, ARIA)

### Test Commands
```bash
# Frontend-specific verification
cd juniro-web-public
npm run build
npm run lint
npm run typecheck
npm run test
npm run e2e  # Playwright tests
```

### Quality Bar
- Lighthouse score > 90
- All critical user flows tested
- No console errors
- Mobile responsive
- Accessible (WCAG 2.1 AA)
````
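Part of the "Accessible (WCAG 2.1 AA)" bar is mechanical: AA requires a contrast ratio of at least 4.5:1 for normal text. The sketch below implements the standard WCAG 2.1 relative-luminance and contrast-ratio formulas (the formulas are from the spec; the helper names and the idea of running this over design tokens are my own):

```typescript
// WCAG 2.1 contrast ratio between two sRGB colors given as [r, g, b] in 0-255.
type RGB = [number, number, number];

function relativeLuminance([r, g, b]: RGB): number {
  // Linearize each sRGB channel per the WCAG 2.1 definition.
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // → 21.0
// AA check for normal text: ratio must be >= 4.5.
// #767676 on white is right at the AA threshold.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]) >= 4.5); // → true
```

A frontend verifier could run a check like this over the design-system color pairs rather than eyeballing contrast manually.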
**Benefits:**
- ✅ Shared base reduces duplication
- ✅ Domain-specific needs addressed
- ✅ Easy to maintain
- ✅ Clear separation of concerns
#### Option B: Separate Domain Agents
**Structure:**
```
domain-agents/
├── api-verifier.md      (API testing)
├── frontend-verifier.md (Frontend testing)
├── e2e-verifier.md      (E2E testing)
└── design-verifier.md   (Design system testing)
```
**How it works:**
- Each agent is independent
- No shared base
- Full control per domain
**Trade-offs:**
- ❌ More duplication
- ✅ Complete independence
- ✅ No shared dependencies
#### Option C: Single Agent with Context Detection
**Structure:**
```
shared-agents/
└── verifier.md (smart agent that detects context)
```
**How it works:**
- Single agent definition
- Agent detects repo type (API vs Frontend)
- Adapts checks based on context
**Trade-offs:**
- ✅ Single source of truth
- ❌ Complex logic in one file
- ❌ Harder to maintain
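For comparison, Option C's "context detection" could be as simple as keying off marker files in the repo root. A hedged sketch (the marker filenames are plausible guesses for these repos, not confirmed contents):

```typescript
// Hypothetical repo-type detection for a single "smart" verifier (Option C).
// Maps marker files found in a repo root to a verification profile.
type Profile = "api" | "frontend" | "design" | "unknown";

const MARKERS: Array<[string, Profile]> = [
  ["openapi.yaml", "api"],        // assumed marker for juniro-api
  ["next.config.js", "frontend"], // assumed marker for the web repos
  [".storybook", "design"],       // assumed marker for juniro-design
];

function detectProfile(rootEntries: string[]): Profile {
  // First match wins, so marker ordering matters.
  for (const [marker, profile] of MARKERS) {
    if (rootEntries.includes(marker)) return profile;
  }
  return "unknown";
}

console.log(detectProfile(["package.json", "openapi.yaml", "src"])); // → api
console.log(detectProfile(["package.json", ".storybook", "src"]));   // → design
console.log(detectProfile(["README.md"]));                           // → unknown
```

The "first match wins" ordering is exactly the kind of hidden branching logic that makes Option C harder to maintain as repos evolve, which is why Option A keeps each variant in its own file instead.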
---
## Recommended Approach: Hybrid (Option A)
### Structure
```
juniro-docs/docs/agents/
├── agent-overview.md
├── _TEMPLATE.md
├── shared-agents/
│   ├── verifier.md            (base - common protocol)
│   ├── verifier-api.md        (API variant)
│   ├── verifier-frontend.md   (Frontend variant)
│   ├── verifier-e2e.md        (E2E variant)
│   ├── verifier-design.md     (Design system variant)
│   ├── ship-check.md          (base)
│   ├── ship-check-api.md      (if needed)
│   ├── ship-check-frontend.md (if needed)
│   └── ...
└── domain-agents/
    ├── component-builder.md
    ├── endpoint-builder.md
    └── ...
```
### Implementation Pattern
**Base Definition (`verifier.md`):**
```markdown
# Verifier Agent (Base)
## Mission
Prove that claimed work is actually complete. Skeptical validator.
## Common Protocol
[Shared verification steps, mindset, report format]
## Domain Variants
- [API Variant](./verifier-api.md) - For juniro-api
- [Frontend Variant](./verifier-frontend.md) - For web repos
- [E2E Variant](./verifier-e2e.md) - For full-stack testing
- [Design Variant](./verifier-design.md) - For juniro-design
```

**Domain Variant (`verifier-api.md`):**

```markdown
# Verifier Agent - API Variant

## Base Definition
See: [verifier.md](./verifier.md) for common protocol and mindset.

## API-Specific Extensions
[Domain-specific checks, commands, quality bars]
```

**Repo `.mdc` File:**

```markdown
---
name: verifier
description: Skeptical Validator for API
---

# Verifier Agent
See full definition: ../juniro-docs/docs/agents/shared-agents/verifier-api.md

## Quick Reference
[Key commands, common checks]
```
### Benefits
- **Shared Base**: Common protocol, mindset, report format
- **Domain-Specific**: API vs Frontend vs E2E needs addressed
- **Maintainable**: Update the base once, and all variants benefit
- **Discoverable**: Clear structure, easy to find
- **Flexible**: New variants can be added without breaking existing ones
## Testing Agent Examples

### Example 1: API Testing Agent

**Location:** `shared-agents/verifier-api.md`

**Focus:**
- Request/response validation
- OpenAPI spec compliance
- Database queries
- AuthZ testing
- Error handling
- Performance (latency, throughput)

**Test Commands:**
```bash
npm run test              # Unit tests
npm run test:integration  # Integration tests
npm run db:test           # Database tests
npm run lint
npm run typecheck
```
### Example 2: Frontend Testing Agent

**Location:** `shared-agents/verifier-frontend.md`

**Focus:**
- Component rendering
- User interactions
- API integration
- Auth flows
- Responsive design
- Accessibility
- Performance (Lighthouse)

**Test Commands:**
```bash
npm run build
npm run lint
npm run typecheck
npm run test  # Unit tests
npm run e2e   # Playwright E2E tests
```
### Example 3: E2E Testing Agent

**Location:** `shared-agents/verifier-e2e.md`

**Focus:**
- Full user flows
- Cross-browser testing
- Mobile testing
- Performance (real user metrics)
- Integration between frontend and API

**Test Commands:**
```bash
npm run e2e               # Full E2E suite
npm run e2e:mobile        # Mobile-specific
npm run e2e:cross-browser # Cross-browser
```
### Example 4: Design System Testing Agent

**Location:** `shared-agents/verifier-design.md`

**Focus:**
- Component quality
- Storybook coverage
- Design system compliance
- Accessibility
- Visual regression

**Test Commands:**
```bash
npm run test      # Component tests
npm run storybook # Storybook
npm run chromatic # Visual regression
```
## Decision Matrix
| Approach | Duplication | Maintainability | Flexibility | Complexity |
|---|---|---|---|---|
| Option A: Shared Base + Variants | Low | High | High | Medium |
| Option B: Separate Agents | High | Medium | High | Low |
| Option C: Single Smart Agent | None | Low | Low | High |
**Recommendation:** ✅ **Option A** - Best balance of maintainability and flexibility
## Implementation Plan

### Phase 1: Create Structure
- Create `juniro-docs/docs/agents/shared-agents/`
- Create base `verifier.md` with common protocol
- Create domain variants (`verifier-api.md`, `verifier-frontend.md`, etc.)

### Phase 2: Migrate Existing Agents
- Extract common content from existing `verifier.mdc` files
- Create domain-specific variants
- Update repo `.mdc` files to reference centralized docs

### Phase 3: Apply Pattern to Other Agents
- Apply the same pattern to `@ship-check`
- Apply to `@design-audit` if needed
- Document the pattern for future agents
## Summary

### Best Practices to Adopt from Empeers
- ✅ **Universal Agent Contract (UAC)** - Standardized output format
- ✅ **Structured Template** - Consistent agent structure
- ✅ **Clear Ownership** - Hard boundaries, explicit "Owns" section
- ✅ **Input/Output Specs** - Explicit dependencies and deliverables
- 🟡 **Orchestration Model** - Consider if you want automated task management

### Testing Agent Strategy

**Recommended: Shared Base + Domain Variants**
- **Base definition**: Common protocol, mindset, report format
- **Domain variants**: API, Frontend, E2E, and Design-specific checks
- **Repo `.mdc` files**: Lightweight references to centralized docs

**Benefits:**
- Reduces duplication (80-90% shared content)
- Addresses domain-specific needs
- Easy to maintain
- Clear structure

**Next Steps:** Review this approach and decide on an implementation plan.