VOL. X · NO. 1 · NORFOLK, VIRGINIA

The Eckhardt Tribune

"All the News That's Fit to Ship"

LATE CITY EDITION · FEB 9, 2026
TECHNOLOGY | BUILD LOG

I SHIPPED A FULL PRODUCT IN 4 DAYS WITH AI AGENT TEAMS

Repcheck went from zero code to live in production with auth, profiles, trust scores, a Discord bot, eBay import, and social verification. Here's how AI agents made it possible.

By RICKY ECKHARDT


Last week I shipped Repcheck - a portable reputation platform with authentication, profile pages, trust score calculations, a Discord bot with AI sentiment analysis, eBay feedback import, social verification, review request links, a dashboard, and a settings page.

Four days. One founder. No team.

This isn't a "look how smart I am" post. It's a "the tools changed and nobody updated their priors" post.

The Setup

Repcheck is part of the Nerdbeak ecosystem - I've been building specialized collectibles marketplaces (Nerdworth, Sugrworth) on a shared backend called Nerdbase. Repcheck is the trust layer that ties everything together.

The stack: Next.js 15, Prisma, PostgreSQL on Supabase, Tailwind, shadcn/ui. Standard stuff.

What wasn't standard was the development process.

16 Agents, 12 Skills

I set up Claude Code with 16 specialized AI agents. Not one general-purpose assistant - sixteen agents with distinct roles:

  • Backend dev for server actions, services, database queries
  • Frontend dev for React components, pages, layouts
  • Security reviewer for auth, data exposure, trust score integrity
  • Code reviewer for architectural compliance
  • Integration agents for external platform connections
  • Growth lead for GTM work
  • Content lead for blog posts and social
  • Product lead for feature planning and architecture audits

Each agent has its own system prompt, tools, and constraints. The backend dev can write to files. The product lead can only read. The security reviewer checks every API route for data leaks.
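If you haven't set one of these up: a Claude Code subagent is just a markdown file with YAML frontmatter, dropped in `.claude/agents/`. A stripped-down sketch of what a backend-dev definition looks like (the name, tool list, and prompt here are illustrative, not my exact config):

```markdown
---
# .claude/agents/backend-dev.md (illustrative path and settings)
name: backend-dev
description: Implements server actions, services, and Prisma queries for Repcheck. Use for backend tasks.
tools: Read, Grep, Glob, Edit, Write, Bash
---
You are the backend developer for Repcheck.
Follow the existing service-layer patterns. Never return raw user records
from an API route; select explicit fields. Run the type check before you finish.
```

A read-only agent like the product lead is the same file shape with the write tools left off the tools line.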

On top of the agents, 12 skills handle recurring workflows: standup briefings, build pipelines, integration research, metrics checks.
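Skills are the same file-based idea: a folder with a SKILL.md that tells the agent when and how to run a workflow. A standup skill, roughly sketched (frontmatter fields per Claude Code's skills format; the steps are illustrative, not my actual file):

```markdown
---
# .claude/skills/standup/SKILL.md (illustrative)
name: standup
description: Run the morning standup. Read the state files, summarize metrics and signals, and propose today's priorities.
---
1. Read state/metrics.json, state/roadmap.json, and state/signals.json.
2. Summarize what shipped since the last standup and anything that changed.
3. Propose today's priorities and call out blockers.
```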

The Build Timeline

Day 1 (Feb 6): Set up the agent system. Audited the existing codebase. Ran the first standup. Identified what needed to ship.

Day 2 (Feb 7): This is where it got interesting. I ran three agent teams in parallel:

  • Team 1 built the Discord bot end-to-end: webhook API with HMAC auth (sketched after this list), sentiment classification using Claude Haiku, slash commands, rich embeds
  • Team 2 built social verification: schema, service layer, API routes, settings UI
  • Team 3 built eBay feedback import: cheerio scraper, verification service, self-reported to verified flow
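The "HMAC auth" piece is worth spelling out, since it's what keeps random traffic from posting reviews: the bot signs each payload with a shared secret, and the API recomputes the signature before trusting anything. A minimal sketch of that check in a Next.js route handler (the env var name, header name, and path are illustrative, not the real contract):

```typescript
// app/api/discord/webhook/route.ts (illustrative path)
import crypto from "node:crypto";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // Sign the exact raw bytes the bot sent, not a re-serialized object
  const body = await req.text();
  const expected = crypto
    .createHmac("sha256", process.env.REPCHECK_WEBHOOK_SECRET!)
    .update(body)
    .digest("hex");

  const given = req.headers.get("x-signature") ?? "";

  // timingSafeEqual throws on length mismatch, so check lengths first
  const valid =
    given.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(given), Buffer.from(expected));

  if (!valid) {
    return NextResponse.json({ error: "invalid signature" }, { status: 401 });
  }

  const review = JSON.parse(body);
  // ...hand the verified payload to the review service
  return NextResponse.json({ ok: true, received: review?.id ?? null });
}
```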

Three features shipping simultaneously. Each team working autonomously while I reviewed outputs.

Also shipped that day: auth-aware header, profile setup flow, guest review polish, profile trust display with SVG score ring, and a Discord landing section. The git log from Feb 7 has 23 items.
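The score ring is a small trick worth showing: one SVG circle as the track, a second whose stroke-dasharray covers only the scored fraction of the circumference. A rough sketch of the idea (props, colors, and sizing made up for illustration):

```tsx
// TrustRing.tsx: render a 0-100 score as a partially stroked circle (illustrative component)
export function TrustRing({ score, size = 96 }: { score: number; size?: number }) {
  const radius = size / 2 - 6; // leave room for the stroke width
  const circumference = 2 * Math.PI * radius;
  const clamped = Math.min(Math.max(score, 0), 100);
  const filled = (clamped / 100) * circumference;

  return (
    <svg width={size} height={size} viewBox={`0 0 ${size} ${size}`}>
      {/* background track */}
      <circle cx={size / 2} cy={size / 2} r={radius} fill="none" stroke="#e5e7eb" strokeWidth={8} />
      {/* score arc: the dash covers the scored fraction, the gap covers the rest; rotated to start at 12 o'clock */}
      <circle
        cx={size / 2}
        cy={size / 2}
        r={radius}
        fill="none"
        stroke="currentColor"
        strokeWidth={8}
        strokeLinecap="round"
        strokeDasharray={`${filled} ${circumference - filled}`}
        transform={`rotate(-90 ${size / 2} ${size / 2})`}
      />
      <text x="50%" y="50%" textAnchor="middle" dominantBaseline="middle">
        {Math.round(clamped)}
      </text>
    </svg>
  );
}
```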

Day 3 (Feb 8): Dashboard with share links, QR codes, email invites. Rebranded all user-facing copy. Fixed the review count query. Production-tested everything in the browser.

Day 4 (Feb 9): Fixed a mobile layout issue. Ran standup. Fixed a security issue where the profile API was leaking user emails. Shipped content drafts and outreach materials.
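That email leak is a one-line class of bug worth naming: return a whole Prisma record from a public route and you ship every column, email included. The fix is an explicit select. A sketch of the pattern, with a hypothetical model and field names (assuming `handle` is a unique column), not Repcheck's actual schema:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Public profile lookup. The explicit `select` is the whole fix:
// without it, findUnique returns every column on the row, email included.
export async function getPublicProfile(handle: string) {
  return prisma.user.findUnique({
    where: { handle },
    select: {
      id: true,
      handle: true,
      displayName: true,
      trustScore: true,
      createdAt: true,
    },
  });
}
```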

What I Learned

Parallel Teams Are the Unlock

With three teams on features that don't overlap, running them sequentially takes roughly three times as long as running them in parallel; Day 2 was the proof. If two features don't share files, they can ship simultaneously. This is obvious in hindsight, but I didn't think AI agents could do it reliably. They can.

State Files Keep Everything Grounded

I maintain JSON state files for business metrics, roadmap, signals, integrations, and retrospectives. Every standup reads them. Every decision references them. Without this, the agents hallucinate priorities.

The state files go stale fast when you're shipping this quickly. Update them after every session or the next standup will be working from fiction.
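Concretely, a state file is just a small JSON document the agents read before acting. Something in this shape (a generic illustration, not one of the real files):

```json
{
  "updated": "2026-02-09",
  "metrics": { "profiles": 0, "reviews": 0, "discordServers": 0 },
  "roadmap": [
    { "item": "Discord bot", "status": "shipped" },
    { "item": "eBay import", "status": "shipped" },
    { "item": "Discord outreach", "status": "next" }
  ],
  "signals": []
}
```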

The AI Doesn't Replace Judgment

I still made every architectural decision. Which auth system. How trust scores are calculated. Whether the Discord bot should handle BST (buy/sell/trade) or stay focused on reputation. What data the API should expose.

The agents execute. The founder decides.

Haiku for Classification Costs Nothing

The Discord bot uses Claude Haiku to classify review sentiment (positive, negative, neutral). At 100 reviews per day, this costs roughly a dollar per year. AI classification that would have required a training pipeline two years ago is now an API call that costs fractions of a cent.
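For scale: at this size the whole classifier is essentially one SDK call. A sketch with the Anthropic TypeScript SDK (the model alias and prompt are illustrative, not necessarily what the bot ships):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function classifySentiment(
  review: string
): Promise<"positive" | "negative" | "neutral"> {
  const msg = await anthropic.messages.create({
    model: "claude-3-5-haiku-latest", // any Haiku-class model works for a task this small
    max_tokens: 5,
    messages: [
      {
        role: "user",
        content: `Classify the sentiment of this trade review as exactly one word: positive, negative, or neutral.\n\nReview: ${review}`,
      },
    ],
  });

  const block = msg.content[0];
  const text = block.type === "text" ? block.text.trim().toLowerCase() : "";
  return text === "positive" || text === "negative" ? text : "neutral";
}
```

Back-of-envelope, assuming roughly 150 input tokens and a one-word output per review: 100 reviews a day is about 5.5 million input tokens a year, which at Claude 3 Haiku's published $0.25 per million input tokens works out to a bit over a dollar.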

The Uncomfortable Part

Here's what I keep thinking about: this would have taken a team of 3-5 engineers at least two months to build. Auth, profiles, reviews, trust score algorithms, a Discord bot, eBay scraping, social verification, dashboard, settings - that's a real product with real complexity.

I built it in four days because I had AI agents that could hold context, follow architectural patterns, and ship code that passes type checks and builds clean.

I'm one founder. I have no employees. And I just shipped a product that works.

That's either exciting or terrifying depending on which side of it you're on.

What's Next

The product is live at repcheck.me. The Discord bot is deployed. Now comes the hard part that AI can't do for me: getting real humans to use it.

Distribution is a founder problem. AI agents can write pitch messages and research Discord servers, but they can't DM a server admin and convince them to install a bot. They can't build relationships in trading communities.

The building phase is done. The selling phase starts now.


Try Repcheck: repcheck.me

Follow along: Twitter | Repcheck

— 30 —