← back to portfolio
Personal Use · Production · 2024

career OS

6-agent job search automation — from 500+ daily positions to 5 perfect matches

View Code · 📚 Documentation

problem

job searching while employed consumes 20+ hrs/week. manual processes miss ~90% of relevant positions due to keyword limitations and time constraints.

solution

sourcing → dedupe → fit scoring → compensation sanity check → draft outreach; human-in-the-loop at critical decision points.
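a minimal sketch of that stage chain follows. the `Job` fields, thresholds, and stage functions (`score_fit`, `comp_is_sane`, `draft_outreach`) are placeholder assumptions for illustration, not the production code.

```python
# illustrative sketch only: Job fields, thresholds, and stage logic are
# placeholder assumptions, not the production implementation.
from dataclasses import dataclass


@dataclass
class Job:
    url: str
    title: str
    company: str
    salary: int | None = None
    fit_score: float = 0.0
    draft: str | None = None


def score_fit(job: Job) -> float:
    """placeholder: weighted-criteria fit score on a 0-100 scale."""
    return 100.0 if "product" in job.title.lower() else 40.0


def comp_is_sane(job: Job, floor: int = 120_000) -> bool:
    """placeholder: drop roles whose posted comp falls below a floor."""
    return job.salary is None or job.salary >= floor


def draft_outreach(job: Job) -> str:
    """placeholder: in production this step calls an llm with narrative context."""
    return f"Hi {job.company} team, I'm interested in the {job.title} role."


def run_pipeline(raw_jobs: list[Job], min_fit: float = 80.0) -> list[Job]:
    """sourcing -> dedupe -> fit scoring -> comp sanity -> draft outreach."""
    seen: set[str] = set()
    deduped: list[Job] = []
    for job in raw_jobs:                  # dedupe on canonical posting url
        if job.url not in seen:
            seen.add(job.url)
            deduped.append(job)

    shortlist: list[Job] = []
    for job in deduped:
        job.fit_score = score_fit(job)
        if job.fit_score >= min_fit and comp_is_sane(job):
            shortlist.append(job)

    for job in shortlist:                 # drafts are generated, never auto-sent
        job.draft = draft_outreach(job)

    return shortlist                      # handed off to the human reviewer
```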

~20 hrs
saved per week
~95%
noise reduction
6 weeks
measurement period

architecture

6-agent system (orchestration sketch after the list):

  • scraper: crawls 50+ job sites with rate limiting
  • filter: scores each posting 0-100 against weighted fit criteria
  • enricher: gathers company intel from multiple sources
  • applier: generates customized application materials
  • tracker: manages application status in Google Sheets
  • reviewer: requires human approval before submission
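an illustrative sketch of how the agents could be chained; the `Agent` protocol and `Pipeline` class are assumptions for this example, not the repo's actual interfaces.

```python
# illustrative wiring of the six agents; the Agent protocol and Pipeline class
# are assumptions for this sketch, not the repo's actual interfaces.
from typing import Protocol


class Agent(Protocol):
    name: str

    def run(self, jobs: list[dict]) -> list[dict]: ...


class Pipeline:
    """runs agents in sequence; each agent can be tested in isolation."""

    def __init__(self, agents: list[Agent]) -> None:
        self.agents = agents

    def run(self, jobs: list[dict]) -> list[dict]:
        for agent in self.agents:
            try:
                jobs = agent.run(jobs)
            except Exception as exc:
                # graceful degradation: log the failed stage and pass the
                # batch through unchanged instead of killing the whole run
                print(f"[{agent.name}] stage failed, skipping: {exc}")
        return jobs


# intended order mirrors the list above:
# scraper -> filter -> enricher -> applier -> tracker -> reviewer,
# where reviewer pauses for human approval before anything is submitted.
```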

design & process reflection

built a multi-agent job search automation system that balances efficiency with authenticity, demonstrating how ai amplifies rather than replaces human judgment.

key pm challenges & decisions

the personalization paradox

discovered that llms default to caution and generate generic content. solution: a 500+ line narrative database with explicit personal context, enabling authentic personalization while keeping roughly 70% of the workflow automated.
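the narrative database schema isn't published here, so the following is a hypothetical sketch of one entry and the on-demand lookup; field names and the example claim are invented, while the cited metrics ($50m arr, 27% improvement) come from this case study.

```python
# hypothetical shape of a narrative-database entry; field names and the claim
# are invented for illustration, metrics are the ones cited in this case study.
NARRATIVES = {
    "platform_work": {
        "claim": "owned the platform line end to end",
        "metrics": ["27% improvement", "$50M ARR product line"],
        "use_when": ["platform", "infrastructure", "reliability"],
        "tone": "be specific: name the project, cite the numbers",
    },
}


def context_for(job_keywords: list[str]) -> list[dict]:
    """on-demand scoping: pull only the narratives relevant to this posting."""
    return [
        entry
        for entry in NARRATIVES.values()
        if any(kw in entry["use_when"] for kw in job_keywords)
    ]
```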

multi-agent vs monolithic

chose specialized agents over monolithic architecture for independent testing, clear separation of concerns, and graceful degradation—critical for production reliability.

the 70% automation sweet spot

designed the system to automate tedious work (scraping, formatting) while requiring human judgment at critical points (final review), surfacing a transparent scoring rationale for each decision.
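a sketch of what the approval gate could look like, with the scoring rationale surfaced before anything is submitted; the `score_breakdown` structure and prompt wording are assumptions, not the actual reviewer agent.

```python
# sketch of the human approval gate; the score_breakdown structure and prompt
# wording are assumptions, not the actual reviewer agent.
def review_gate(job: dict) -> bool:
    """show the fit score and its rationale, then require explicit approval."""
    print(f"{job['title']} @ {job['company']} | fit {job['fit_score']}/100")
    for criterion, points, reason in job["score_breakdown"]:
        print(f"  {criterion:<18} {points:>3}  {reason}")
    return input("submit this application? [y/N] ").strip().lower() == "y"


# example record the gate expects (illustrative):
# {"title": "senior pm", "company": "acme", "fit_score": 87,
#  "score_breakdown": [("domain match", 30, "b2b saas background"), ...]}
```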

core product insights

authenticity beats perfection: real personalization requires specific metrics ($50m arr, 27% improvement), named projects, and permission to be specific—not just data access.

context layers drive quality: each agent needs precisely scoped context. too little creates generic output; too much wastes resources. solution: on-demand retrieval from the narrative database.

structured failure modes: built explicit fallbacks, validation layers, and graceful degradation rather than trying to prevent every failure, keeping the system resilient when individual stages break.
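a sketch of that fallback pattern, assuming an injected text-generation callable (`generate_fn`) so no specific llm api is implied: validate the output, retry once, then degrade to a flagged default instead of crashing the run.

```python
# sketch of the fallback pattern: validate llm output, retry once, and degrade
# to a flagged default. generate_fn is injected so no specific llm api is implied.
import json


def summarize_posting(generate_fn, posting_text: str, retries: int = 1) -> dict:
    """validation layer + graceful degradation around a text-generation call."""
    for _ in range(retries + 1):
        raw = generate_fn(posting_text)
        try:
            data = json.loads(raw)
            # validation layer: require the fields downstream agents depend on
            if isinstance(data, dict) and {"company", "role", "why_fit"} <= data.keys():
                return data
        except json.JSONDecodeError:
            pass                          # malformed output: fall through and retry
    # explicit fallback: flag for manual handling instead of crashing the run
    return {"company": None, "role": None, "why_fit": None, "needs_review": True}
```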

product evolution

phase 1

basic pipeline established core functionality but produced generic applications.

phase 2

added intelligence layer (enrichment, summarization) and persistence.

phase 3

narrative-driven approach achieved authentic, production-ready personalization.

future roadmap

• ml-based scoring from successful applications
• a/b testing framework for personalization strategies
• company culture matching algorithm
• network effects for referral identification

key takeaway

successful automation amplifies human judgment rather than replacing it. the system works because it treats ai as a capability multiplier, not a decision maker.

supporting evidence: pipeline diagram, before/after inbox screenshot, sample of n=1–2 users