Ivan's Inferences #3
Coding While You Sleep: Lessons from Building an AI-Native Engineering Team
There’s a popular fear floating around engineering circles right now: AI coding agents are coming for our jobs. I don’t think that’s quite right.
Coding agents aren’t terrible, and they aren’t magic either. But engineers who learn how to work with them effectively will absolutely outpace those who don’t. And more importantly, working with agents can make the job far more fulfilling.
This post breaks down what we’ve learned so far: how we "code while we sleep," how AI changes engineering standards, where humans still matter deeply, and what this all means for the future of software teams.
We built an AI coding agent orchestrator
And we named it Ivan. It was born out of necessity. We’re a small team at an early-stage startup, and hiring a large engineering org simply wasn’t an option. But we still wanted to ship fast.
We were already using tools like Claude Code and Codex day to day. The pain point wasn’t code generation—it was everything around it:
- Writing prompts
- Opening pull requests
- Requesting reviews
- Addressing review comments
So we asked a simple question: How do we code while we sleep?
Ivan lets us drop in a list of tasks at the end of the day. Overnight, it:
- Works through tasks sequentially using a coding agent
- Opens pull requests in draft mode
- Requests automated reviews (via Codex)
- Applies review feedback
In the morning, we wake up to a stack of PRs, as if a team of junior engineers had worked overnight. With just two engineers, we’ve roughly doubled our velocity.
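To make the shape of that loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: the task list, branch naming, and exact CLI invocations are assumptions, and the real Ivan handles failures, review-feedback loops, and state that this omits.

```python
# Overnight agent loop -- an illustrative sketch, NOT Ivan's actual code.
import subprocess

TASKS = [
    "Add pagination to the /users endpoint",          # hypothetical task
    "Fix the flaky timestamp test in billing tests",  # hypothetical task
]

def run(*cmd: str) -> None:
    """Run a shell command and fail loudly if it exits non-zero."""
    subprocess.run(cmd, check=True)

for i, task in enumerate(TASKS):
    branch = f"ivan/task-{i}"
    run("git", "checkout", "-b", branch, "main")
    # 1. Hand the task to a coding agent headlessly
    #    (Claude Code's -p flag runs a single prompt non-interactively).
    run("claude", "-p", f"Implement and commit this task: {task}")
    run("git", "push", "-u", "origin", branch)
    # 2. Open the PR as a draft so no human treats it as reviewable yet.
    run("gh", "pr", "create", "--draft", "--fill", "--head", branch)
    # 3. Request an automated review (assumes the Codex GitHub
    #    integration is installed and responds to @codex mentions).
    run("gh", "pr", "comment", branch, "--body", "@codex review")
    # The real orchestrator then polls for review comments and feeds
    # them back to the agent before moving on to the next task.
```

The design point that matters most is the draft flag: nothing an agent produces is presented as ready for human eyes until it has survived an automated pass.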
Ivan is now free, open source, and available on GitHub.
PRs are no longer sacred
One of the biggest mindset shifts is this: pull requests are cheap now.
In the past, opening a PR felt momentous—hours or days of work bundled into one carefully reviewed artifact. Today, an agent can generate hundreds of lines of code overnight for the cost of some compute.
That changes how we treat PRs:
- Many get thrown away
- Draft PRs are not considered reviewable
- Closing a PR is no longer a failure
The real bottleneck isn’t code generation anymore—it’s human review.
Humans are still responsible for the code
Even with AI agents doing most of the typing, we hold a hard line:
You must understand every line of code that goes into production.
“There’s no excuse of ‘the model did it.’ If I merge it, it’s my code.”
If an agent produces something I don’t understand, I either rewrite it or make the model try again. Agents are powerful, but accountability doesn’t disappear.
This also raises interesting questions around compliance and separation of duties (especially under SOC 2). If a human only prompts the agent but doesn’t write the code, is that person allowed to approve it? The industry hasn’t fully answered this yet, but it’s becoming a very real problem as code output scales faster than review capacity.
Rethinking code quality for AI-maintained systems
When software is primarily maintained by agents, some traditional standards start to blur.
We still care deeply about:
- Security
- Data correctness
- Performance
But certain "code smells" matter less than they used to:
- Long functions
- Messy but readable logic
- Verbose implementations
If a coding agent can reliably maintain and modify a file, we’re often more lenient. That said, we don’t allow clever or inscrutable code. Models tend to write simple code at scale, not clever code—and that’s usually a good thing.
Engineering becomes agent management
My job looks very different now.
I’m not just writing code—I’m:
- Acting as a project manager for agents
- Designing repo-specific instructions
- Writing markdown docs so agents understand the system
- Creating specialized agents (for example, one just for database migrations)
We even encode engineering philosophies directly into agents. For example: never write down-migrations (database rollback scripts). Once instructed, the agent simply doesn’t do it.
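Concretely, instructions like these live in plain markdown files the agents read at startup (Claude Code looks for CLAUDE.md, Codex for AGENTS.md). Here’s an illustrative excerpt; the paths and commands are made up for the example:

```markdown
# AGENTS.md (illustrative excerpt)

## Database migrations
- Never write down-migrations. We roll forward only.
- New migrations go in `db/migrations/` and are handled by the
  dedicated migrations agent.

## Style
- Prefer simple, verbose code over clever abstractions.
- Long functions are acceptable; inscrutable ones are not.
- Run `make test` before committing.
```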
A large portion of engineering effort now goes into optimizing how agents work with the codebase.
Why this isn’t offshoring
“Coding while you sleep” sounds a bit like offshoring—but it’s fundamentally different.
Offshore teams are expected to:
- Merge their own code
- Own features end-to-end
- Act as a cohesive unit
Agents don’t do that. They generate proposals. Humans still own the system.
For me, reviewing agent-generated PRs works like a Pomodoro queue: I start each day with a set of concrete artifacts to evaluate, refine, or discard.
What this means for software engineering
I’m cautiously bearish.
Junior engineering roles—as we’ve traditionally defined them—are likely to shrink. The work they used to do simply isn’t necessary anymore. And historically, companies haven’t been great at training juniors for the next level.
At the same time:
- Engineering is more fun
- Problem-solving is more central
- Individuals can build more with less
With great power comes great responsibility. We’re still in the early, chaotic phase—something like the dot-com boom for AI. The hype will settle. The tools will mature. And the engineers who adapt will shape what comes next.
Final thought
AI won’t replace engineers.
But engineers who know how to work with AI—who treat agents as teammates, not magic—will absolutely redefine what small teams are capable of.
And that’s a future worth building.
If you would like to hear more about how we handle AI-centric engineering at Ariso, check out my recent discussion on the Forward Slash podcast.
