Three Things That Changed in My Workflow After AI Agents Took Over Writing Code

March 2, 2026 · Rens Jaspers · Tags: LLM, AI, Productivity, Reflections, Workflow

The way I work is changing rapidly because of recent advances in LLM agent capabilities. Here are the three most important changes I've noticed over the past week:

1. My Brain No Longer Gets Tired From Coding, and Multitasking Is Fine

Deep focus used to be essential: programming drained a lot of mental energy.

With agentic coding it's different. I still review all the code, but reviewing no longer feels tiring: it's much easier to follow a path an agent has laid out than to design everything from scratch.

Because of this, switching tasks is far less mentally taxing than it used to be. I can now juggle several tasks at once instead of sitting idle while an agent finishes.

2. I Code in Markdown Now

Developers are shifting from sharing programming code to sharing Markdown files containing AI agent instructions.

You can find entire frameworks for this. The latest techniques include making agents spawn multiple subagents, each with their own specialty, and having them perform tasks in parallel or collaborate by talking to each other.

I don't feel comfortable using other developers' prompts: there are security risks, and I prefer to keep full control.

That is why I started writing my own subagent profiles, skills, and orchestration workflows. Markdown is becoming my new main programming language.
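To make this concrete: a subagent profile is just a Markdown file with a small metadata header describing who the agent is and what it may do. Here is a minimal sketch in the style of Claude Code's subagent files (the agent name, tool list, and instructions are made up for illustration, not taken from my actual profiles):

```markdown
---
name: code-reviewer
description: Reviews diffs for bugs, security issues, and style problems.
tools: Read, Grep, Glob
---

You are a meticulous code reviewer. For every change you are given:

1. Read the diff and the surrounding files for context.
2. Flag bugs, security issues, and deviations from the project's conventions.
3. Report findings as a prioritized list. Do not modify any files yourself.
```

The frontmatter restricts what the agent can touch (read-only tools here), and the body is the "program": plain prose describing the procedure the agent should follow.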

It still feels like coding, but lighter on mental energy. A well-performing agent workflow is just as satisfying as well-performing code.

I also get the same sense of power as with traditional coding. Instead of the power to make a machine run a loop a million times, I now have the power to design entire teams of agents that will all work for me.

I wonder whether Markdown files will eventually replace traditional source code. Agent instructions could become the primary source, while programming code is generated output that is gitignored, similar to how build artifacts are treated today.

3. I Constantly Think About Token Costs

Keeping inference costs under control is a challenge. If I'm not careful and use Opus 4.6 for everything, the bill grows very fast.

Not all jobs require the same level of intelligence. Less capable models are usually much cheaper and faster, so picking the right model for each job matters if you don't want to overspend.

Luckily, tools like Cursor and Claude Code make this easier: they let you define subagent profiles, so you can specify specialists and assign each one a model to use.

Each of the subagents in my virtual team uses a model appropriate for its job. My Planner uses Opus 4.6 because it needs strong reasoning. My Implementer uses GPT-5.3 Codex: capable and predictable, but less expensive than Opus. The agents that gather context, review code, or write tests are fine using cheaper models.
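In Claude Code's subagent format, for example, the model assignment is a single extra line of frontmatter. A sketch of what my Planner's profile might look like (the file contents here are illustrative; only the role-to-model mapping reflects the setup described above):

```markdown
---
name: planner
description: Breaks a feature request into a step-by-step implementation plan.
model: opus
---

You are a planning specialist. Given a feature request, produce a numbered
implementation plan with one concrete, independently verifiable step per item.
Do not write code; hand the plan to the implementer.
```

Each agent lives in its own file with its own `model` line, so swapping a role to a cheaper model is a one-line change rather than a rewrite of the workflow.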

You can find some of my agent profiles here: https://github.com/rensjaspers/agents/tree/main/agents