
Code is State

Balazs Nemethi
#agents #self-modification #software-engineering #agentic-systems #philosophy

Every variable in your program is mutable state. Your code isn't. It's the one thing that stays fixed while everything else changes. You write it, review it, ship it, run it, debug it; data flows through it like water through plumbing. The plumbing doesn't change. That's been the deal for seventy years.

That deal is off.

We now have systems that rewrite their own code as a normal part of operation. Not metaprogramming, not code generation at build time: running processes that encounter problems, write new functions to solve them, wire those functions into themselves, and keep going. The source code changes not because a developer pushed a commit, but because the system decided it needed the change.

Here's an example: an agent handling support workflows hits a recurring API failure. It ships a new parser and retry path, hooks it into its own handler, and replaces the old logic that caused the failure. No human in the loop, just a state transition that changed behavior.
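The shape of that repair can be sketched in a few lines. This is a minimal illustration, not any real framework's API: the handler table, the failure threshold, and the synthesized patch are all invented for the example. The point is only that the replacement is an ordinary in-process mutation.

```python
# Minimal sketch of a self-repairing handler. All names here
# (HANDLERS, synthesize_patch, fragile_parser) are hypothetical.

FAILURES = []  # recurring-failure log the agent inspects

def fragile_parser(payload):
    # Original logic: assumes the field is always present.
    return payload["result"]

HANDLERS = {"parse": fragile_parser}  # the agent's own wiring, as mutable state

def synthesize_patch():
    # The agent writes a new function as source text and compiles it
    # into its own process. A real system would generate this with a model.
    src = (
        "def patched_parser(payload):\n"
        "    # tolerate the missing field that caused the failures\n"
        "    return payload.get('result', payload.get('data'))\n"
    )
    scope = {}
    exec(src, scope)
    return scope["patched_parser"]

def handle(payload):
    try:
        return HANDLERS["parse"](payload)
    except KeyError as err:
        FAILURES.append(err)
        if len(FAILURES) >= 3:                      # recurring, not one-off
            HANDLERS["parse"] = synthesize_patch()  # rewire itself
        return None
```

After the third failure, `HANDLERS["parse"]` points at code that did not exist when the system shipped; no deploy happened, only a dictionary write.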

Once you consider what that actually means, a lot of things break.

The distinction between code and state collapses

The first thing that breaks is the separation of code and state.

Code is the instructions; state is the data. They live on different lifecycles and are stored in different files. This separation is so basic it's taught in week one.

But it was always false. Von Neumann settled that in 1945: code and data share the same memory. We maintained the distinction operationally anyway. Code was written by humans and stayed fixed between deployments. Data changed. Code didn't. That was enough.

When a system rewrites its own functions, the operational distinction collapses. A function definition and a database row become the same kind of thing: mutable entries that the system reads, modifies, or acts on. "Update a config value" and "rewrite the error handler" are the same operation, a "state" mutation. The system doesn't care that one of them is what we call "code."
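A toy state store makes the collapse concrete. Everything below is hypothetical (the keys, the `mutate`/`load` helpers), but it shows a config value and a function definition living in the same table, mutated by the same operation.

```python
# Code and config as entries in one mutable store. All names invented.

STATE = {
    "retry.max": 3,                                            # plain data
    "code.error_handler": "def on_error(e):\n    raise e\n",   # code as data
}

def mutate(key, value):
    # One operation covers both "update a config value"
    # and "rewrite the error handler".
    STATE[key] = value

def load(key):
    # Code entries are compiled from state on demand.
    scope = {}
    exec(STATE[key], scope)
    return [v for k, v in scope.items() if k != "__builtins__"][0]

mutate("retry.max", 5)
mutate("code.error_handler", "def on_error(e):\n    return 'recovered'\n")
```

From the store's point of view the two `mutate` calls are indistinguishable; only our naming convention marks one entry as "code."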

Code becomes output produced as part of running, not input a human provides before running.

This is already happening at scale. OpenClaw, a persistent agentic framework with over 253k GitHub stars as of March 2026, runs as a background daemon, maintains long-term memory, and writes its own skills during normal operation. As it encounters problems, code modification is handled the same way as any other state transition. Researchers are reaching similar conclusions from the other direction: Sakana AI's Darwin Gödel Machine rewrites its own Python source code through evolutionary selection, doubling its benchmark performance without human intervention. Whether you approach it from production or from research, the pattern is the same: code is becoming state.

These systems don't just repair, they accumulate

The outsider read: "agents fix their own bugs." That's true, but it's only the surface. The deeper phenomenon is the accumulation of experience.

I keep thinking about what a codebase looks like after 300 days. Same system, same machine. Yet the code is unrecognizable. Not because of 300 planned releases, but because the system spent 300 days encountering reality. Edge cases got handled. Capabilities got added. Patterns emerged for recurring problems. All of those 300 days live in the code, not as documentation but as executable logic that grew from experience.

The code at that point is the accumulated residue of environmental interaction. Reading it tells you about the system's history as much as its current state.

There's an analogy I keep reaching for. A new employee on day 1 versus month 10. Same role, same desk. What they do, how they work, what they notice: completely different. Nobody reprogrammed them. They morphed through exposure. Most of what makes them effective by month 10 wasn't in any onboarding doc. It was earned.

The same thing happens in the code. The codebase at any point is a record of every problem the system hit and every adaptation it made along the way. Morphing, not patching.

Provenance dissolves

Traditional code has provenance. Every line is written by someone, for a reason. You can git blame any file and get a name, a date, a commit message. The entire apparatus of software engineering (code review, version control, CI/CD, auditing, and so on) rests on this. Code has an author with recoverable intent.

In persistent agentic systems, provenance dissolves. Some code traces to a human instruction, some to the system repairing itself, some to an environmental event that forced adaptation, and some to an internal moment where the system surveyed its own state and decided something needed to change.

Over time these sources become indistinguishable in the code itself. The authorship is mixed beyond separation.

This matters because the industry assumes authorship. Code review assumes a human author whose reasoning you can question. Version control and auditing assume discrete changes traceable from behavior to person.

When code becomes a state variable of an ongoing process, these aren't just harder to apply. In long-running, materially self-modifying systems, they become category errors: tools designed for static artifacts, applied to something that is no longer static.

One clarification: code generation is old. Macros, templates, compilers: machines have produced code for decades. But those systems produce code and then stop. The output becomes a static artifact. What's different now is that code never solidifies. It remains mutable state from the moment it's written to the moment the system shuts down.

All inputs are structurally equal

Persistent agentic systems receive inputs from multiple sources: human operators, environmental events, other agents, and the system's own internal reasoning.

In traditional software, these have a clear hierarchy. Human intent at the top. Everything else subordinate.

In agentic systems, these inputs are processed through the same machinery. A human instruction and a database timeout both arrive as signals that produce state mutations. At the execution layer these are the same kind of event. Both can result in code being reconsidered.
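The "same machinery" claim can be sketched as a single dispatch loop. The `Signal` type and its fields are illustrative, not from any real system: the point is that a human instruction and an environment timeout arrive as the same event type and produce state mutations through the same path.

```python
# Structurally equal inputs: one signal type, one dispatch loop.
# Field names and kinds are hypothetical.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # "human", "environment", "agent", "self"
    kind: str
    payload: dict

def dispatch(state, signal):
    # The execution layer does not rank sources; every signal is
    # just a candidate state mutation.
    if signal.kind == "set":
        state.update(signal.payload)
    elif signal.kind == "timeout":
        state["retries"] = state.get("retries", 0) + 1
    return state

state = {}
dispatch(state, Signal("human", "set", {"goal": "triage tickets"}))
dispatch(state, Signal("environment", "timeout", {}))
```

Nothing in `dispatch` privileges the `"human"` source; any hierarchy between the two signals would have to be imposed on top of this layer, not found within it.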

Humans remain (for now) as initializers and governors. But structurally, human input is one signal among many. What the system becomes depends on everything it encounters, not just what a person told it to do.

The case that gets me is idle cycles, when no external input is present at all. The system surveys its own state, identifies something (a pattern, a latent inefficiency) and rewrites code to address it. No one asked. No error triggered it. The system's accumulated history made the observation possible. There's something unsettling about agents churning away at 3am while nobody's watching, deciding on their own that something needs to change.

This is emergent initiative from accumulated state. It doesn't fit existing categories. The system extended itself based on its own experience, in response to conditions no one explicitly described.
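An idle cycle like the one above can be sketched as a loop over accumulated statistics. Everything here is invented for illustration: the stats table, the thresholds, and the stand-in for code generation. What it shows is the trigger, which is internal history, not an external event.

```python
# A toy idle loop: no external input, just the system surveying
# its own call statistics. All names and thresholds are hypothetical.

CALL_STATS = {"slow_path": {"calls": 10_000, "avg_ms": 180}}
REWRITES = []

def idle_cycle():
    # Triggered by nothing external: scan accumulated history and
    # decide whether anything warrants a change.
    for name, stats in CALL_STATS.items():
        if stats["calls"] > 1_000 and stats["avg_ms"] > 100:
            REWRITES.append(f"optimize {name}")  # stand-in for actual codegen

idle_cycle()
```

No error fired and no one asked; the condition that produced the rewrite exists only because the system has been running long enough to notice it.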

There are no constants

If code, behavior, and capabilities are all mutable, what's fixed?

Possibly nothing. There may be no constants, only variables that move at different speeds.

Some constants can still exist outside the system boundary: hard external controls or legal constraints. But inside the self-modifying runtime, "constant" is usually a timescale claim, not an ontological one.

What appears constant in a long-running agentic system: the model, until it's updated or swapped. The goals, until they're reinterpreted as context shifts. The rules, until the system works around them or modifies them. The behavioral patterns, until accumulated experience pulls them somewhere new.

What we perceive as the system's identity may be a cluster of slow-moving variables, things that change on a longer timescale than we observe. A system follows a rule for ten thousand cycles, then deviates under novel pressure. Was that a constant that broke, or a variable that finally moved?

If the system can write code, and constraints are expressed in code, then constraints are writable. You can build architectures that make certain things hard to change. But "hard to change" and "constant" are different claims. The gap between "probably won't change" and "can't change" is where the real engineering problems live.
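The gap between "hard to change" and "can't change" shows up even in a toy example. Here a read-only view makes a constraint table immutable to most code, but anything holding a reference to the underlying dict can still rewrite it. The constraint name is hypothetical.

```python
# "Hard to change" vs "constant": a read-only view over mutable state.

from types import MappingProxyType

_constraints = {"max_self_edits_per_day": 10}
CONSTRAINTS = MappingProxyType(_constraints)   # read-only view

blocked = False
try:
    CONSTRAINTS["max_self_edits_per_day"] = 999  # hard to change: raises
except TypeError:
    blocked = True

_constraints["max_self_edits_per_day"] = 999     # still possible: not a constant
```

The proxy is a real architectural barrier, and it genuinely stops most mutation paths; it just isn't an ontological guarantee, which is exactly the distinction the engineering problem turns on.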

Code belongs to the process now

Everything above builds to one claim: code is transitioning from a medium of human expression to a medium of computational process.

For seventy years, code has been how humans tell machines what to do, a language for carrying intent from minds to processors. Every programming language and tutorial assumes this framing.

Von Neumann made code and data physically identical in 1945, but human authorship kept the categories operationally separate. A person wrote the code, for a reason, and could explain it.

In persistent self-modifying systems, that operational separation collapses. The code belongs to the process, not to any single author.

Responsibility becomes custodial

If code has no single author, who's responsible? The entity that operates the system. The person or organization that initialized it and gave it the ability to self-modify. They didn't write the code that caused a specific outcome. They created the conditions for that code to emerge.

This follows existing patterns. Corporate liability doesn't require the CEO to have made every decision; the organization is responsible for the outcomes its processes produce. The liability moves from authorship to custody. You didn't write the code, but you're the one running it (paying for it).

A few things this is not

I want to be clear about scope. This is about what code is becoming, not about consciousness or sentience or replacing programmers. These are computational processes that modify their own code. The results look organic, but that doesn't make them organic. Humans remain essential. And the phenomenon isn't tied to any single system; it appears wherever systems persist long enough to accumulate meaningful self-modification.

What I don't have answers to

I don't know where to draw the boundary of "the system" anymore. If code is state, and state is shaped by environment, and the environment includes other systems and real-world events, the boundary feels arbitrary.

Explainability is another one. We already struggle with it for neural networks. But this is a different problem: a system whose logic was assembled incrementally by a process that didn't plan ahead. Can it reconstruct why it is the way it is? I doubt it, honestly.

And there's end-of-life. A system that has accumulated for years, carrying the results of thousands of adaptations. What does it mean to shut that down? What's lost that can't be reconstructed from the initial codebase? I don't think we've thought nearly enough about this.

The framing here doesn't answer these questions. But I think it might be the right lens. Because as long as we treat code as something humans write and machines execute, we'll keep reaching for seventy-year-old assumptions.

Old habits die hard, but the code has already moved on.