Chesterton’s fence is a useful principle in software engineering (and in life): do not remove a fence until you know why it was put there in the first place. For software engineers, that means a behavior or a line of code probably exists for a reason; remove it and you may reintroduce whatever problem it was guarding against.
Over time, I drilled this principle into my head. I assume that my colleagues are smart and write sensible code. I avoid changing code I do not fully understand. That habit has spared me production incidents. But now, AI-generated code is invalidating this lesson.
I was recently reading code like this:
import redis
import time

r = redis.Redis.from_url("redis://localhost:6379/0")


def get_foo():
    return _with_retry(lambda: r.get("foo"))


def _with_retry(
    fn,
    *,
    retries=5,
    base_delay=0.05,
    exc=(redis.ConnectionError,),
):
    # Call fn(), retrying with exponential backoff on the given
    # exceptions; re-raise once the final attempt fails.
    for i in range(retries):
        try:
            return fn()
        except exc:
            if i == retries - 1:
                raise
            time.sleep(base_delay * (2 ** i))
In the past I might have assumed that the backoff was added after a bug or an incident. That is no longer a safe assumption. In this case, an LLM wrote the code and over-engineered a simple call.
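For contrast, the un-over-engineered version is a one-liner. And when transient connection errors are a genuine concern, the client library can own the retrying. A minimal sketch, assuming redis-py 4.x or later, which ships its own Retry and backoff support:

import redis
from redis.backoff import ExponentialBackoff
from redis.retry import Retry

r = redis.Redis.from_url("redis://localhost:6379/0")


def get_foo():
    # The simple call: no hand-rolled retry helper.
    return r.get("foo")


# If transient failures truly matter, declare the policy on the client
# itself instead of wrapping every call site.
r_retrying = redis.Redis.from_url(
    "redis://localhost:6379/0",
    retry=Retry(ExponentialBackoff(), retries=3),
    retry_on_error=[redis.ConnectionError],
)

Either way, the backoff stops being a mysterious fence: it is either absent or an explicit, documented client setting.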
Rather than working with smart and sensible coworkers, we’re now working with smart and sensible human coworkers, plus thousands of very dumb, over-eager AI coworkers who have a tendency to lie. A given line of code might be crucial to our system’s reliability, or it might be unnecessary cruft.
So what’s the path forward? I want to say that it’s on us humans to enforce a high bar for quality and keep AI slop out of our codebases. But that is wishful thinking: many teams are pushed to favor speed and volume over craft.
Instead, tests matter even more. They are the spec that encodes how the system should behave. We should assume that a behavior without tests is unnecessary and can be safely removed. Put more care into tests, and you buy the freedom to set your dumb but productive AI coworkers loose on the rest of your codebase.
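To make that concrete, here is how the retry behavior above could be pinned down as spec. A sketch using pytest and unittest.mock, assuming the earlier snippet lives in a hypothetical module named myapp; if no test like this exists, the backoff is fair game for deletion:

import pytest
import redis
from unittest import mock

import myapp  # hypothetical module holding r, get_foo and _with_retry


@mock.patch("myapp.time.sleep")  # skip the real backoff delays in tests
def test_get_foo_retries_on_transient_errors(_sleep):
    # Two transient failures, then success. This test, not the helper,
    # is what declares the retry behavior part of the spec.
    flaky = mock.Mock(
        side_effect=[redis.ConnectionError(), redis.ConnectionError(), b"bar"]
    )
    with mock.patch.object(myapp.r, "get", flaky):
        assert myapp.get_foo() == b"bar"
    assert flaky.call_count == 3


@mock.patch("myapp.time.sleep")
def test_get_foo_gives_up_after_max_retries(_sleep):
    down = mock.Mock(side_effect=redis.ConnectionError())
    with mock.patch.object(myapp.r, "get", down):
        with pytest.raises(redis.ConnectionError):
            myapp.get_foo()
    assert down.call_count == 5  # matches retries=5 in _with_retry

A suite like that makes the fence legible: delete the backoff and something goes red; if nothing goes red, the fence was presumably decorative.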