Every engineer says they want simple software. But then the real world shows up—with latency targets, disk space budgets, runtime limits, and “this can never fail” requirements—and simplicity starts to crumble. You didn’t make your code messy for fun; constraints did.
The Real Meaning of “Constraint” #
A constraint is just something measurable that your system has to stay within. Think of it like an invisible box around what your software’s allowed to do. A few common examples:
- Execution speed or latency
- Memory limits
- API request quotas
- Compile time
- Power consumption
- Data transferred per user or session
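What makes these constraints rather than vibes is that each one can be phrased as a measurable check. A minimal sketch of the first one, assuming a made-up `handle_request` and an invented 50 ms budget:

```python
import time

# Hypothetical: express a latency constraint as a measurable check.
# The 50 ms budget and handle_request are placeholders for illustration.
LATENCY_BUDGET_S = 0.050  # 50 ms per request

def handle_request() -> str:
    # Stand-in for real work.
    return "ok"

start = time.perf_counter()
result = handle_request()
elapsed = time.perf_counter() - start

# The "invisible box": the system must stay inside this line.
assert elapsed < LATENCY_BUDGET_S, f"latency budget blown: {elapsed:.4f}s"
```

The same shape works for memory, quota, or size limits: measure, compare against the budget, fail loudly.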
They’re often called “non-functional requirements,” which undersells how real they are. You don’t have to care about constraints early on, but once scale or quality matter, they become unavoidable—and every one of them shapes your code.
Even within a single constraint like performance, you have flavors: throughput vs latency, average-case vs worst-case, single-thread vs parallel. These subtle definitions completely change how you design things.
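Those flavors aren't pedantry: the same measurements can pass one definition and fail another. A toy illustration with invented latency samples:

```python
# Average-case vs worst-case are different constraints: one slow outlier
# barely moves the mean but dominates the max. Numbers are illustrative.
latencies_ms = [8, 9, 10, 9, 8, 11, 9, 248]  # one slow outlier

average = sum(latencies_ms) / len(latencies_ms)
worst = max(latencies_ms)

print(f"average: {average:.1f} ms")  # 39.0 ms
print(f"worst:   {worst} ms")        # 248 ms

# A "mean latency < 50 ms" target passes; a "max latency < 50 ms" target fails.
assert average < 50
assert worst >= 50
```

Designing for the mean and designing for the tail lead to very different systems.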
Complexity Isn’t Bad—It’s What You Pay #
Every time a constraint tightens, you buy some extra performance or reliability by spending simplicity.
Take sorting. You could use insertion sort—tiny, elegant, easy to understand. But when data grows, its O(n²) running time becomes painful. You “upgrade” to mergesort: O(n log n), faster at scale, but more code to hold in your head. And if you’re in Python-land, you’ll meet Timsort, a hybrid that runs insertion sort on small runs and merges them mergesort-style for larger inputs. That juggling act buys speed, but it also costs attention, testing, and bug surface area.
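The trade is visible in the code itself. Here is a sketch of both algorithms (illustrative implementations, not tuned library code): insertion sort is a handful of lines, while mergesort needs recursion plus a merge step.

```python
def insertion_sort(xs):
    # O(n^2), but tiny: shift each element left until it fits.
    out = list(xs)
    for i in range(1, len(out)):
        key, j = out[i], i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

def merge_sort(xs):
    # O(n log n): split, sort halves recursively, then merge.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 2, 9, 1, 5, 6]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```

Same answer either way; the difference is how much machinery you now own.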
Software complexity often looks like a tangle of optimizations, workarounds, or weird edge-case handling—but really, you’re just spending simplicity to meet constraints. That’s the trade: complexity for capability.
When Constraints Collide #
One constraint is tough. Two are brutal.
Having multiple constraints—say, high performance and low memory use—creates tension. Improving one usually harms the other. The classic space–time tradeoff is a perfect illustration: save space, lose speed; gain speed, burn space. Or databases: every index that speeds up reads slows down writes. You can’t win—you can only balance.
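The space–time tradeoff fits in a few lines. Memoization is the textbook case: spend memory on a cache to avoid recomputation. A sketch using naive Fibonacci:

```python
from functools import lru_cache

def fib_slow(n):
    # No extra space, exponential time: recomputes subproblems endlessly.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # O(n) cached entries buy O(n) time: each subproblem computed once.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

assert fib_slow(20) == fib_fast(20) == 6765
```

Same function, same answers; the cache converts time cost into space cost.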
Even when constraints don’t directly fight, every new one nests inside the layers of complexity you already built. Systems rarely get twice as hard—they get exponentially harder.
The Crack in Modularity #
Good structure helps, sure—but constraints have a way of breaking even the nicest abstractions. When you’re hunting micro-optimizations—a 1% gain here, 0.5% there—patterns like DRY or indirection start to feel heavy. A compiler chasing a 10% speed gain might connect everything directly, ignoring clean separation. You trade modularity for micro-efficiency.
Constraints also bring you closer to hardware reality. Suddenly, “keep related data close in memory” matters because CPU caches care about layout. Your architecture isn’t purely a software problem anymore—it’s physical.
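As a sketch of “keep related data close,” compare the two classic layouts: array-of-structs versus struct-of-arrays. The `Particle` type here is invented, and Python’s boxed objects blur the real memory picture; the pattern is what compiled, cache-sensitive code exploits.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    # AoS: each particle is one object holding all its fields.
    x: float
    y: float
    mass: float

particles = [Particle(float(i), float(i), 1.0) for i in range(4)]
# Summing masses hops across whole objects, dragging x and y along.
total_aos = sum(p.mass for p in particles)

# SoA: each field lives in its own contiguous array, so scanning one
# field touches only the bytes it needs.
xs = [float(i) for i in range(4)]
ys = [float(i) for i in range(4)]
masses = [1.0] * 4
total_soa = sum(masses)

assert total_aos == total_soa == 4.0
```

Nothing about this is “architecture” in the whiteboard sense; it’s the CPU cache reaching up and rearranging your data model.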
Hard vs Soft Constraints #
There’s a big difference between a soft constraint that can bend and a hard one that can’t.
Soft constraints can be violated occasionally—it’s bad if emails go out 10 minutes late, but not catastrophic. Hard constraints, though, leave no slack. Your binary must fit under 256 KB on that microcontroller or it’s game over. Hard limits crush flexibility—and drive the most complexity, because you lose the freedom to trade anything away.
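One consequence of a hard constraint is that it’s worth enforcing mechanically, so nobody discovers the overflow after shipping. A hypothetical CI-style size gate (the 256 KB figure comes from the example above; the function name is invented):

```python
import os

# Hard ceiling from the microcontroller example: 256 KB, no slack.
SIZE_LIMIT = 256 * 1024  # bytes

def check_binary(path: str) -> None:
    # Fail the build outright if the binary is over the limit.
    size = os.path.getsize(path)
    if size > SIZE_LIMIT:
        raise SystemExit(
            f"{path} is {size} bytes, over the {SIZE_LIMIT}-byte limit"
        )
```

A soft constraint would log a warning here; a hard one kills the build.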
Be Careful What You Optimize #
Sometimes improving a constraint adds new complexity instead of solving it. Hillel Wayne shares a great example: his team sped up data warehouse jobs from a full day to seconds. That success led the company to start using it for live dashboards. Cue new workloads, new expectations, and a whole new world of complexity they hadn’t planned for.
That’s the Jevons paradox: make something more efficient, and people just find new ways to demand more from it.
So, Is Complexity Bad? #
Not always. Constraints are what keep software tethered to reality. When people say “simple is better,” they usually mean “simple until the world gets complicated.” The goal isn’t zero complexity—it’s intentional complexity. The kind that earns its place by satisfying real constraints instead of accidental ones.
Complex software with a reason behind every hard choice? That’s engineering.
Complex software with meaningless entropy? That’s technical debt.
The trick is knowing which kind you’re writing.