
Cost seems to be an inseparable part of our lives. We look at it when we buy stuff, pay taxes, or purchase services. But it takes a bit of a backseat in computer science, especially during our academic years. It comes back once we join the workforce, though it often takes an elusive form, at least in my IT experience. It doesn’t come naturally; it’s effectively forced on us through communication with the business side. To me, “cost” is so ill-defined that it’s often thrown around for emphasis rather than treated as something concrete.
Even in personal life, cost is tricky. Let’s say a pen costs $1. Is that the real cost? Do we just buy it and move on? Well, here’s how I see it: the pen costs $1 + $0.13 (13% sales tax) = $1.13. Now factor in income tax, say 30%: to have $1.13 left after tax, you have to earn $1.13 / 0.7 = $1.61 (rounded) before tax. So once both sales and income taxes are in the picture, the cost jumps by about 61.4% over the sticker price. And that doesn’t even include the cost of earning that money in the first place. Do you drive to work? How long does it take? Do you pay for insurance? Is your car depreciating? If you run a business, you might write those expenses off, but if you’re an employee, you just eat the cost. By the time all’s said and done, you’re looking at a markup well over 61.4%, and that’s before environmental fees, dealer fees, or whatever else sneaks in at checkout. And finally: how much time does it actually take to make that money?
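If it helps to see that arithmetic in one place, here is a minimal sketch. The function name is mine, and the flat 13% sales tax and 30% income tax are just the example’s assumptions; it ignores everything else (commute, insurance, depreciation) mentioned above.

```python
def effective_cost(sticker_price: float, sales_tax: float, income_tax: float) -> float:
    """Pre-tax earnings needed to cover a purchase.

    after_tax_price is what actually leaves your wallet at the register.
    Dividing by (1 - income_tax) converts that into the gross income you
    had to earn to have that much left after income tax.
    """
    after_tax_price = sticker_price * (1 + sales_tax)
    return after_tax_price / (1 - income_tax)

# The $1 pen from the example: 13% sales tax, 30% income tax.
pen = effective_cost(1.00, sales_tax=0.13, income_tax=0.30)
print(f"effective cost: ${pen:.2f}")  # ~$1.61, roughly a 61% markup over the sticker price
```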
Now to the IT world. I remember a while back when agile cards (or T-shirt sizes and such) were a popular tool for estimating sprint work. At some point, that idea took a dive, never to be seen again. Why? I believe it comes down to two things: inaccuracy and waste. You end up estimating in points that are hit or miss (most of the time, a miss). Then managers try to make sense of those points for that particular team and convert points to time. Eventually, they give up and just ask for time estimates instead. Those estimates feed into project estimates, and finally into the budget. In the end, the whole exercise gets written off as useless. Why spend time estimating when it doesn’t improve accuracy? It just burns time for no benefit, or in other words, increases cost.
Next up: software craft. How do we reason about the cost of code? Do we think about execution cost? Development cost? What about refactoring, adding features, maintenance, or security upgrades and dependency updates? Do we factor in code correctness? Recently I was part of an architectural decision-making process. We had two paths for developing a new feature: build it on an old dependency or on a new one. Let’s break them down.
Using a new dependency: faster, easier development, with future support. Risk: the new feature might not integrate easily with the legacy system.
Using the old dependency: easier integration. Risk: slower development, no support, potential security issues. And when the old system is retired, we’ll either have to rewrite everything or keep dragging the old dependency around, with all its baggage.
Moreover, we know the old system is set to be retired within the next six months, while delivery of the new feature is targeted for the next twelve. From the timeline alone, it becomes painfully obvious that the main risk of the new dependency, awkward integration with the legacy system, is already mitigated: the legacy system will be gone before the feature ships. From a development cost perspective, it makes far more sense to adopt the new dependency. Yet the issue keeps getting debated, because, well, “nobody gets fired for buying IBM.” The essential argument is that writing the new feature on the old dependency will work “everywhere,” so the deadline will be met and we’ll all be safe: no risk. But what about cost? It would take three times as long using the old dependency. What about future cost, if we have to rewrite it later? How about support: dragging around an old, unsupported dependency isn’t free. And do we ever factor in security risks? How much will that cost? Yeah, at the beginning of the day, if we don’t consider cost, or worse, don’t communicate it to the business, we might feel “safe.” But the business can count. And usually, better than IT. So by the end of the day, cost questions will creep in, and by that point, no one will be safe.
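Just to make that trade-off concrete, here is a back-of-the-envelope sketch. Every number in it is a made-up placeholder (the 3x development multiplier comes from the estimate above; the rewrite probability, support overhead, and years in service are pure assumptions), so treat it as a way of framing the comparison, not as real project data.

```python
# Hypothetical comparison of the two paths, in developer-weeks.
# All numbers are illustrative placeholders, not real project estimates.

NEW_DEP_DEV_WEEKS = 4                       # assumed baseline effort with the new dependency
OLD_DEP_DEV_WEEKS = NEW_DEP_DEV_WEEKS * 3   # "three times as long using the old dependency"

REWRITE_PROBABILITY = 0.8    # assumed chance we rewrite once the old system is retired
REWRITE_WEEKS = NEW_DEP_DEV_WEEKS           # a rewrite costs roughly one fresh implementation
SUPPORT_WEEKS_PER_YEAR = 2   # assumed overhead of dragging an unsupported dependency around
YEARS_IN_SERVICE = 3

def expected_cost_old_dependency() -> float:
    """Initial build + likely rewrite + ongoing support of a dead dependency."""
    return (OLD_DEP_DEV_WEEKS
            + REWRITE_PROBABILITY * REWRITE_WEEKS
            + SUPPORT_WEEKS_PER_YEAR * YEARS_IN_SERVICE)

def expected_cost_new_dependency() -> float:
    """Initial build only; the legacy-integration risk expires with the old system."""
    return NEW_DEP_DEV_WEEKS

print(f"old dependency: ~{expected_cost_old_dependency():.0f} developer-weeks")
print(f"new dependency: ~{expected_cost_new_dependency():.0f} developer-weeks")
```

With these made-up numbers the old path comes out roughly five times more expensive, which is the point: the “safe” choice only looks safe until someone does the arithmetic.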
The cost of software development is anything but trivial—it depends on a variety of factors. Maybe it’s a throwaway project. In that case, we can skip tests, write a mess of code, use bubble sort, and slap it all together just to get it running as quickly as possible. But should we do the same for a legacy project? What about a current production system? Do we write clean code so it pays off with the next feature set—or just duct-tape things together and leave it for someone else to debug at 3:00 a.m. on a Sunday during a production emergency? I believe a developer can make any choice—as long as it’s a conscious one, based on cost considerations. And to make that kind of decision, cost must be learned, understood, and applied—as part of both software education and everyday development.