Chapter 2: The More Code Is More Value Pitfall

The finest code in the world is futile if it doesn’t solve an actual problem that users have.
    My product manager and I worked in lockstep. We had reached that rare state of collaboration where it felt effortless to solve one problem after another. As the engineering team lead at the time, I was elated with how the project was going.
    Our task had been to build a brand new user survey feature for our site. We were excited to build it from the ground up and felt it had great potential to help our sales team convert more leads than ever.
    Engineering-wise, features were actually being completed ahead of schedule. I’m proud to say that the code our team delivered was well-written and well-tested. We even managed to work on some nice-to-have stretch goals, too. Users were able to add lots of unique customizations, and our design team was pleased that we’d managed to include some fancy animations to polish the experience. We set out to build the best damn survey feature we could, and that’s what we did.
    Finally, we released the feature into the wild, eager for it to boost sales. Then we waited. And waited. And waited. It turns out that we built a fantastic feature... that nobody wanted or needed.

The Perils of Equating More Code with More Value

    As a software developer, it’s easy to equate writing more code with delivering more value. After all, if your job is to write code, surely more is better? Unfortunately, that’s not the case. In fact, less code is almost always better. A feature implemented in 10 lines of code is likely more readable and less error-prone than the same functionality written in 100 lines.
    The More Code Is More Value Pitfall occurs when a developer thinks that delivering more features is the only way to add utility to an application. Its underlying implication goes beyond strictly counting the number of lines of code you’re writing and drills to the core of why software is written in the first place.
    Developers engaged in busywork aren’t adding substance to their product. Nor are developers who craft immaculate algorithms for an area of an application that is rarely used and solves a low-stakes problem.
    Real value is created when an application solves the needs of its users, not when it has more features. Judging an application by the number of features is like judging a restaurant by the number of items it has on its menu. Having 200 options may seem impressive to some at first glance, but the truth of the matter is that the restaurant will never be able to deliver excellent quality on all of them and will eventually spread itself too thin. This means that the restaurant’s operations suffer in the process—from supply restocking to wait times, and everything in between. Options sound good, but we really go to restaurants to choose from a handful of select menu items that are skillfully prepared. In the same way, users of an application only need a limited set of functionality but need it to work well.
    This can be a tough mindset to adopt in organizations where performance is measured by the number of features shipped and not the real problems being solved. Moreover, it’s challenging to come to a consensus on what constitutes value without some sort of framework in place. If the goal is to increase page views and there is an analytics framework that is already being used, figuring out if that goal has been met is incredibly straightforward. If the goal is to make a page “easier to use” then there is more discussion that needs to occur in order to figure out what categorically constitutes success.
    Truly appreciating this pitfall is difficult; experience is what solidifies its merit. For some people, this means being burned by writing lots of code that is ultimately unused. For others, it takes being on a team that embraces constructive product research habits (like measuring site usage and regularly performing customer interviews) to come to the realization that writing lots of code is not the same thing as delivering a usable feature.
    Consider this scenario: there is an application where users can download a report to view all of their activity on a website. For our purposes, assume this feature was thrown together hastily, resulting in a slow report that takes a full minute to download.
    The team responsible for this report notes that it’s not being used too frequently and makes reasonable assumptions as to why: perhaps the download is slow and/or there isn’t enough useful information on the report itself. Tasked with the objective of “improve reporting,” an eager developer dedicates a full two-week sprint to updating it. The developer approaches the problem with diligence—writing tests to ensure the code is robust, adding new fields to provide more information to the user, and even optimizing the queries to make the report download in a few seconds instead of a minute.
    It sounds like the developer has done everything right to make the report better than ever. The advantages of the new version are undeniable—the code is well-tested, there’s more information available, and the report is faster.
    The real issue has less to do with how the developer approached the problem and more to do with why they approached it. The team eagerly jumped to the conclusion that the report is undoubtedly needed in the first place when, in reality, they lack a clear problem to solve. “Improve reporting” isn’t a specific request in the same way that “report the customer churn rate to the sales team each month” is.
    Maybe the report is only accessed by a handful of users who only need it a couple of times a year. Maybe it’s pertinent to more users, and only the performance needs to be improved. Maybe the people who download the report don’t even wholeheartedly trust its figures, and the report could be removed entirely.
    Gathering more insight into why and how the report is used in the first place would help the team determine the appropriate approach to the problem. Having one developer spend one sprint on one report probably isn’t the end of the world, but consistently approaching problems in a naive manner will unquestionably lead a team astray in due time.
    The bottom line? Context matters, and there is an inherent opportunity cost with software development. Every new feature comes with the concession of forgoing other features that could have been built in that time. Just because a feature may provide some utility to some users doesn’t mean it was the right feature to build.

Helping Find Value

    The reporting example is contrived in that it conveniently assumes the rationale for updating the report was lacking in the first place. That isn’t to say that software developers should perpetually assume the responsibility of seeking out justification for every aspect of every feature being built. Rather, their role is to support the team overall by employing the following types of techniques:
    Help the product team gather insight into how an application is being used. This can come in the form of adding analytics to an application so that the team better understands its practical use. In other cases, it could mean advocating for and watching user interviews in order to acquire more insight into a problem. Either way, a developer doesn’t necessarily need to be the person executing these strategies, but they should at least be familiar enough with different options and their trade-offs such that they can help lend insight to a product team when researching solutions.
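    As a concrete illustration of that first technique, here is a minimal sketch of feature-usage instrumentation. The names (`track_event`, the in-memory event list) are hypothetical, not from any particular analytics library; a real team would forward these events to an analytics service or data warehouse instead of a Python list.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical in-memory event store; stands in for an analytics backend.
_events: list[dict] = []

def track_event(user_id: str, feature: str) -> None:
    """Record that a user touched a feature."""
    _events.append({
        "user_id": user_id,
        "feature": feature,
        "at": datetime.now(timezone.utc),
    })

def usage_by_feature() -> Counter:
    """Count raw events per feature, so the team sees what is actually used."""
    return Counter(e["feature"] for e in _events)

def unique_users(feature: str) -> int:
    """How many distinct users touched a given feature at all."""
    return len({e["user_id"] for e in _events if e["feature"] == feature})
```

    Even a few lines of instrumentation like this let a team answer “is this feature used at all, and by how many people?” before anyone invests another sprint in it.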
    Question when a problem may not be solving a real user pain point. It’s a perfectly reasonable ask to want to know more about why a team is solving a given problem. In fact, blindly spending time and effort on a feature that’s never going to be used is a disservice to a team. On a healthy team, this degree of discourse is actively encouraged. On an unhealthy team, questioning why something is being done should be approached with tact. In the world of software, just like in other industries, receptivity varies and often depends greatly on power dynamics.
    State the problem that a feature is solving for users. This keeps the focus on the pain point as opposed to a specific solution. A value statement could be as basic as a one-sentence bit of background on a ticket. You can solve problems in multiple ways, but only discussing features in terms of solutions is a form of tunnel vision when building a product.
    It’s natural to fall back into autopilot mode and assume that everything that makes its way into a feature request has some innate justification. The important thing is to ask the question when its answer isn’t immediately obvious, as the software development process isn’t cheap. There is a very real cost to this lifecycle: designing a feature, writing code, testing it, deploying it, and verifying it afterward all take time and effort.
    Make verification a part of the process. It’s all too common for teams to spend significant time upfront on a feature in terms of how it is constructed, only to release it into the ether and never think about it again (see The Done Upon Delivery Pitfall). Skipping the verification process may afford temporary comfort, but the underlying issue still needs to be addressed at some point.
    This strategy is well suited for features that address problems with experimental solutions. Suppose a team is attempting to reduce the number of support tickets for an application by a target percentage, and the customer support team frequently fields requests regarding confusion around different pricing plans. The engineering team may be tempted to push out a redesigned pricing page and then continue on to other tasks. If they never follow up and measure the support tickets related to pricing confusion, they don’t actually know if they’ve solved the problem they were setting out to solve in the first place.
    Test and follow up with your feature if you want to provide maximum value. Even attempting to measure the effectiveness of changes within a system is a significant first step. Teams should gather as much data as they need to answer the question “Did we solve the problem we set out to solve?” after features go out. Only then can issues be resolved with confidence.
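    The pricing-page scenario above could be verified with a check as simple as the following sketch. The function name and the ticket counts are illustrative assumptions, not real data.

```python
def met_reduction_target(before: int, after: int, target_pct: float) -> bool:
    """Return True if ticket volume dropped by at least target_pct percent.

    before/after are counts of pricing-related support tickets over
    comparable windows before and after the redesign shipped.
    """
    if before == 0:
        return after == 0  # nothing to reduce; only "still zero" counts
    reduction_pct = (before - after) / before * 100
    return reduction_pct >= target_pct

# e.g. 120 tickets before, 80 after, against a 25% reduction target:
# (120 - 80) / 120 * 100 = 33.3%, so the target was met.
```

    The check itself is trivial; the discipline of actually running it after release is what closes the loop.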
    The complementary implication of this approach is arguably more insightful—problems in which a team is less confident should be allotted less time until they can be further justified. Otherwise, you end up where we did with our user survey feature: we wasted lots of time building a feature we weren’t confident in.

Closing the Loop

    Looking back on the user survey feature, one possible outcome would have been for me to point the finger at our product team and blame them for their lack of validation. But that’s not how software development works (or at least not how it should work).
    I knew as well as anyone that surveys were a large feature that we were building in isolation. It was wholly within my power to advocate for building a minimum viable product and gathering user feedback first. Likewise, part of my role in the development of the feature should have been discussing how we could measure and validate usage.
    It’s not bad to start going in the wrong direction when developing a solution. But it is bad to barrel toward an outcome without ever pausing and checking to make sure you haven’t lost your way.
Forging Your Path
  • When was the last time you wrote a big feature that ended up being discarded? What could you have done to help validate this ahead of time?
  • Is your team open to the idea that less code could result in a more useful application? If not, are there any examples of features you could provide that took a lot of effort but resulted in low usage?
  • What new practices could your team adopt to make value a guiding principle?
