I’m not telling you to use SOLID and TDD because I read some cool stuff on a blog somewhere. I’m telling you to use them because your code needs to have the following attributes:
- It must be easy to understand.
- It must be easy to maintain.
- It must not expose stakeholders (you, your employer, your clients, etc.) to risk that you could’ve mitigated.
To the extent that your code lacks these properties, it will be risky to use, and unpleasant to maintain.
SOLID and TDD are the most reliable, direct ways to address this problem; and to make coding the fun, easy thing that it ought to be.
The story of why I strongly believe this reaches back over thirty years.
I began to program in 1986, when I was not quite eight years old. My brother taught me how to write a few lines of BASIC code:
> 10 PRINT "HELLO "
> 20 GOTO 10
> RUN
I had a book that taught by example. It was designed so that anyone, of almost any age, could get going. I moved on to statements like INPUT N$, IF, FOR X, and so on. I kept at it. I wrote bigger and more complex programs.
Before long, I ran into the problem all programmers eventually face: Their programs become difficult to understand, and even more difficult to modify.
I did what everybody did. I let bad designs stand, and coded around them, because that seemed easier than going back and fixing my old code. (After all, it had taken all of my cognitive resources just to get the old code working! How could I be sure that I could duplicate that success if I started over?)
Top-Down Programming
At the time, I had this book about “Top-Down Programming.” This was supposed to be the best thing on Earth. It was going to make programming really easy and predictable. It was going to Fix Everything.
Well, it didn’t, and I haven’t heard anyone use the term “Top-Down Programming” since the late ’80s. If you look it up, there’s some concept by the same name out there. (I’m not sure whether it’s even the same idea.) If you don’t look it up, you are unlikely to come across it on your own.
This is because Top Down Programming didn’t Fix Everything.
OOP
Some years later, I learned about Object-Oriented Programming. This was another methodology that was going to Fix Everything. To be sure, I thought it was really cool. I still use OOP today, of course.
It’s cool, and it’s useful, but it doesn’t Fix Everything. It gives you the ability to take awful spaghetti code and wrap it in scopes. You gain the ability to shoot yourself in the foot with the aid of curly braces.
More Languages; Same Problems
I branched out to other languages. I went to college and took every programming class I could register for, even if I didn’t need to. I took x86 assembly. Let me tell you — after ten years of BASIC, that was really something! It was fast, and I had direct access to the CPU. After that, I learned C, C++, and Pascal. (I used inline assembly in those languages whenever I wanted to do fast graphics.) After that, I learned Java, and took a couple of classes that required ASP.NET and Visual Basic. (Eugh.) I also picked up C# on my own. That was pretty easy.
Most of these languages are object-oriented. I found that very useful; but even so, there was still something missing. OOP, like Top-Down Programming, turned out not to be the panacea it was claimed to be. My code, while functional, was still brittle and scary to maintain.
Web Programming
In college, I wrote desktop applications. You ran them on your machine, and they had nothing to do with the Internet.
In my spare time, I taught myself PHP and MySQL. It was the late ’90s. I was still in college, and thinking about my career. I figured they’d come in handy. (In fact, they did; I got hired as a web developer before I was anywhere close to graduation.)
It was cool to program websites. In those days, PHP was at version 3, and had only recently had an object-oriented layer grafted onto its many built-in functions. I had the same problems in that language that I did in all of the others.
MVC and MVP
MVC, which had been designed twenty years prior by Trygve Reenskaug to enable clean UI coding, had been co-opted by NeXT Computer. This was the way to design Web apps, they said. This was going to Fix Everything!
Well, guess what?
The way MVC is usually used in modern Web apps is not in keeping with the original design. Reenskaug specified one Model, one View, and one Controller (a triplet) for each UI component. These triplets were supposed to exist in their own little bubble-worlds, and to talk to other MVC triplets using a message-passing system. (It was, after all, written in Smalltalk.)
MVC triplets were each supposed to solve a small, well-defined problem; and to cooperate with other MVC triplets that were in turn solving their own small, well-defined problems. They were not supposed to get directly involved in each others’ business.
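To make that concrete, here is a rough sketch of one self-contained triplet, written in TypeScript rather than Smalltalk. The message bus, the counter component, and every name in it are invented for illustration; they aren’t taken from Reenskaug or from any framework.

```typescript
// A minimal, hypothetical MVC triplet that talks to the rest of the
// application only through messages. Illustrative only.

type Message = { topic: string; payload: unknown };

class MessageBus {
  private handlers = new Map<string, Array<(m: Message) => void>>();

  subscribe(topic: string, handler: (m: Message) => void): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  publish(message: Message): void {
    (this.handlers.get(message.topic) ?? []).forEach((h) => h(message));
  }
}

// The triplet for a single UI component: a counter.
class CounterModel {
  private count = 0;
  increment(): number { return ++this.count; }
}

class CounterView {
  render(count: number): void {
    console.log(`Count: ${count}`);
  }
}

class CounterController {
  constructor(
    private readonly model: CounterModel,
    private readonly view: CounterView,
    bus: MessageBus,
  ) {
    // The controller reacts to messages; other triplets never reach in directly.
    bus.subscribe("counter.clicked", () => {
      const count = this.model.increment();
      this.view.render(count);
      bus.publish({ topic: "counter.changed", payload: count });
    });
  }
}

// Wiring: another triplet (or a test) only ever sends messages.
const bus = new MessageBus();
new CounterController(new CounterModel(), new CounterView(), bus);
bus.publish({ topic: "counter.clicked", payload: null });
```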
Instead, the MVC implementation I found everywhere I looked resulted in everything being tightly coupled with everything else. You could easily have thirty or so Controllers interacting directly with twenty or so Models. Classes were doing other classes’ jobs. Methods were doing other methods’ jobs! If you changed something, you never knew what might happen. You could change one small detail, and it would break eight things in places you’d never guess. It could take weeks to blunder across all the damage.
The code was still hard to understand, and it was still hard to maintain!
I looked into MVP; and while it was slightly more sensible to me, it was just a minor variant of MVC. I used that for a little while, and ran into the same old problems.
Disappointingly, MVC and MVP bore the same hallmarks as Top-Down Programming and OOP: they were useful, but not panaceas. They chipped away at the problem of code that was hard to understand and maintain. Still, it wasn’t enough. The problems I faced as a beginning BASIC programmer all the way back in 1986 were still with me.
I got into microcontroller development for a while. I wrote software in C and C++. It was good to get back to them after so long. I learned about heuristics, and wrote code that did amazing things. However, this was not to be the core of my career; there is simply not enough of that kind of work to go around. I decided to get back to my Web development roots. That’s when everything changed for the better.
Design Patterns
This was another methodology that was going to Fix Everything.
It doesn’t. You can write things that conform to any Design Pattern you can name, in abject spaghetti, and encounter endless nightmares trying to maintain it.
Design Patterns can be useful, but they are not a panacea. They are a toolbox. They are not a plan. I still use them today, and happily so, but I don’t try to shoehorn every last thing into them.
SOLID! Finally!
I applied to work at a company that requires all code to be written according to the SOLID Design Principles. It was part of the “entrance exam.” Before you could get a real interview, you had to submit a fairly complex program that demonstrated (among other things) SOLID. I’d heard of SOLID before, but never looked into it.
I took an old program I’d written in college that did most of what they wanted, and began to adapt it to their specifications. It would require recursive algorithms and heuristics. No problem, I thought. I got to work.
The program was coming together, but something was broken… you know… somewhere. It was almost working, but not quite. Something was off somewhere in the recursive logic. Debugging that sort of problem can be extremely tricky. Indeed, I tried many tricks, but to no avail.
Well, I was losing traction, and the deadline was looming. I decided to start refactoring it into SOLID-compliant code. (I had to anyway, right?)
That was when something completely magical happened.
The code started working. I’d made no attempt to fix the broken behavior; I was only refactoring it to comply with SOLID.
I had about four times as many classes as before. They were better-named than their predecessors had been. Their methods were also better-named, and had better-defined signatures and behavior.
I couldn’t help but notice that the code was now significantly easier to understand than most of what I’d written before, in any language I ever used, with anything approaching the level of complexity required by this application.
I began to see that programming could, in fact, be as fun and easy as I had always wished for. I began to see that there could be a panacea. My SOLID skills were not yet developed enough to get the full benefit. Still, I was way better off than I had ever been before.
I resolved that I would always use SOLID from that point on. I taught it to myself as thoroughly as possible. I memorized every concept well enough to explain it to myself in simple terms.
I’ve never regretted this decision. I didn’t really get it as well then as I do now, but that problem would soon see a solution.
Test-Driven Development! Finally!
This was another revelation.
Even with SOLID, I was still writing code that was too complex to understand, and a little scary to maintain. To be sure, it was far better than it had been before, but my understanding was still not fully formed. I could describe SOLID, but still wrote code that implemented it imperfectly. Test-driven development is what more or less forced me to tighten it down.
I think it’s very accurate to say that SOLID and TDD go hand-in-hand. Each plays to the other’s strengths. If you practice only one of these methodologies, I sincerely doubt that you will ever enjoy its full advantages.
I was slow to learn. TDD is a fairly difficult discipline to master (and I haven’t mastered it yet), but it’s worth it. I began to write code with tests that were really just integration tests. They were basically the automated version of using Postman or some similar tool to send a request, and examine the response. Nowhere near enough coverage, nowhere near enough tests and assertions. Still, the early results were encouraging; I could change something, and see in moments whether it broke something else. (Not everything was covered, so bugs could still slip by; but it was far better than not using tests at all.)
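For contrast, here is roughly what those two kinds of tests look like. This is a hedged sketch in TypeScript with Jest; the endpoint and the PriceCalculator class are made up for illustration. The first test is the automated-Postman kind I was writing at the time; the second exercises one method as an individual piece of logic.

```typescript
import { test, expect } from "@jest/globals";
import { PriceCalculator } from "./PriceCalculator"; // hypothetical class under test

// An integration-style test: fire a request at a running app and check
// the response, much like automating Postman.
test("GET /orders/42 returns the order", async () => {
  const response = await fetch("http://localhost:3000/orders/42"); // hypothetical endpoint
  const body = await response.json();
  expect(response.status).toBe(200);
  expect(body.id).toBe(42);
});

// A unit test: one method, no server, no database, no network.
test("applies a 10% discount to totals over 100", () => {
  const calculator = new PriceCalculator();
  expect(calculator.total(200)).toBe(180);
});
```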
As I did more TDD, I found myself being not-so-gently pushed in the direction of writing smaller methods that were easier to test. This made me better at SOLID. My methods became much tighter and more succinct than they had ever been before. Cyclomatic complexity plummeted; and when that plummets, so does unpredictability. The code became easier to understand; and in so doing, it also became easier to maintain. This was it!
SOLID and TDD. They both claimed to Make Everything Better, and they both actually Make Everything Better.
For example, SOLID tells you about Single Responsibility. You can say to yourself, “This has only one reason to change,” about a method that’s fifteen lines long and has three if-blocks (two of which are nested). That method is hard to test! It’s a right pain in the coccyx!!! TDD is what finally wears you down, what makes you break that method up into smaller pieces.
Once you break that method up into several smaller methods, the code becomes easier to understand, easier to maintain, and easier to use in new ways. Each method is concerned only with its own business. The code is similar to its original form, but now it’s less brittle. It used to give you a reason to code around it, but it doesn’t anymore.
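Here is a rough before-and-after sketch of the kind of breakup I mean, in TypeScript. The invoice-mailing example, its names, and its rules are all invented for illustration.

```typescript
// Before: one method with several reasons to change and nested branching.
class InvoiceMailerBefore {
  send(invoice: { total: number; email: string; overdue: boolean }): string {
    let subject: string;
    if (invoice.overdue) {
      if (invoice.total > 1000) {
        subject = "URGENT: large overdue invoice";
      } else {
        subject = "Reminder: overdue invoice";
      }
    } else {
      subject = "Your invoice";
    }
    if (!invoice.email.includes("@")) {
      throw new Error("invalid email address");
    }
    return `${subject} -> ${invoice.email}`;
  }
}

// After: each piece has one job, and each can be unit-tested on its own.
class SubjectLine {
  subjectFor(invoice: { total: number; overdue: boolean }): string {
    if (!invoice.overdue) return "Your invoice";
    return invoice.total > 1000
      ? "URGENT: large overdue invoice"
      : "Reminder: overdue invoice";
  }
}

class EmailAddress {
  constructor(readonly value: string) {
    if (!value.includes("@")) throw new Error("invalid email address");
  }
}

class InvoiceMailer {
  constructor(private readonly subjects = new SubjectLine()) {}

  send(invoice: { total: number; email: string; overdue: boolean }): string {
    const to = new EmailAddress(invoice.email);
    return `${this.subjects.subjectFor(invoice)} -> ${to.value}`;
  }
}
```

Each of those small classes can now be covered by two or three short, obvious unit tests, instead of one long test that has to thread its way through every branch at once.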
100% Code Coverage
Initially, I got up to about 90% code coverage. This percentage was a mix of my old integration-test-only work, and newer work that actually involved unit-testing every method as an individual piece of logic.
I knew about SOLID because of Robert C. Martin, and it had served me very well, so I wondered what his opinion might be on code coverage.
100%, he said. Every line and branch, he said.
I gave it a shot.
I found the same kinds of bugs in the last 10% that I’d found elsewhere. I began to realize that the 70 or 80 percent some people were touting (and feeling good about) was not adequate. Uncle Bob was right again!
Finishing that last 10% made me find bugs that would’ve eventually caused problems in production. It made me find design flaws and things that “accidentally worked,” but would’ve broken under perfectly normal, expected, acceptable use cases that I simply hadn’t written tests for yet.
Can you write awful tests to get to 100%? Sure you can!
You can also write awful tests to get to the 80% that you think is enough. The problem with awful tests is not how many of them you write, or how many lines they cover. It’s that they’re awful.
Therefore, my advice about code coverage — hard-won through many mistakes that you now don’t have to make — is to get to 100% code coverage with good tests.
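To make the distinction concrete, here is a hedged sketch in TypeScript with Jest; the divide function is invented. Both tests execute every line and branch, so both produce 100% coverage, but only one of them can catch a regression.

```typescript
import { test, expect } from "@jest/globals";

// A deliberately trivial function, invented for illustration.
export function divide(a: number, b: number): number {
  if (b === 0) throw new Error("division by zero");
  return a / b;
}

// An awful test: it touches every line and branch, so the coverage report
// says 100%, but it asserts nothing. A broken divide() would still pass.
test("divide runs", () => {
  try { divide(10, 2); divide(1, 0); } catch { /* ignored */ }
});

// A good test at the same 100% coverage: it pins the behavior down,
// including the branch that throws.
test("divide returns the quotient and rejects zero divisors", () => {
  expect(divide(10, 2)).toBe(5);
  expect(() => divide(1, 0)).toThrow("division by zero");
});
```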
As Uncle Bob will tell you, you’re never really done with your tests; however, there is no rational stopping point below 100%.
Don’t ship code that might work. Write tests for all of it.
Worried about meeting deadlines? Fine. You should also worry about making your customers test what you didn’t feel like testing. You should worry about shipping bad software. You should worry about whether you’d even know it was bad. If you don’t have 100% coverage with good tests, then you can’t know.
You should worry about how much time you’re going to have to spend going back and fixing your design flaws.
What next?
I think I’ll find more methodologies that will reliably make programming easier and less risky. I look forward to learning about them.
If you are tired of getting trapped in the same old problems — code that’s too hard to understand and maintain — I would strongly encourage you to do likewise.
Right now, my biggest weakness (as I see it) is that while my TDD skills have been steadily improving, there is still that last bit of internal resistance to doing full TDD.
Oh, sure. I start by creating a new Class. I stub out fifteen or so methods. Just the method signatures and some braces, no actual code at all. Then, I tell my IDE to generate a test. I select all the methods, press the button, and it gives me blank tests for them. I write the first test, and code the first method. I step through it with the debugger while it’s being tested, make sure everything looks right, make sure the test is actually running, make sure that exceptions are actually getting thrown when they should, and so on.
After a while, I’ll realize that I need to alter or add some code elsewhere, and I’ll just do it. I won’t write the test just then. I’ll do it later, you know. It’ll be fine. I’ll just run code coverage before I commit and get it then, right?
This is what Uncle Bob has to say about the matter:
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
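In practice, following those three rules feels something like this. The sketch below is TypeScript with Jest, and the Stack example is mine, not Uncle Bob’s.

```typescript
import { test, expect } from "@jest/globals";

// Red: no production code yet (rule 1), and no more of this test than is
// needed to fail (rule 2). Until the Stack class below exists, it doesn't
// even compile, and a compilation failure counts as a failure.
test("a new stack is empty", () => {
  expect(new Stack().isEmpty()).toBe(true);
});

// Green: no more production code than is needed to make that one failing
// test pass (rule 3).
class Stack {
  private items: number[] = [];

  isEmpty(): boolean {
    return this.items.length === 0;
  }

  // The next failing test ("push makes the stack non-empty") is what earns
  // the right to add push(), pop(), and so on.
}
```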
At the moment, I’m meeting these requirements most of the time, but not all of the time. My current task, the thing I’m doing to improve my discipline, is to address this. It would be more comfortable to go on as before, but it wouldn’t be better to go on as before.
Further reading: Untested Code is Dark Matter