
Specifications are your job
Are you an engineer? Then detailed specifications are your job. Not the product manager, not the designer, you.
It’s super common to hear engineers complaining about a lack of detailed specifications, or a lack of clear scope on a project. The thing is, this is their job as much as anyone else’s. The only person who understands the level of detail required to specify and scope work clearly enough to build accurate, working software is the engineer - it can’t be anybody else’s job but yours. So quit complaining, and get specifying!
Often what we are actually seeing here is a lack of tolerance for ambiguity, and this is something that can be coached.
What do you mean by ambiguity?
The irony of this question is only outdone by its contrivance. But here by ambiguity I mean an idea that is not well formed enough to be turned into working software. Every product, every feature, starts as an idea - usually just a single sentence.
What if we had Uber but for food?
Perhaps this is obvious, but if it’s so obvious then why do so many people (it’s not exclusive to engineers!) balk when they hear an idea? “This is crazy, what about this? What about that? When are we meant to deliver this by? Who’s going to drive? Who’s going to cook? Why didn’t you loop us in earlier!?” Well, dear colleague, because I hadn’t had the idea, yet!
Perhaps you are starting to see some people you know, perhaps yourself, represented in this barrage of questions. This is such a common reaction I’m sure you’ve seen it - and normally for ideas far less complex than this. This is what I mean by ambiguity, and this is what I mean by the lack of tolerance for it. Nobody has specified exactly what they mean by “Uber” or “food” or “we”, and that leads to panic and frustration, rather than a list of questions to answer and problems to solve.
An inability to tolerate ambiguity is expected at the junior levels. But as engineers (and other disciplines, too!) become more senior it’s expected they will be able to tolerate ambiguity. In fact, I believe a senior engineer, ideally, should be able to tolerate any level of ambiguity - they should be well practiced in the process of taking an idea to production by this point in their career, and that means taking something from ambiguity through to clarity, and then following through with build and publish, whatever that looks like in your organisation.
In the world of software, there are not often pre-defined paths within your organisation, within your codebase, that readily solve for new product ideas and features. It’s your job as the engineer to find them, enumerate them, negotiate and match them up with product and design, and produce a detailed specification and scope together.
As I said before, you can’t expect non-engineers to understand the level of detail required to specify software - and you shouldn’t, because that’s exactly what you are being paid good money to do.
So how do I learn how to do this?
This is something we should be teaching and coaching right from the junior levels.
My favourite way to introduce this is using a kind of “test plan” to help build a specification. This approach is very detailed, and only really effective for very small features - the kind of thing you’d expect juniors and early-mid developers to be doing - it’s too fine a tool for entire products or systems. But ideally, as I say, that’s when we’re teaching this stuff anyway.
This approach actually teaches multiple things at once:
- how to approach testing, and QA in general
- how to “discover” and write detailed specifications
- how to visualise cost-of-ownership and its growth characteristics
- leads nicely into automated testing strategy
This idea is not unique, but here’s how we do it. We build a table (spreadsheet tools work perfectly for this) - down the Y-axis, each row represents a step; across the X-axis, each column represents a “test mode” - I’ll explain this in a second.
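As a sketch, the same grid can be modelled in a few lines of code - every name here is illustrative, not from any real tool:

```python
# A minimal sketch of the test plan's shape: rows (steps) by columns
# ("test modes"). The mode names and step text are illustrative only.
MODES = ["Desktop Chrome", "Chrome Android", "Safari iOS", "Dark mode"]

# The first row is the test case title; the rest are its steps.
ROWS = [
    "Uploading a file without an account leads to signup",
    "Log out of any accounts, and proceed to the home page",
    "Upload any valid jpeg file less than 100mb",
    "Observe the signup dialog is shown",
]

# Each cell starts empty and later holds a result ("P" or "F").
grid = {(row, mode): None for row in ROWS for mode in MODES}

print(len(grid))  # 4 rows x 4 modes = 16 cells before anything is recorded
```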
Test Cases
For each thing you expect this feature to do (or not do) we create a test case - embolden that row. That might say something like “Uploading a file without an account leads to signup”. We want to keep these instructions high level - a good rule of thumb is that anyone in the organisation should be able to read this plan and execute it; they should be free of technical details.
Underneath that row, we specify each step. There are three kinds of steps:
- Arrange - something that’s simply required to ensure the system is in the state you want it for testing
- Act - an action that represents the actual test itself
- Assert - a step that checks the result of the action, and determines if the test was successful or not
The astute among you have identified the familiar AAA pattern common in TDD / unit testing practice - this is where we get that nice path leading directly into automated testing once these test plans are mastered.
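To make that parallel concrete, here’s the same Arrange-Act-Assert shape as a plain unit test - `FileUploader` is a hypothetical stand-in for the system under test, not anything from a real codebase:

```python
# The three step kinds from the manual plan map directly onto the
# AAA structure of a unit test. FileUploader is a made-up example class.

class FileUploader:
    def __init__(self, logged_in: bool):
        self.logged_in = logged_in

    def upload(self, filename: str) -> str:
        # An anonymous upload should lead to signup rather than succeed.
        return "upload_ok" if self.logged_in else "signup_dialog"

def test_upload_without_account_leads_to_signup():
    # Arrange: put the system in the state we need (logged out)
    uploader = FileUploader(logged_in=False)
    # Act: perform the action under test
    result = uploader.upload("photo.jpeg")
    # Assert: check the outcome and decide pass/fail
    assert result == "signup_dialog"

test_upload_without_account_leads_to_signup()
```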
Test Modes
By “test mode” here I mean each of the ways we want to test a particular feature, to ensure it covers all our expected modes. For web, this usually looks like: desktop browser, Chrome Android, Safari iOS, dark mode, maybe a list of responsive breakpoints - whatever distinct, orthogonal “modes” we expect the feature to operate under.
This gives us the familiar “multiplicative” cost effect of adding features - the classic one through my career as an engineer was “dark mode”, which is apparently completely indispensable… This feature sounds to laypeople like a single new feature added to a list, but because it’s actually a mode, we know the cost to build and maintain is actually multiplicative, rather than additive.
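The arithmetic is easy to sketch - the numbers below are made up, but the shape of the growth is the point:

```python
# Adding a test case grows the grid additively; adding a mode multiplies it.
cases, modes = 10, 3
cells = cases * modes                        # 30 cells to check

cells_after_new_case = (cases + 1) * modes   # 33: three more cells
cells_after_dark_mode = cases * (modes + 1)  # 40: ten more cells now, and
                                             # every future case costs one more

print(cells, cells_after_new_case, cells_after_dark_mode)
```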
An example
Ok, let’s put it all together and see what it looks like:
| Step | Action Type | Opera | FF | FF Dark Mode | Safari iOS |
|---|---|---|---|---|---|
| **Uploading a file without an account leads to signup** | | | | | |
| Log out of any accounts, and proceed to the home page | Arrange | | | | |
| Upload any valid jpeg file less than 100mb | Act | | | | |
| Observe the signup dialog is shown | Assert | P | P | F | |
What we can see here is that we have a very strange set of supported browsers - no, wait, that’s not the point.
What we can see here is a test case with three steps (many will have more, obviously) and four modes (likely more), and we’ve run this test case and indicated “P” for “pass” and “F” for “fail”. You can get more nuanced with the results as you get more competent, if you like, but pass/fail is where we should be starting.
Some of you have already noticed we didn’t test Safari iOS - this is intentional, and we’ll come back to it shortly.
Now, let me explain how this teaches all the things I mentioned above.
How to approach testing, and QA in general
While writing, reading and executing this test plan, we’re very likely to spot cases we didn’t consider. “What if the user uploads a monster jpeg? Or an invalid jpeg? Or tries to upload a movie?” These are all brand new test cases we could create - but more than likely you want to bring these back to your product manager & designer and talk them through. It may be acceptable to ignore these situations and launch anyway, for now! Perhaps you just want to add some simple error messages.
Whatever the case, this is the detailed specification that may not be obvious to product & design before you get in and “discover” these cases. Hopefully these particular ones are contrived examples, and you probably spotted these up-front, but trust me you will find strange edge cases this way that nobody has yet thought about until you got in there and had a detailed look.
This is all of our jobs, of course, but as the engineer you will find things that others won’t - for example, what about jpeg metadata, are we stripping that? Is that potentially PII?
It may not be your job to solve all these things on your own, but it’s certainly your job to help get in and find them. No kicking back and expecting a “detailed specification” with a “fully signed off design” ready for you to tap keys and then blame somebody else when there are edge cases that weren’t spotted.
How to “discover” and write detailed specifications
There are a few things to point out here. First of all, we’ve created a checklist. Each of the cells under each of the “test modes” that aligns with an “assert” step represents some expectation we have of the final product. This is first-principles QA - making a list and checking it twice. I was surprised by how enlightening this method was when I was taught it (as a “senior” engineer, already!) - and I’ve been consistently surprised by how it turns a lightbulb on for so many engineers when they are first exposed to the idea.
Secondly, we can clearly see that Safari iOS was not tested. This is the second first principle of QA (is that confusing?) - you don’t need to check everything. The goal of QA, and testing in general, is to help quantify risk - to know something about its size and shape. This way, the business can make an informed decision together about whether to launch - risk aware, not risk averse!
In this example we can tell our team “ok, so it’s working on Opera and Firefox - haven’t checked Safari but we expect it to be OK if it works in the others; dark mode failed, but this is only available to signed up users at the moment, so won’t affect any of the users interacting with this flow”. Excellent. Sounds like it’s time to copy & paste that code right into production!
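That kind of risk summary falls straight out of the grid. A sketch, assuming a simple dict-of-dicts shape for the recorded results:

```python
# Summarise a result grid into a risk report: which modes passed, which
# failed, and which simply weren't checked. The data shape is an assumption.
results = {
    "Opera":        {"signup dialog shown": "P"},
    "FF":           {"signup dialog shown": "P"},
    "FF Dark Mode": {"signup dialog shown": "F"},
    "Safari iOS":   {},   # untested - a known, quantified gap
}

def summarise(results):
    report = {}
    for mode, cells in results.items():
        if not cells:
            report[mode] = "untested"
        elif "F" in cells.values():
            report[mode] = "failed"
        else:
            report[mode] = "passed"
    return report

print(summarise(results))
```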
How to visualise maintenance cost and its growth characteristics
As I said before, this spreadsheet style clearly shows the multiplicative effect of “modal” features, eg. dark mode, supported devices, languages, etc. To most laypeople every feature looks the same, it’s a dot point in a one-dimensional list - but this technique clearly shows not all features are made equal, and these ones that have a “modal” effect end up multiplying all existing and future build and maintenance costs - clearly evident in the number of cells that could potentially now be marked “fail” at some point in the future.
This is a really, really effective way for junior engineers, who have limited real-world experience with long-term maintenance, to understand, visualise and develop good instincts around estimating the long-term cost of ownership of features. It might seem like a great idea to add “dark mode” support, but if you’re a startup still on a runway with just a handful of engineers, do you really want to multiply all your costs in such a way? What’s the value that dark mode is bringing in, for all its multiplicative penalties, as opposed to adding new “test cases”, i.e. features that your users really care about and will pay for?
Leads nicely into automated testing strategy
We used the Arrange-Act-Assert pattern for a reason. When engineers have done a few of these “test plans” they’ll start to complain about how slow they are, and how they have to duplicate it every time to run a new test. When they come to you with this complaint, you get to offer them the perfect solution - automated testing.
Depending on the nature of your work, this could look like unit tests, integration, or full UI-automation E2E style tests. Whatever the case, this is the perfect way to teach them and take them on this journey, because they have all the tests already written - so the hard part is done, now it’s just about how to select appropriate testing strategies for each test case and convert them into code.
This method really helps engineers understand, from a first-principles perspective, what automated testing is about. Very often junior (and even mid-level) engineers get confused by the different types of tests and the different layers - it’s because they don’t yet have the instincts to test intentionally.
But when they have a spreadsheet they built, and they understand it, the intent is theirs and it’s crystal clear - so translating this into automated tests helps them connect these abstract ideas about automated testing with real, tangible, intentional things they want to check are working and protect from regression. It’s crazy how much easier it is to teach automated testing when they’ve done these manual test plans first.
Wait, weren’t we talking about specifications? How did we end up at testing?
That’s the best bit: they’re the same thing. What is a system specification? It’s a list of expectations of that system. What is a test plan? It’s a list of expectations that you check against a system.
So, if you are a senior engineer constantly complaining about a lack of detailed specifications, or a lack of scope, instead of getting in and helping to build and shape them, then you should reconsider your role within the team. Specifications are your job, as much as (if not more than) anybody else’s. In most cases, you are going to be expected to verify this thing is working, and be held accountable for keeping it working - so it’s in your interest to figure out how you’re going to test it, both in advance of release and into the future to ensure it’s still working. And if you’re doing this, then because they are the same thing, you’re doing the specifications as well.
If you are a junior or mid-level engineer, then you should be practicing this stuff and making it second nature. As you get into the higher levels of engineering, and you’re dealing with bigger, more complex things, this level of detail of test cases may not always be useful. But the instincts and intuitions these techniques develop will serve you at all levels forever - it always looks something like this, it’s just the level of abstraction that changes.