We’ve come a long way when it comes to describing and managing functional requirements. From long lists of “the system shall…”, via massive use case documents, to executable acceptance tests and living documentation. In comparison, the way we commonly describe non-functional requirements seems like a relic of the past. Too often I encounter documents that are simply a list of every item in the ISO 9126 standard (or similar) with a short statement about the system. We all know how important non-functional requirements can be, but the way they are usually presented doesn’t communicate this very well. I think we can do better.
First of all, I can’t be the only one who finds the distinction between functional and non-functional requirements confusing. Many of the things commonly described as “non-functional” are in fact very functional. After all, all observable behaviour is a function of the system. Why do we distinguish them at all? I suspect it has to do with how they can be verified. The way a requirement can be verified has a lot of bearing on how it is written down and on its role in the development process.
If we look at requirements from that perspective though, I think there are actually four categories that can be distinguished:
- Requirements that we typically describe as functional are characterized by being verifiable at run time in a basic test environment. They also relate to specific functionality, which means that during development they can be captured in a user story (with acceptance tests) and can be considered “done” at some point.
- There are also requirements that are verifiable at run time in a test environment but deal with cross-cutting concerns, so they can’t easily be captured in an individual user story. Some examples are authorization, logging, and accessibility. We should still aim to describe and verify them with automated acceptance tests though.
- Next there are requirements that describe observable behaviour at run time, but are not easily verified in a test environment. Examples are performance, scalability, and disaster recovery. Some companies have solved this problem by testing in production (think of A/B testing for usability or Netflix’s Chaos Monkey), but this at least requires specialized tools.
- The last category would be the true non-functionals. Those aspects that Wikipedia describes as “evolution qualities”. Not what the software does, but how it is made. These requirements can essentially only be verified by opening the black box and looking at the code (of course assisted by code quality metrics).
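The second category is worth making concrete: because the concern cuts across the whole system, a single test can sweep every feature at once instead of living in one user story. Here is a minimal sketch in Python; the `App` and `Client` classes are hypothetical stand-ins for whatever web framework and HTTP client a real project would use.

```python
# Sketch: verifying a cross-cutting requirement (authorization) with one
# sweeping acceptance test. App and Client are hypothetical stand-ins
# for a real framework's routing table and an HTTP client.

PUBLIC_ROUTES = {"/login", "/health"}

class App:
    def __init__(self):
        # route -> requires_auth; a real app would derive this from its
        # routing table and middleware configuration
        self.routes = {"/login": False, "/health": False,
                       "/orders": True, "/admin": True}

class Client:
    """Simulated HTTP client that issues unauthenticated requests."""
    def __init__(self, app):
        self.app = app

    def get(self, route):
        # protected routes reject requests without credentials
        return 401 if self.app.routes[route] else 200

def unprotected_routes(app, client):
    """Routes that should require auth but answer an anonymous request."""
    return [r for r in app.routes
            if r not in PUBLIC_ROUTES and client.get(r) != 401]

app = App()
print(unprotected_routes(app, Client(app)))  # -> []
```

The point is the shape of the test, not the toy classes: one assertion over every route expresses the whole requirement, and a newly added endpoint is covered automatically.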
Since we already have good ways of describing the first two categories, my main issue is mostly with the latter two.
We are told that it’s important to separate the user need from the suggested solution, since we all know the trouble with people asking for faster horses. I would argue though, that with functional requirements there are many cases where the customer does actually know what she wants, so the distinction becomes somewhat artificial. She asks for a feature – we build the feature – everybody goes home happy.
However, when it comes to requirements on how the system should be made the distinction is suddenly very obvious. In a typical customer-supplier scenario the customer is not expected to be the one dictating how the system should be made. Actually deciding and prescribing that already has a name: architecture. The customer’s role is simply to provide input to help make those decisions.
When customers merely take ISO 9126 and turn it into a list of requirements, things can go wrong in a couple of ways. For one, as they feel they have to write down something for every quality attribute, many statements tend to be too generic and boil down to: “the system should be well made”. Maybe I’m too optimistic about the state of the industry, but the customer really shouldn’t need to be telling the supplier that software should be testable and maintainable. That’s like your hairdresser asking you how you want your hair done, and you replying “competently”.
On the other side there’s the pitfall of being too specific and requiring things that should really be up to the supplier. That would be like telling your hairdresser which scissors to use. I encounter requirements like “there should be no data duplication”, as if there are no legitimate reasons why duplicating data may be the best solution.
Using ISO 9126 as a check-list to make sure you didn’t forget anything is perfectly fine, but please don’t treat it as a form where you have to fill in every field. Before writing anything down you should always ask yourself: how does this support making a decision? How does it help the development team choose one design over another? How does it help the customer choose one supplier over another?
For example, when I read a list of non-functional requirements, the statement that every web page should load within two seconds doesn’t give me much information. Why is this interesting? Are we expected to handle thousands of requests a second? Are we dealing with massive amounts of data? That’s the kind of stuff that helps me make decisions.
How to improve them
So, recognizing that non-functional requirements often aren’t really non-functional, or aren’t really proper requirements, how can we improve them? I’m not advocating a particular process or notation. I just want to avoid a situation in which a lot of stuff is written down but we have to search for the parts that are actually interesting. Instead let’s just start by stating what we know that matters about the problem domain, and use that to guide our conversation about the best solution.
Here are a couple of questions that can serve as conversation starters:
Security
Are we dealing with sensitive data? Is the system of particular interest to attackers?
Reliability
Realistically, what is the impact if the system goes down for a couple of hours every once in a while? Can we weigh the cost of that against the measures we need to take to avoid it?
Usability
Who are our users? What do they know? What skills do they have? What’s the environment like that they’ll be working in?
Efficiency / Performance
How many requests do we expect? Are there any particular peak times? Any huge file imports or exports we have to know about?
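Answers to these questions feed directly into the kind of back-of-envelope estimate that actually drives design decisions. A minimal sketch, where every number is an illustrative assumption rather than a real requirement:

```python
# Back-of-envelope capacity estimate. All figures below are
# illustrative assumptions, not requirements from a real project.
daily_requests = 2_000_000   # assumed requests per day
peak_share = 0.20            # assumed 20% of traffic in the busiest hour
peak_hour_seconds = 3600

avg_rps = daily_requests / (24 * 3600)
peak_rps = daily_requests * peak_share / peak_hour_seconds

print(f"average: {avg_rps:.0f} req/s, peak: {peak_rps:.0f} req/s")
# With these assumptions the peak is roughly five times the average,
# which says far more about the design than a bare latency target.
```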
Maintainability / Portability
I’ll admit that this is one of the hardest problems in software design, but: what parts of the requirements and environment can we expect to change in the future (for example due to laws and regulations)? Which parts likely won’t change? Is this really just a temporary solution that doesn’t need to be maintained (I’m sure they exist out there)?
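One concrete way the answer shapes a design: a rule you expect to change gets isolated behind a small interface, so the change stays local. A minimal sketch, where the regulation-driven VAT rate plays the role of the volatile part (the class names and the rate itself are made up for illustration):

```python
# Sketch: isolating a rule that is expected to change (a statutory VAT
# rate) behind one small interface. Names and the rate are illustrative
# assumptions, not domain facts.

class TaxPolicy:
    """The single place that encodes the regulation-driven rule."""
    VAT_RATE = 0.25  # assumed current statutory rate

    def vat(self, net_amount):
        return net_amount * self.VAT_RATE

def invoice_total(net_amount, policy):
    # the rest of the system depends only on the interface,
    # never on the current rate itself
    return net_amount + policy.vat(net_amount)

print(invoice_total(100.0, TaxPolicy()))  # -> 125.0
```

When the regulator moves the rate, exactly one constant changes; knowing up front that this is the part likely to move is what justifies the extra indirection.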