You're Doing Interfaces Wrong
Interfaces are everywhere, yet we still have tight coupling and inflexibility. Dependency inversion is the answer.
Software projects run over budget and over time, and produce software that is inefficient, unmaintainable, or ineffective. This is the "Software Crisis".
In October 1968, the NATO Science Committee held a conference in Garmisch, Germany. This was the first international conference on software engineering (a then-controversial term). Some 52 experts from around the globe were brought together for four days to discuss a growing and uncomfortable problem: nothing ever seemed to go according to plan.
In 1972, Edsger W. Dijkstra (who had attended the conference) described this as a "Software Crisis". He outlined how the explosive growth of computing power and novel languages like ALGOL and FORTRAN created opportunities that the newly forged programming community was ill-equipped to exploit. Despite best efforts:
As of writing, more than half a century later, these issues are still just as relevant and familiar, though perhaps today we would term this a "Software Normality". While there have been improvements in all aspects of The Software Crisis, the problem is far from solved, and may never be.
Here are some recent projects that demonstrate The Software Crisis in action:
Projects run over budget because quality and deadlines are easier to achieve when given more money, whilst staunchly enforcing a budget can produce a product that is not fit for use, or one that misses a critical deadline, wasting the capital anyway. Since developer time is usually the largest cost in development, understanding why projects are incorrect and/or over-time is key to understanding why they are over-budget.
When a developer estimates the time to complete a feature, it is common practice to double or triple their estimate. This pessimistic and pride-offending practice is proven necessary time and time again. Developers consistently encounter problems that reveal themselves during development but aren't obvious from the outset. Logic would suggest that the developer would factor this phenomenon into their estimates as they mature, but this doesn't generally happen.
An experiment run in 2011 by Chance et al. may hold the answer to why even experienced programmers estimate poorly. In the experiment:
Unsurprisingly, the participants received high marks.
The same participants were given a new test without the answers and asked to estimate their accuracy afterwards.
They were appallingly optimistic compared to the control group. They focused on the good memories of doing well, and failed to heed the bad memories of correcting their own answers. Or to put it another way, they ignored the hard evidence of their poor performance from their own marking and corrections, in favour of believing in their own intelligence. The self-deception is powerful, immediate and, further tests concluded, long-lasting.
Programmers rating their future performance are under the same influences. The successes of the past cloud objectivity when assessing complexity and ability. As more time passes and a developer becomes more experienced and validated, the developer's ego becomes more of an obstacle to estimating accurately. In addition, an experienced developer is more likely to be involved in making decisions around planning and budgeting.
When a child of 3 years is given a box of Smarties, opens it and finds it contains pencils, they're naturally disappointed. But fascinatingly, if they're asked to imagine what some other child might think is in the box, they will say "pencils". Even when asked what they themselves thought was in the box before it was opened, they will still say "pencils". Children so young assume that what they know now, they always knew, and everyone else already knows. ("Theory of mind - Smarties task and Sally-Anne task")
As adults, experienced developers have a fuller "theory of mind", but we still tend to use our own mental processes as the template by which we predict others' behaviour, and even neglect to consider the lack of knowledge in others for sufficiently complex tasks (Keysar et al., 2003). An experienced developer may lean towards the assumption that a less experienced developer could create something in the way they would, or in the time they would, or would make as many mistakes as they would. This is disastrous for allocating time and budget.
Even worse, developers paradoxically overestimate their own abilities whilst assuming less experienced developers have abilities similar to their own. A study specifically of software developers surveyed two software companies and found that 32% of developers at one, and 42% at the other, rated themselves as being in the top 5% of developers at their company (https://www.youtube.com/watch?v=pOLmD_WVY-E).
Projects fail to meet requirements due to errors in two key areas:
Software development is hard. Software made for businesses must suit their needs and solve problems in unique ways, but developers start with little knowledge about the business.
There are two sides to this issue - customers and software engineers. Customers may not know exactly what they need, though they know what they want. This can be attributed to a few causes:
On the other side, software engineers can also be responsible for "fudging requirements":
For the last point, personal preferences of engineers can also apply to the point of irrationality. A famous example of this is the "GOTO" statement, which, while identified as "harmful" as far back as 1968 (Dijkstra), was in such widespread use that it was not until the early 90s that GOTO started to really fall out of favour, culminating in forms of it being deleted from Fortran in 1995.
Aside from requirements gathering, raw programming ability is important, yet many companies hire developers who are self-taught or hold only basic certificates.
Difficulties in creating software also go way beyond programming ability. Medium to large projects also require discipline and structure to ensure:
These are all areas that even experienced and certified programmers can find difficult to uphold. For a project to be successful, programmers have to simultaneously achieve all the above objectives, which are often in conflict. Some examples include:
Then, of course there are also just normal human mistakes.
🗨️
Programmers call their errors “bugs” to preserve their sanity; that number of “mistakes” would not be psychologically acceptable. - M. E. Hopkins, Researcher at IBM, 1969
The culture and support around developers is also critical - optimal developer culture is a balancing act. Without a tool to track known issues, issues will be forgotten; but a complex or slow tool that no one wants to use will produce a similar effect. Without a certain amount of accountability, cost to reputation or personal investment in a project, programmers won't be diligent in checking for mistakes; too much accountability leads to developer paralysis. Accurately determining who caused a problem is also a time-consuming and potentially demoralising exercise, yet without it, fewer lessons can be learned from failure and the risk of repeated mistakes goes unmanaged.
Developer culture extends into the code itself. Creating consistent, well-designed, unit-testable, accurate and stable code takes time, and time is often short. In teams where senior developers oversee junior developers, the skill gap between different programmers can mean that achieving each objective simultaneously to the utmost standard is even more untenable. An acceptable code standard must be agreed upon, and must take into account the relative skills of team members as well as budgets.
Can you stop a tsunami if you see it coming? In order to manage a project, a manager must be aware of its state and correct as needed. Project management tools excel at seeing problems coming, but have only limited influence to prevent them. At the heart of it, if we can identify why projects fail and chip away at The Software Crisis, we can give project managers better tools to handle problems.
Currently, Project Managers have these limited controls over projects:
Each of these aspects may provide more effective use of time, but none of them provides more time. Even re-designing scope just means the project manager is prioritizing the must-haves over the nice-to-haves by removing the optional objectives that are no longer achievable.
Project Managers can sometimes control the below, but often they are not privileged to do so:
Project Managers cannot stop a tsunami. Example tsunamis include:
People outside of the Software Engineering field commonly believe testers verify that software is free of bugs.
People who are testers commonly believe their role is to verify the minimum acceptance criteria for software.
This leads to a disconnect in the expectations placed on testers. Testers want clear acceptance criteria for software from customers. Customers want bug-free code - what could be clearer? But testing for bug-free code is an impossible exercise. As far back as 1936, Turing proved it is impossible to create an algorithm that can decide, in general, whether another algorithm will run forever (the halting problem). We don't even know whether an algorithm that detects merely "long" loops can itself be kept acceptably fast - and infinite loops are only one category of bug that could be present in a system. There are many more errors which are not detectable except under certain conditions, or at certain times, and identifying all of these might require testing the system in literally every conceivable combination of possible actions and states. This is impractical on simple systems, and impossible on large stateful systems.
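To make that impossibility concrete, here is a minimal sketch of Turing's argument in Python. The `halts` oracle and `paradox` function are purely illustrative; no real implementation of `halts` can exist, which is exactly the point:

```python
def halts(program, data) -> bool:
    """Hypothetical oracle: True if program(data) eventually stops."""
    raise NotImplementedError  # Turing proved this cannot be implemented in general


def paradox(program):
    # Do the opposite of whatever the oracle predicts `program` does
    # when fed itself as input.
    if halts(program, program):
        while True:   # the oracle said "halts", so loop forever
            pass
    # the oracle said "loops forever", so halt immediately


# Consider paradox(paradox): if halts(paradox, paradox) returns True,
# paradox(paradox) loops forever; if it returns False, it halts straight away.
# Either way the oracle is wrong, so no general infinite-loop detector
# (and no "prove it bug-free" tester) can exist.
```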
Testing is also under-appreciated in the cost of development; developers are often given time to correct unintended consequences, but testers are then pushed to pass code quickly to make up for that lost time. This naturally leads to more undetected issues.
Finally, issues are often only encountered once more of the project has been realised. Flaws may be introduced very early on, but it may not be apparent that there is a problem until much later in the development cycle. If the flaw is a requirement miscommunication, it may not be encountered until delivery. This limits the ability to recover and adjust, and may have a widespread impact on the rest of the body of work.
If we want to fix The Software Crisis, we need to tackle the many multifaceted and complex issues at its core. If the Crisis is to be solved, we need not one, but several magic bullets:
We have in fact made substantial progress since the 60s in all these areas, though we haven't truly "solved" these issues. We now have software methodologies which offer various improvements to help manage projects, our education for engineers generally includes requirements-gathering training, and automated unit tests identify programmer mistakes, at least some of the time.
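As a small illustration of that last point, here is a minimal unit test sketch; the helper function and its off-by-one bug are purely hypothetical, but they show the kind of programmer mistake a test catches automatically:

```python
import unittest


def sum_first_n(values, n):
    """Hypothetical helper: intended to sum the first n items."""
    return sum(values[:n - 1])  # off-by-one bug: should be values[:n]


class SumFirstNTest(unittest.TestCase):
    def test_sums_first_three_items(self):
        # Fails as written, pointing straight at the off-by-one mistake above.
        self.assertEqual(sum_first_n([1, 2, 3, 4], 3), 6)


if __name__ == "__main__":
    unittest.main()
```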
Computers have been essential for helping alleviate The Software Crisis so far. We already rely on compilers to find syntax errors, auto-coders to suggest better alternatives, and tools to schedule and keep track of tasks/tickets, merge code from many sources, deploy, and so on. It is not so unexpected, then, that the recent explosion in AI capabilities offers further potential solutions. AI-leveraged tools could:
On that last point though - we may still see AI-assisted tools stubbornly refused until the next generation of programmers comes in, much as programmers of old refused to give up their precious GOTO statements despite clear disadvantages!
In the absence of AI tools ready today, all I can suggest for avoiding the worst of the crisis is to consider the above dangers in all your planning and management, and budget accordingly. There is good reason why experienced and talented Project Managers, developers and testers are worth so much: they help avoid this myriad of issues. It would be nice to have this article solve all those problems, but 60 years on, our entire industry has only managed to stop drowning and start treading water; we have yet to out-swim the tsunami.
cover image: http://homepages.cs.ncl.ac.uk/brian.randell/NATO/N1968/GROUP7.html