Willmott argued that we now have the technological building blocks to build almost anything we can imagine. With enterprise-grade containers, APIs and a global cloud architecture, he said, we now have “the most powerful software and building blocks ever invented.” But our capacity to create new technological solutions brings four key challenges, Willmott says: security concerns, unexpected behaviors from what we create, technological deficiencies and negative social impacts.
Willmott suggests that, as technology creators, “we need to stop and think.” He proposes an approach to software design that draws on Simon Sinek’s “Start With Why” model. Understanding why we want to build something can help us better articulate how and what we build, and help us ensure that our goals are focused on an aspiration that benefits society as a whole: improving life or our goods and services, fostering social or economic change, or better managing the resources we consume.
Once software creators have articulated their “why,” Willmott says, there are five key principles of software ethics that can help them decide how to build:
1. Continuous Improvement
The first principle of software ethics requires a commitment to not shipping bad code. This means having the foresight to imagine how end users will use the code, so that software creators will not need to make breaking changes in later versions. This principle also highlights the importance of testing everything. One approach is to identify the customer journey of software users, map user stories and create unit tests that check against these user stories as each feature or component is built out.
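As a minimal sketch of mapping a user story to unit tests, consider a hypothetical story, “As a shopper, I can apply a discount code at checkout and see the reduced total.” The function, discount table and test names below are illustrative assumptions, not from the talk:

```python
# Hypothetical user story: "As a shopper, I can apply a discount code
# at checkout and see the reduced total."

def apply_discount(total: float, code: str) -> float:
    """Apply a discount code to an order total (illustrative only)."""
    discounts = {"SAVE10": 0.10}  # hypothetical discount table
    if code not in discounts:
        return total  # unknown codes leave the total unchanged
    return round(total * (1 - discounts[code]), 2)

# Unit tests that check the story's behavior as the feature is built out.
def test_valid_code_reduces_total():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "BOGUS") == 100.0
```

Each user story becomes one or more such tests, so regressions against the mapped customer journey are caught automatically.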
Willmott pointed to APIs from Stripe and Google as good examples of this principle. When a developer registers for an API key, these providers record which version of the API the developer is using, so that error messages and documentation pages reflect that version when the developer runs into problems or logs in to read the docs.
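A rough sketch of how version-aware error responses could work on the provider side, loosely inspired by this per-key version pinning. The version strings, docs URL and payload shape are all hypothetical:

```python
# Hypothetical set of API versions this provider still supports.
SUPPORTED_VERSIONS = {"2023-01-01", "2024-06-01"}

DOCS_BASE = "https://api.example.com/docs"  # hypothetical docs URL

def handle_error(api_version: str, error_code: str) -> dict:
    """Build an error payload that links to docs for the caller's version."""
    if api_version in SUPPORTED_VERSIONS:
        version = api_version
    else:
        # Fall back to the newest version if the pinned one is unknown.
        version = sorted(SUPPORTED_VERSIONS)[-1]
    return {
        "error": error_code,
        "api_version": version,
        "doc_url": f"{DOCS_BASE}/{version}/errors/{error_code}",
    }
```

Because the response carries the caller’s pinned version, the developer lands on documentation that matches the behavior they actually see.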
2. Graceful Degradation
Willmott says that, as an industry, we need to get better at helping developers understand errors. An important principle of software ethics, he says, is to return less data rather than none, and to use adaptive interfaces like GraphQL that can indicate what data and functionality are available.
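The “return less data rather than none” idea can be sketched as an endpoint that degrades gracefully when an optional backing service is down. The function and field names here are illustrative, not from the talk:

```python
def fetch_profile(user_id: str, recommendations_up: bool) -> dict:
    """Assemble a profile, noting which optional data was unavailable."""
    profile = {"user_id": user_id, "name": "Ada", "unavailable": []}
    if recommendations_up:
        profile["recommendations"] = ["item-1", "item-2"]
    else:
        # Degrade: omit the field but tell the client what is missing,
        # much as GraphQL can return partial data alongside an errors list.
        profile["unavailable"].append("recommendations")
    return profile
```

The client still gets the core profile either way, plus an explicit signal about what was unavailable, instead of a blanket failure.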
Willmott points to Netflix’s Chaos Monkey approach, which randomly introduces server failures into a system so that developers build more robust, resilient solutions able to withstand sudden shocks, and to iRobot’s use of subsumption architecture, as examples of the graceful degradation principle.
3. Radical Distribution
This software principle is a mainstay of today’s distributed, at scale application architecture, but Willmott argued that software creators should not just distribute data centers, but teams and resources as well. “As soon as you have enough capital, distributing data centers is something definitely worth doing,” Willmott said.
When distributing software tasks across teams, Willmott suggests creating the software product as a platform of microservices, connected via APIs, so that each individual team can work separately on their functionality independent of other teams. This requires common governance rules for all teams, Willmott adds.
He points to Fitbit as one example of radical distribution. Given the large number of Fitbit users, cloudbursting to scale up capacity would be the traditional way to manage an upgrade, which would typically produce five to six times peak data transfer as every customer’s hardware and software upgraded at the same time. Instead, Fitbit has introduced random rollout code that spreads upgrade syncs with the backend, so that globally these happen in batches rather than overloading the servers all at once.
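Randomized rollout scheduling of this kind can be sketched as each device picking a stable random slot inside the rollout window, which spreads backend load. The window size, device count and naming below are assumptions for illustration, not Fitbit’s actual implementation:

```python
import random

def rollout_slot(device_id: str, window_hours: int = 48) -> float:
    """Deterministically assign a device a random hour within the window."""
    rng = random.Random(device_id)  # seed by device so the slot is stable
    return rng.random() * window_hours  # in [0, window_hours)

# Load check: no single hour should see a large spike of devices.
slots = [rollout_slot(f"device-{i}") for i in range(10_000)]
per_hour = [0] * 48
for s in slots:
    per_hour[int(s)] += 1
```

With 10,000 devices spread over 48 hours, each hour sees roughly 200 syncs instead of one global spike, and a device always re-syncs in the same slot if it retries.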
4. Components as well as Solutions
The fourth principle of software ethics comes from a microservices approach, in which the software creator ships components, not just solutions. Willmott argues that when a tech company ships a completed software product, they “take a fair slice of added value out of the system.” By shipping components, software creators can co-create with customers. “If you ship components, everyone is more fundamentally involved in the value chain, whereas tech that turns users into pure consumers leeches out value from the market.”
5. Fearless Competence
Finally, Willmott adds that fearless competence should also be considered a software ethical principle. Would software creators be comfortable demoing their infrastructure or software to a CEO or a new customer right now? Channeling recent news of backups that went horribly awry, without actually naming them, Willmott suggested that if software creators are worried about how they would perform in an instant demo, or about what a look under their infrastructure’s hood would reveal, “they need to go out and figure out what those worries are. Someone has to step up and own it and understand the parameters under which your software will work.”
A Time for Software Ethics
Perhaps now is the time for more discussion of software ethics and the role that businesses and startups can play in designing technology that supports human society. In the past year, there has been growing worry that AI and predictive automation are displacing jobs faster than employment policy can adjust to the impact on our economies and on individual livelihoods. Fears of black-box algorithms that replicate systemic discrimination have also been raised. And the continued gender and racial inequality within tech companies and in startup investment has demonstrably created toxic workplaces where sexual harassment, homophobia and racism are ignored. The growing concern from within tech circles and from wider society will hopefully drive a greater recognition of the need to design software ethically, using principles such as those proposed by Willmott.