
Responsible Tech in a World of Continuous Delivery

The right questions for what responsible tech means in your next sprint, when you’re developing a new project, and when you’re rushing to release that new feature.
May 29th, 2020 7:53am

To paraphrase Ferris Bueller: Tech moves pretty fast. If you don’t stop and look around once in a while, you could really mess it up.

Tech ethics, AI ethics: They make for compelling hashtags, but they’re esoteric concepts that most of us haven’t thought about since freshman gen-ed classes.

We work in a world that pushes for continuous delivery and automation. But we don’t pause often enough to consider when and where a brake is needed. That’s a serious mistake. Building irresponsible tech risks losing your employees, your customers and your reputation.

The New Stack is made up of observers and participants in the tech industry. In this feature, we don’t pretend to draw the lines or know the answers. But we talk to the people who help you ask the right questions for what responsible tech means in your next sprint, when you’re developing a new project, and when you’re rushing to release that new feature.

How Do You Define Responsible Innovation?

Responsible tech think tank Doteveryone ran a survey of over a thousand tech workers in the UK called “People, Power and Technology: The Tech Workers’ View.” The results were clear: tech employees are fed up with the “move fast and break things” mindset. It’s not just a desire for a more considerate process. The survey’s findings started to explain what’s pushing a small subset of tech workers to quit their jobs over ethical conflicts, including at places like Amazon and Google.

The survey’s main discoveries were that workers:

  • Need the guidance and skills to help navigate new dilemmas.
  • Have an appetite for more responsible leadership.
  • Want clear government regulation so they can innovate with awareness.

Eighty percent of respondents “believe companies have a responsibility to society to ensure their technologies don’t have negative consequences for people and society.” About a quarter of these people in tech had already experienced decisions that they thought would have negative consequences on society, while about 60 percent of those working in artificial intelligence (AI) did.

Between 18% and 26% of respondents had already left a job after foreseeing a negative outcome of what they were building.

But in order to define strategies to build responsibly, you first have to define your objective.

“As it becomes more and more difficult to opt out of technologically mediated life, the ethics of how to do right with technology become more and more urgent.” — Dr. Caitlin McDonald, Leading Edge Forum

Sam Brown, former program manager at Doteveryone and now co-founder of Consequential Community Interest Company, says responsible tech starts with your big picture and how you are codifying responsibility into your products.

She says this should start where most successful businesses start, with “exploring your ecosystem and building your bigger picture — the intention of the organization. Articulating what the organization is trying to achieve, and then developing organizational values and principles about how people should behave in executing that intention is the starting point.”

She continued, “Because if you don’t take the time to build your vision and understand your stakeholders and ecosystem, then you’re more likely to develop something that isn’t in line with your big picture and doesn’t respect the communities you are a part of.”

Brown says the starting step that often gets missed is extending those behaviors and values into the design of a technology product.

She said, “Responsible product principles should be developed based on an organization’s core principles, clearly state the intention of the product, and continuously question who might be benefiting or who might not be. The role of these principles should be to give everyone in the organization a shared understanding and collective responsibility for consequences.”

After all, legal responsibility is something not all developers are yet considering. For Brown, it’s more about responsibility than ethics. Responsibility is about considering the consequences of technology — which can be intended or unintended. Then asking: What does that mean on a day-to-day basis, and how are we taking responsibility for those consequences?

She continued that responsible tech is continuous stakeholder mapping — who are the people and systems that are going to be affected by your technology decisions?

“Responsible tech starts with your big picture and how you are codifying responsibility into your products.” — Sam Brown, Doteveryone

Dr. Caitlin McDonald, digital anthropologist at the Leading Edge Forum, echoes this by writing: “In a truly ethical framework, all stakeholders have mechanisms for influencing decisions about how they are acted upon.”

These stakeholders must share in an understanding of some sort of guiding principles.

“I find often that teams don’t really know where the locus of responsibility resides when creating tech that’s ethical,” McDonald said.

She sees ethics often lumped under compliance and legal matters. She continued that responsibility can only be achieved with collective accountability.

“Without accountability, you can build beautiful vision statements about how responsible or ethical your technology will be, but you have no means for actualizing that vision by allowing stakeholders to influence or correct the technology that impacts them,” McDonald said.

She continued that “Introducing mechanisms for accountability is what lifts ethical frameworks from blueprints into real systems — a truly responsible act.”

The figure included below from her position paper on sustainable digital ethics further illustrates how accountability can’t exist without the intersection of fairness, transparency and explainability — things that more and more AI projects seem to severely lack.

[Figure from McDonald’s position paper: a Greek temple facade whose two pillars are labeled Fairness and Explainable, framing Accountable and Transparent within.]

Finally, Virginia Dignum, professor of AI at Umeå University in Sweden and author of the book “Responsible AI,” considers all of this through an AI lens, but she thinks that perspective is quite transferable.

Responsible tech is “systems which are obviously developed to be aligned with legal and moral principles that are robust. And are built in a way that will do what it says on the box,” she said.

“And all of these should be verifiable. Audit, test, evaluate and verify the systems to confirm for ourselves that they do what we expect them to do.”

Dignum continued that responsible tech and these mechanisms should provoke trust in a brand and product.

“Responsibility is not only from the part of the developers and policy makers. As users and as citizens, we also have our responsibility. And part of that is your voice to demand responsibility from others. And none of us is doing that enough. We are too easy to accept what is thrown at us.” — Virginia Dignum, Umeå University

Quantifying Responsibility in COVID-19 Tracing Apps

Of course, when asked, she admitted that these governance mechanisms are rarely in place.

She said that her typical audience of AI software developers is usually meeting the technical requirements of a product. The legal ones, too.

“Unfortunately the ethical and verification requirements are not often taken into account, nor are they explicit when they are,” Dignum said.

She continued that the vast majority of organizations are “not explicit that things are designed to a set of values which are verifiable and explainable enough.”

Last week, a paper she co-authored was published on establishing a “socio-technical framework for digital contact tracing.” The authors passed three COVID-19 contact tracing apps, plus the European Commission’s recommendations, through the same assessment criteria, which draw on guidance coming down from the commission, letters from ethical and scientific boards to governments, and public sentiment as heard through the media.

These are organized around evaluating three app considerations:

  1. The Impact on Citizens: This is a blend of safety, health, non-discrimination, accessibility, inclusion and freedom of association. It also covers things like data use and control, and preventing stigmatization of those infected.
  2. Technology: An emphasis on interoperability, security and privacy, and data minimization. It also recommends open-source code and methods, but without open contribution.
  3. Governance: State ownership when possible. It’s voluntary to use. And it has a clear sunset clause — because how often do we delete apps from our phones?
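
To make the rubric concrete, here is a minimal sketch of how a team might encode the three considerations as a checklist and compute a rough compliance score for an app. The criterion names, the pass/fail answers and the simple averaging are assumptions for illustration, not the scoring method used in the paper.

    # Illustrative only: the criterion names and pass/fail scoring below are
    # assumptions, not the assessment method used in Dignum's paper.
    CRITERIA = {
        "impact_on_citizens": [
            "protects safety and health",
            "non-discriminatory, accessible and inclusive",
            "users keep control of their data",
            "avoids stigmatizing those infected",
        ],
        "technology": [
            "interoperable",
            "secure and privacy-preserving",
            "practices data minimization",
            "open-source code and methods",
        ],
        "governance": [
            "state ownership where possible",
            "voluntary to use",
            "clear sunset clause",
        ],
    }

    def compliance_score(answers: dict) -> float:
        """Return the fraction of criteria an app satisfies, between 0.0 and 1.0."""
        total = sum(len(items) for items in CRITERIA.values())
        met = sum(
            1
            for group, items in CRITERIA.items()
            for item in items
            if answers.get(group, {}).get(item, False)
        )
        return met / total

    # Example: an app that only satisfies the technology criteria.
    app = {"technology": {item: True for item in CRITERIA["technology"]}}
    print(f"{compliance_score(app):.0%}")  # prints 36%, roughly a third of the criteria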

After reviewing the three national tracing apps against this framework, Dignum said, “They are all quite bad.”

[Table: the contact tracing initiatives the academic team evaluated and their compliance scores.]

These apps and the EDPB guidelines were chosen as examples of how the framework can be applied to others.

As the paper puts it: “The COVID-19 pandemic is revealing two conflicting perspectives: governments need sufficient epidemiological information to manage the pandemic, whereas citizens, while wanting safety, are concerned about privacy, discrimination, and personal data protection. In order to ensure that the goals from both perspectives are achieved, transparency regarding the problems associated with collection and processing of personal data is essential.”

While this socio-technical responsibility framework is useful for the apps in the current pandemic, a lot of these considerations can and should be applied to all application development, especially those collecting personal and geo-locational data or anything AI.

In all responsible tech, transparency is key.

“The responsibility is not so much to be perfect from the first moment but it’s to be willing to measure how you are doing and to be willing to take steps to improve.” — Virginia Dignum, Umeå University

One of the biggest gaps in the tech industry is still around governance. Not just governance within an organization, but how the public looks into and can limit the power of the corporations residing in one of the most intimate parts of our lives — our phones. After all, as Dignum pointed out, a lot of the privacy regulations out there have been in place since the Second World War. Similarly, post-9/11 airport security regulations don’t look like they’ll change or disappear any time soon.

She said, in order to fight a pandemic, we are willing to give up our privacy now to help, but we only want to do that temporarily.

Agile Practice of Consequence Scanning

Last year, Doteveryone released a manual for consequence scanning, an agile practice for responsible innovators. It’s about making sure that an organization’s products or services are aligned with its culture and values, but it’s also about reflecting on three different areas:

  • What are the intended and unintended consequences of this product or feature?
  • What are the positive consequences we want to focus on?
  • What are the consequences we want to mitigate?

It’s popular for organizations to have voluntary ethics teams that sometimes meet. Allowing employees to self-organize around themes they’re excited about is good, but it’s not enough. Consequence scanning is meant to be done in product teams and should include anyone involved in the day-to-day making of the product, anyone closer to the end-users, and any senior stakeholders, as well as any experts in security, infrastructure, risk, and compliance working on the product.

Each review should last no more than 45 minutes. Begin with your company’s vision, mission and values, and then brainstorm, dot-vote to prioritize, and discuss the prospective product or feature and its intended and unintended consequences.

Doteveryone identifies six commonly unintended — but not necessarily negative — consequences of digital technologies to consider:

  • An imbalance of benefits — causing a greater digital divide.
  • Changing norms and behaviors of users — how generations interact with each other, parent and date very differently.
  • Unforeseen uses — e.g., the “like” thumbs-up becoming an industry standard.
  • Environmental impact — e.g., the energy that goes into building and using your product, versus AI helping measure impact.
  • Displacements and societal shifts — e.g., loss of jobs, online religious groups, and changes in how people work.
  • Erosion of trust — security policies and preventing data breaches, covering not just your own org’s transparency but that of your suppliers.

Sort each consequence into one of three categories:

  • Act — consequences within the participants’ control to act upon.
  • Influence — consequences outside your control whose outcome you can still influence.
  • Monitor — consequences completely out of your control that could still affect your product, so you need to keep watching them.

Make sure to determine the scale of impact of each consequence, and log everything to ensure follow-up.
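
One lightweight way to keep that log is to record each consequence with its category, impact scale and an owner so it can be revisited in later sprints. The structure below is a hypothetical sketch, not something prescribed by Doteveryone’s manual.

    # Hypothetical log entry for a consequence-scanning session; the field names
    # and the 1-5 impact scale are illustrative, not from Doteveryone's manual.
    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class Category(Enum):
        ACT = "act"              # within the participants' control to act upon
        INFLUENCE = "influence"  # outside their control, but the outcome can be influenced
        MONITOR = "monitor"      # outside their control, but could affect the product

    @dataclass
    class Consequence:
        description: str
        intended: bool       # intended vs. unintended consequence
        positive: bool       # something to build on vs. something to mitigate
        category: Category
        impact_scale: int    # assumed scale: 1 (minor) to 5 (severe)
        owner: str           # who follows up on this item
        logged_on: date = field(default_factory=date.today)

    # Example entry captured during a session:
    entry = Consequence(
        description="The new feed ranking may change how users interact with each other",
        intended=False,
        positive=False,
        category=Category.INFLUENCE,
        impact_scale=3,
        owner="product team",
    )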

Hold these sessions more frequently when you’re moving and iterating fast at the vision stage of your product idea. Then, as your tech and user base become clearer, you can hold them less often at the roadmap stage. Brown says consequence scanning can happen within your actual sprint cycle or whenever you’re introducing something new, like deciding whether to build a new feature. Then you prioritize what you want to mitigate.

Earlier this year, after backlash for doing business with U.S. immigration authorities, Salesforce decided to hire a chief ethical and humane use officer to help guide the company in making decisions about “complicated political issues.” The company now uses consequence scanning for ethical analyses, often flagged by employees. This practice sparked, among other things, a corporate policy that prohibits customers from using Salesforce software to sell military-style weapons to private citizens. Salesforce has also created “protected fields” within its AI products that can exclude data like race from biasing results.
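
As a rough illustration of what a “protected field” mechanism does, the sketch below drops sensitive attributes from records before they ever reach a model. The field list and the drop-before-training approach are assumptions for illustration, not Salesforce’s actual implementation.

    # Illustrative sketch of excluding protected fields before model training;
    # the field list and approach are assumptions, not Salesforce's implementation.
    PROTECTED_FIELDS = {"race", "gender", "age", "religion"}

    def strip_protected(records: list[dict]) -> list[dict]:
        """Remove protected attributes so they cannot bias downstream predictions."""
        return [
            {key: value for key, value in record.items() if key not in PROTECTED_FIELDS}
            for record in records
        ]

    rows = [{"name": "A. Patel", "race": "redacted", "purchase_count": 4}]
    print(strip_protected(rows))  # [{'name': 'A. Patel', 'purchase_count': 4}]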

Feature image by Siggy Nowak from Pixabay.
