Digital Ethics – The Next Catalyst for Trust in Technology


Writing down codes of ethics on paper isn’t necessarily enough to change behaviour. As the MIT Technology Review points out, establishing ethical standards does not on its own change employee outcomes. It cites a study by North Carolina State University which found that asking software engineers to read a code of ethics does nothing to change their behaviour, whereas learning from past mistakes does.

To achieve digital ethics and responsible technology, the industry will need to develop a system that incorporates a mix of culture, investment, regulation and education. By education I mean that, in the era of digital ethics, tech will suffer more than ever from a lack of the humanities. Mozilla’s head Mitchell Baker describes this well:

“But one thing that’s happened in 2018 is that we’ve looked at the platforms, and the thinking behind the platforms, and the lack of focus on impact or result. It crystallised for me that if we have STEM education without the humanities, or without ethics, or without understanding human behaviour, then we are intentionally building the next generation of technologists who have not even the framework or the education or vocabulary to think about the relationship of STEM to society or humans or life.”

In an increasingly complex environment of misinformation, data breaches and bias, digital ethics will guide the right people to ask the right questions at the right time. An ethics approach can eliminate confusion and pinpoint disagreements or conflicts of interest. Most importantly, it helps us to value the ‘other’ – other viewpoints, other people, other communities that are impacted by the global disruption of the technology industry.

This doesn’t just mean brushing up on our moral philosophy with Kant, Confucius and Aristotle. It means hiring the best people for the kinds of challenges we have not faced in the past two decades, from the new philosophy MA graduate to the next Chief Ethics Officer. It means using established codes such as international human rights law to help guide AI systems. It means evaluating whether we have the right systems in place so we’re ready to pick up the pieces when tech moves fast and breaks things – or perhaps, to ensure it breaks fewer things.

