Why should companies care about technology ethics?

Summary

  • Embedding ethics in tech systems helps improve, or mitigate challenges to, safety, equity, fairness, representation, inclusion and more.

  • Technology has the potential to improve humanity, but when tech does the opposite, we need to understand why it happened — and how to intervene. 

  • Building purpose-led tech requires embedding the foundational values of an organization in decision-making.

Recent crises have led to at least one positive result: a renewed desire to improve ethical behavior by individuals, businesses and government agencies, including by embedding ethics in technology.

Whether building or buying technology, it’s critical that companies choose their systems, devices and applications based on how they help, rather than hurt, humanity. Ethical behavior also involves educating employees about how their use of technology — including artificial intelligence (AI), machine learning, cloud, blockchain, etc. — may negatively impact other people, communities or the environment.

Embedding ethics in tech systems is crucial to improving or mitigating any challenges to safety, equity, fairness, representation, inclusion and sustainability. To build trust, companies should be aware of technology’s challenges, including:

  • The potential to displace people from their jobs, while failing to provide meaningful opportunities for them to learn new skills

  • Instances where technology unintentionally demeans or harms a group of individuals

  • Situations in which biased tech causes people to be denied access to basic needs and meaningful participation in the economy, services or communities

  • Cases in which data-hungry practices are fed by ghost labor (people who do task-based, content-driven work on a contractual basis, frequently for low pay)

Technology is often presented as a tool with the potential to improve humanity. But when tech does the opposite, we need to understand why it happened — and to intervene if possible. 

So when technology does go astray, should businesses and governments accept responsibility and accountability for harmful results, such as bias and inequity? Even more important, can they undo any harm that the tech may have caused?

For example, what can a company do if it inadvertently uses biased AI that denies a qualified individual a place at a university, a promotion at work or a mortgage to buy a house? What can governments do if a police department arrests the wrong person based on biased facial recognition? What can healthcare providers do if a medical practitioner gives a patient the wrong medication based on inaccurate prescription data in that patient’s digital chart? How should companies respond if consumers point out that using predominantly female voices in chatbots reinforces a perception of women as subservient?

Embedding a company’s values in decision-making

Building purpose-led tech requires more than just mitigating potential issues. It necessitates embedding the foundational values of an organization in decision-making. This could involve using corporate values to guide which technologies the organization invests in, explores and consumes.

That requires aligning governance and executive stakeholder support with initiatives that historically might have been central only to corporate social responsibility teams. It also means making values a key part of the technology decision-making process — not just an afterthought.

This quandary boils down to one question: How can organizations manage tech responsibly?

The answer: by intentionally weighing a technology’s potential social impact, alongside its potential financial and operational impacts, when deciding whether to invest.

Here’s the first step. Before deploying any device, software or system — whether AI, cloud, blockchain, augmented reality, etc. — a business should define a set of ethical criteria it can use to evaluate that technology. These criteria should include the organization’s values, as well as global human rights considerations.

Businesses should also ask specific questions about what data will be used to design a particular piece of technology, what data the tech will consume, how it will be maintained and what impact this technology will have on others.

It is important to consider not just the users, but also anyone else who could potentially be impacted by the technology. Can we determine how individuals, communities and environments might be negatively affected? What metrics can be tracked?

Businesses should also consider potential security and privacy impacts, as well as challenges to governance and maintenance. Do employees know how to operate the technology? Can professionals detect when the tech goes awry — either due to performance issues or to intentional subversions? Are employees aware of how this technology could be misused? 

Understanding these issues and assessing them based on a company’s ethical criteria can help alleviate these challenges — or, at least, make the trade-offs more visible before new technology enters an organization.
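The assessment described above can be sketched as a simple scored checklist. This is a minimal illustration, not PwC’s methodology: the criteria names, review questions, 0–5 scoring scale and threshold are all hypothetical examples, and a real checklist would come from an organization’s own values and human rights commitments.

```python
# Hypothetical ethical criteria, each phrased as a question reviewers
# score from 0 (serious concern) to 5 (no concern).
CRITERIA = {
    "fairness": "Could the system deny groups access to services or opportunities?",
    "privacy": "Does it collect more personal data than it needs?",
    "labor": "Does it depend on low-paid, task-based ghost labor?",
    "security": "Can staff detect when it malfunctions or is intentionally subverted?",
}

def unmet_criteria(scores: dict, threshold: int = 3) -> list:
    """Return the criteria scoring below the threshold — the trade-offs
    that should be made visible before the technology is adopted."""
    missing = [name for name in CRITERIA if name not in scores]
    if missing:
        # Force reviewers to score every criterion, not just easy ones.
        raise ValueError(f"unscored criteria: {missing}")
    return [name for name, score in scores.items() if score < threshold]

# Example review of a proposed chatbot deployment (scores are illustrative).
review = {"fairness": 2, "privacy": 4, "labor": 5, "security": 3}
print(unmet_criteria(review))  # "fairness" would be flagged for review
```

The point of the sketch is the process, not the code: every criterion must be scored, and anything below the bar is surfaced as an explicit trade-off rather than discovered after deployment.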

Best practices to achieve purpose-led tech

These guidelines can help you deploy purpose-led technologies that can benefit your company, your stakeholders and the planet:

1. Design ethical criteria that can be used to evaluate technology. These criteria should include the organization’s purpose and values, as well as global human rights considerations.

2. Reinforce a corporate culture that is transparent, rooted in ethical technology, and attentive to the interests and welfare of stakeholders, not just shareholders. This directive should be led and modeled from the top.

3. Adopt practices that allow staff to choose technology that’s not only good for the business, but also supports ethical decision-making, improves the user experience, and protects personal data and privacy.

4. Review and update your supply chain to mitigate risks and find new sources of value.

5. Explain to the C-suite and board of directors why values are essential to investments and financial decisions. Engage with your board and executives so they understand how your technology strategy supports both short- and long-term business goals, and design change management programs with executive sponsorship to emphasize their importance.

6. Upskill employees on new technologies and their impact to strengthen worker trust and help reduce or eliminate fears about tech.

7. Empower employees to speak up and identify opportunities for further improvements. Provide meaningful channels to hear employee perspectives and concerns and treat their input and suggestions with respect. 

How we use technology and data will determine their impact on our businesses and on humanity. Corporate and government leaders have a responsibility to ensure that their technology is used for the good of society — not just to attain higher profits or a stronger stock price. Being purpose-led — and using purpose-led tech — is an ongoing exercise that will require a culture change if it is to be implemented effectively long term.

A key takeaway: Evaluate all your technologies and give purpose-led tech the broad support it needs to succeed.

PwC’s Responsible AI

AI is bringing limitless potential to push us forward as a society — but with potential comes risks.


Contact us

Maria Luciana Axente

Responsible AI Lead, PwC United Kingdom


Ilana Golbin

Director and Responsible AI Lead, PwC United States

