Two weeks ago Visa hosted the Early Tech Career Network event "Data Ethics and AI", which offered a fascinating insight into how businesses, researchers and policy advisors see AI being shaped by ethics.

Jessica Lennard, Director of External Affairs at Visa, kicked off the evening by placing ethics in the context of business. It has long been in the interest of a business to be seen on the right side of ethical lines. So how do businesses do that? Opinions on ethics can vary wildly: you may well have a different opinion from the people you share a home or an office with. Put this on a global scale and the potential differences of opinion multiply rapidly. Further, ethics is a shifting concept: what was acceptable just 10 years ago is not necessarily acceptable now.

The fast-paced nature of technology makes this a particularly interesting area in which to consider ethics, as does the level of public interest. The industry is certainly engaging with ethics, but there does not yet seem to be a consensus on approach. For example, last year Microsoft set up the AI and Ethics in Engineering and Research (AETHER) Committee, and earlier this year Microsoft President Brad Smith met with Pope Francis to discuss, among other matters, the ethical use of artificial intelligence. Also this year, Facebook partnered with the Technical University of Munich to support the creation of an independent AI ethics research centre: the Institute for Ethics in Artificial Intelligence. The industry may not have reached a consensus, but it appears to be listening.

Yet research from the Ada Lovelace Institute shows that the public lacks trust in how technology is used. Facial recognition technology in particular has been in the spotlight of late, with the recent case against South Wales Police claiming that automated facial recognition technology was used in breach of human rights. The High Court ruled in favour of South Wales Police, though civil rights group Liberty has said its client would appeal the ruling.

In a study on facial recognition technology, the Ada Lovelace Institute found that the public does not trust the private sector with the technology. Similarly, when asked about the possibility of facial recognition being used for attendance monitoring in schools, respondents were not positive. Conversely, respondents were largely happy to use facial recognition technology to unlock their phones. Perhaps users are happier with a technology when they feel they are in control of how it is being used. And perhaps people are less concerned about the technology itself than about the people using it. This comes back to the issue of trust.

One way we can help create a culture of trust is to have clear standards. As Maria Axente, Artificial Intelligence Programme Driver and AI for Good Lead at PwC, noted, robotics and artificial intelligence do not naturally fit into our existing systems for identifying good and bad, so a new set of standards needs to be established. Mistakes, like those made by the New Orleans police department, which used biased data to train AI, can teach us a lot about what such standards would need to address. And it is not just private businesses doing research: independent research centres have been set up, including the Centre for Data Ethics & Innovation, which advises on government policy, as have not-for-profits like the Institute of AI, which collaborates with legislators around the world to share best business practices. Communication between these thinkers, like that at the ETCN event, is vital to developing a new system of ethics and AI.