In February, I interviewed a well-respected researcher-physician about their forward-looking technology, which uses deep learning to find genetic mutations in cancer cells and pairs the findings with potential drugs to neutralize these tumors as efficiently as possible. It is a revolutionary technology, and the company behind it is highly praised and well funded. I was therefore utterly surprised when he asked me whether I could somehow convince everyday people that their technology is not evil. They mean well. This example is not as rare as you would think, which is why we decided to share some ideas on how to build more trust in AI.

What to do about techlash?

Of course, it was not the first time I had encountered negative attitudes towards futuristic-sounding innovations, but it was surprising to experience it first-hand. Attitudes towards high tech such as AI are shaped by all kinds of fears, many of them well founded, fed by the logic of Hollywood sci-fi movies, by media coverage, and by misunderstandings between the scientific community and the public. There is already a term for it: "techlash", a growing opposition to Big Tech companies and all the fears surrounding automated information systems, artificial intelligence, and machine learning. These attitudes have become prevalent among citizens in many countries.

How prevalent, you ask? To understand sentiments towards AI across the globe, the Oxford Commission on AI & Good Governance conducted a survey of more than 150,000 people. Respondents were asked whether "machines or robots, often known as artificial intelligence" will "mostly help or mostly harm people in the next 20 years". In which regions do you think people are most skeptical of AI? Let me surprise you: the US, Latin America, and the EU.

In Europe, 43% think AI will be harmful, while a somewhat smaller proportion, 38%, believe it will prove helpful. Skepticism runs high in European countries such as Spain, Portugal, and Greece, but it is highest in Belgium, where more than 50% of respondents expect the use of AI in decision-making to be mostly harmful. And recent high-tech headlines do not help either: we keep hearing about violations of privacy, failures of machine learning systems, persistent data bias, short-sighted technical systems, automated decision-making with "black boxes" that no one understands, and companies keeping their algorithms hidden.

It seems we have a lot of work to do. But how exactly could skepticism be mitigated and trust in AI increased?

It all comes down to control and trust in AI

In 2016, the UK government launched a citizen dialogue on the ethics of data science in government. It found that participants with a low baseline awareness of how data science works struggled to see the value of using computer analytics. The study emphasized the need to clearly articulate the implications, benefits, and risks of the methods applied and, most importantly, the need for information on the following:

  1. How humans and machines interact
  2. Who has control over the applied systems/methods
  3. Who bears responsibility for potential failures

These considerations are especially important when talking to the public about data science and AI. In the case of the algorithm providing potential treatment options for patients, for example, it needs to be clearly articulated that the algorithm has very 'deep but narrow knowledge' of an extremely specific problem, that it functions as a device in the hands of doctors, that physicians have complete control over its results, and finally, that the decision about the potential treatment is made by the ONCOTEAM together with other physicians and the patient. The algorithm is thus only one component in the entire treatment universe.

How to ensure even more control? Through risk assessment

However, that might not be enough to ensure algorithms are transparent, unbiased, ethical, and not short-sighted in their decisions. Instead of acting post facto, once a given algorithm is already on the market, researchers, companies, and regulators need to create certain "checks and balances" to ensure more "quality control" early on.

Such a control mechanism could be an early risk assessment, already in the research process. One of the latest developments in the research field is that certain journals require peer reviewers not only to concentrate on research design, methodology, and results, but also to consider the broader impacts, both positive and negative, of a given piece of research. This is already in place at the Association for Computing Machinery, for example. Moreover, the Association for Computational Linguistics and the Association for the Advancement of Artificial Intelligence have asked reviewers to consider the ethical impacts of submitted research. This falls in line with the latest developments in (bio)ethics as well, which require researchers to consider risks early in their research, not only post facto.

Moreover, some experts say that peer reviewers should require researchers to discuss meaningfully how the identified risks could be mitigated, perhaps through other technologies or suggestions for new policies. Although these recommendations are only taking shape in the AI world, the heated debate around the issue, for example in the European Union, is encouraging.

Following the circle of life – for AI systems

The next step, after an algorithm has passed the early risk assessment in the research phase, would be another 'checkpoint' before it is launched onto the market and the algorithmic solution is allowed to scale. Some experts suggest more regulatory oversight of what can be approved for use, and that would be more than welcome.

For example, the US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the FDA to preapprove the use of AI in sensitive areas such as healthcare, education, policing, justice, and the workplace. The FDA already approves certain algorithmic solutions in the healthcare sector, but those reviews concentrate strictly on existing regulatory checkpoints such as safety, usability, and effectiveness; the regulatory scope for AI systems could and should be widened in the coming months. Regulatory bodies should even consider criminal liability for those who deploy irresponsible AI systems.

To go even further, algorithms that companies and governments deploy in the above-mentioned sensitive areas should be subject to audit and comprehension by outside experts throughout their entire life cycle. Many experts say that regulators should be allowed to run experiments on companies' algorithms, testing for, say, systematic bias. Hand in hand with that goes the call for Big Tech companies to let outside experts get a peek into their algorithms and databases, making the entire field more transparent. But that, unfortunately, comes down not only to risk assessment but also to power, which will be the topic of another piece.

All in all, we believe that if all these measures were in place, it would be much easier to reassure our neighbors that data and AI will ultimately be their friends.

This is a condensed version of a conference talk given by Nora Rado at Women in Data Science (WiDS) Central and Eastern Europe 2021. For more on the conference, check it out here: https://www.youtube.com/watch?v=pW9qO4YKmNM