With regard to medicine, researchers weigh the risks and benefits when AI is used to detect tumours in digital X-ray images, select treatment options for acute chest pain and draw conclusions from comprehensive records on people’s health. Key issues are reliability, transparency, representativeness, division of responsibilities – and trust. The aim is often to complement the strengths of human judgement with the powerful search capabilities of machines.
Transparency may increase the prospect of trust
An issue that interests Stefan Larsson is trust and people’s experiences. Trust is a key concept, not least in the healthcare sector.
“The whole approach is based on trust. The patient, who is usually in a very weak and vulnerable position, allows treatment and interventions which may be considered very intrusive and which are carried out by individuals who, because of their profession, are in a strong position”, says Stefan Larsson.
Is trust jeopardised when the tools gradually become even more difficult for ordinary people to assess? A lot of research is taking place in this area. According to Stefan Larsson, not every patient needs to understand the rationale behind each computer-generated recommendation, but exactly how information and decision-making are balanced is a key issue for the trustworthy use of AI.
“Transparency is an element to understand better in relation to trust. We are trying to understand which part of the process is most important in terms of transparency, including explainability of the reasoning behind an individual decision or accountability in the whole system.”
One risk is that the approach becomes skewed, putting certain groups at a disadvantage...
Another recurring issue in these and other projects is bias, or "social bias" as it is sometimes referred to. Awareness of the risk of systematic distortion in applied AI systems and machine learning is relatively new, according to Stefan Larsson.
Just four years ago, an American study concluded that commercial facial recognition software was considerably more accurate for the face of a white man than for that of a dark-skinned woman.
Another study has shown that software for detecting skin cancer works better on light skin than on dark skin. A third study has shown how a risk assessment algorithm used by US courts systematically and wrongly assessed the risk of recidivism to be higher among African Americans than among white Americans.
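The kind of disparity these studies report is typically surfaced by measuring a system's accuracy separately for each demographic group. The sketch below illustrates such a per-group audit; the function name, group labels and evaluation records are entirely hypothetical and are not data from the cited studies.

```python
# Illustrative sketch (hypothetical data): auditing a classifier's
# accuracy per demographic group, the kind of gap the facial
# recognition and skin cancer studies above reported.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += int(truth == pred)
    # Fraction of correct predictions within each group.
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation results, invented for illustration.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]
print(per_group_accuracy(records))  # group_a: 1.0, group_b: 0.5
```

A large gap between groups, as in this toy example, is exactly the signal that prompts calls for more representative training data.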
“The policy response to these types of problems is often to call for more representative data for all types of groups. Most recently, this was seen in the EU Commission’s proposal for an AI act and in recommendations from the WHO. But the question remains whether that actually solves the problem, or whether it may create new challenges.”
…On the other hand, there is a risk that properly programmed AI reinforces injustices
As computer programs become ever better at reflecting reality, the next problem arises: do we want to reproduce biases in society, even when they happen to be accurate? Do we want advertisements for highly-paid jobs to be targeted at men, who more frequently search for such jobs and have higher wages? The consequence of a faithful reflection of the state of affairs is that AI not only mirrors injustices, but even risks reinforcing them.
“This issue has a different character. Social structures may be the source of the problem here. It raises a more normative issue that does not necessarily have a technical or optimised solution within reach”.
At the same time, awareness has grown that cultural and social aspects need to be incorporated at an early stage.
“It is no longer sufficient to realise in the final phase that it would have been good to have an ethicist in the project. It appears that a multidisciplinary approach is required in order to build good AI products.”
Translation from Swedish: Shawana Badat
Version in Swedish at lu.se