Why we need state control and quality checks for algorithms

By Anja Meunier, translated by Ivan Danević


Photo: Anja Meunier

There is hardly a buzzword as ubiquitous at the moment as “Artificial Intelligence” (A.I.). According to a Boston Consulting Group study, nine out of ten companies intend to integrate A.I.-based solutions into their business strategies within the next three years. So do we need state regulation of algorithms? Treffpunkt Europa editorial team member Anja Meunier is convinced: absolutely!

Artificial Intelligence – for some the epitome of progress, for others a dystopian nightmare. Those who associate the word only with humanoid robots should be aware that Artificial Intelligence is far more than that, and that it has already entered the most intimate areas of our lives. More and more decisions are made by algorithms and calculation models instead of by human beings. Credit scores, job application filters and insurers’ risk assessments are a few examples. What do these algorithms have in common? They make decisions with potentially far-reaching consequences for individuals, they affect many people – and yet they are secret.

When algorithms make decisions

This can have serious consequences, as credit scores show. Errors occur again and again, and it is then extremely difficult for those affected to have their records corrected. How an algorithm was written, why it makes a particular decision, and how mistakes can be rectified – all of this can barely be answered, because these algorithms are usually protected as business secrets.

Most algorithms today are no longer programmed within a strict, rules-based framework; instead, they learn rules themselves on the basis of data – this is called machine learning. As a result, it is not always easy to trace which data was used and how a decision was reached – not even for the programmers themselves. Flawed data becomes a serious problem once it feeds into an algorithm’s training. It becomes even worse when structural biases in the data disadvantage certain groups of people.
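To make this concrete, here is a minimal sketch – my own illustration, not an example from the article – of how a model trained on biased historical records absorbs that bias. All variable names and numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical hiring" records: skill is what should matter...
skill = rng.normal(size=n)
# ...but past decisions also favoured one group (gender == 1).
gender = rng.integers(0, 2, size=n)
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 1.0

# Train on the raw historical data, gender included as a feature.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The learned weight on gender is large and positive: the model has
# encoded the historical preference rather than eliminated it.
print("weight on skill: ", model.coef_[0, 0])
print("weight on gender:", model.coef_[0, 1])
```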

It is therefore all the more important to guard against such bias already when collecting the underlying data, before the calculation models are even developed. The great difference between human and machine bias is that we perceive decisions taken by a machine as neutral and objective. Yet we often fail to reflect on where the respective data comes from, how it is gathered and used, and how conclusive the resulting models really are.

A good example is business sectors with gender imbalances, such as many engineering jobs, where the data itself is strongly skewed. If an algorithm learns from this data without any correction, human prejudices and injustices are built into the – supposedly unbiased – algorithms, and are thereby cemented rather than eliminated. An algorithm could, for instance, learn that ‘men are better suited to these jobs than women’. There are ways to avoid this beforehand and to review algorithms retrospectively, as the sketch below illustrates. But as long as an algorithm is not accessible to the public, applicants and consumer advocates have no means of checking the quality of the calculation models used.
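One simple retrospective check – again a hedged sketch of my own, not a procedure named in the article – is to compare a model’s selection rates across groups, sometimes called demographic parity; a ratio far below 1 (the ‘four-fifths rule’ uses 0.8 as a rough threshold) is a red flag. `predictions` and `group` stand in for data an auditor would obtain.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive decisions (1 = accepted) per group."""
    return {int(g): float(predictions[group == g].mean())
            for g in np.unique(group)}

def disparate_impact(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest selection rate."""
    rates = selection_rates(predictions, group).values()
    return min(rates) / max(rates)

# Invented example: decisions for ten applicants from two groups.
predictions = np.array([1, 1, 1, 1, 0, 0, 0, 1, 1, 0])
group       = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(selection_rates(predictions, group))   # group 0: 0.8, group 1: 0.4
print(disparate_impact(predictions, group))  # 0.5 -> clear disparity
```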

In addition, the predictions made by machine-learning models are often less accurate than they seem at first glance. Statements such as “Our A.I. finds the most suitable candidates with a 90% accuracy rate” may sound impressive. But is 90% good enough to decide on someone’s future? It means that roughly one in ten people is misjudged – and potentially unjustly excluded – from the very beginning. People with unusual CVs – those who stand out from the crowd and could make a company more diverse and, according to recent studies, thereby more successful – do not conform to the average and are therefore not identified as “ideal candidates”.
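A toy calculation – with numbers invented by me – shows why a headline accuracy figure can hide exactly this problem: when suitable candidates are rare, a model can be 90% accurate overall while still rejecting most of them.

```python
# 1,000 applicants, of whom 100 are truly suitable.
suitable, unsuitable = 100, 900

# Hypothetical model: it finds only 40 of the 100 suitable people,
# and wrongly accepts 40 of the 900 unsuitable ones.
true_pos, false_neg = 40, 60
true_neg, false_pos = 860, 40

accuracy = (true_pos + true_neg) / (suitable + unsuitable)
missed = false_neg / suitable

print(f"overall accuracy: {accuracy:.0%}")            # 90%
print(f"suitable candidates rejected: {missed:.0%}")  # 60%
```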

But there’s always a human behind it, isn’t there?

A common counter-argument is that the really important decisions are still taken by a human. But which options are presented to this person? While a recruiter today might still invite one or two unconventional candidates to an interview, in the future a machine could eliminate them in the very first round.

For companies this can mean less diversity – and for the individuals affected, it can have serious consequences. People with immigrant backgrounds, for example, still face discrimination, including structural discrimination. What would happen if, in the future, these people had not merely fewer but virtually no chances at all, because a widely used algorithm disqualified them automatically? In one respect machine bias is even graver than human bias: a machine will never be positively surprised.

With algorithms being applied in ever more places, a scenario in which only people who are average in every respect get a job, health insurance, a bank loan or an apartment does not seem very distant. All of it, of course, ‘neutral’ and based on data.

The widespread application of algorithms in all areas of our lives will happen… and in part it is already reality. But the future does not have to be bleak. The argument that algorithms are free from prejudice is not entirely invalid – provided that data is chosen carefully and that software development follows ethical guidelines.

And this is where we need to establish sound oversight of such algorithms – similar to safety checks for food or electrical devices. Inspection authorities could develop clear anti-discrimination guidelines and conduct stress tests to check companies’ compliance. Business secrets could be preserved, while consumers could still trust in the fairness of these systems. This would admittedly mean state intervention in the market and probably considerable effort, but policy-makers should not shy away from such measures. After all, our anti-discrimination laws should still mean something in the future.
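What might such a stress test look like? One conceivable approach – a sketch under my own assumptions, not a procedure from the article – is black-box paired testing: the auditor submits pairs of applications that are identical except for a protected attribute and compares acceptance rates, without ever seeing the company’s code. `score_applicant` below is a deliberately biased stand-in for the secret system.

```python
import random

def score_applicant(applicant: dict) -> bool:
    """Placeholder for the company's opaque decision system
    (deliberately biased, so the test has something to find)."""
    score = applicant["years_experience"] * 0.1
    if applicant["gender"] == "m":
        score += 0.2  # hidden bias
    return score + random.random() * 0.1 > 0.5

def paired_test(n_pairs: int = 10_000) -> tuple[float, float]:
    """Acceptance rates for otherwise identical male/female applications."""
    accept_m = accept_f = 0
    for _ in range(n_pairs):
        cv = {"years_experience": random.randint(0, 10)}
        accept_m += score_applicant({**cv, "gender": "m"})
        accept_f += score_applicant({**cv, "gender": "f"})
    return accept_m / n_pairs, accept_f / n_pairs

rate_m, rate_f = paired_test()
print(f"acceptance rate (m): {rate_m:.1%}")
print(f"acceptance rate (f): {rate_f:.1%}")  # a large gap flags discrimination
```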
