Artificial intelligence and academia: catch me if you can

A few years ago already, Cathy O’Neil’s book “Weapons of Math Destruction” opened fire on the growing dangers of decisions taken by computer algorithms. The ever-increasing capacity of computing power has made artificial intelligence and machine learning solutions widely accessible for many situations with clear societal relevance. The question is no longer whether too-fast-for-the-human-eye computers created (or powered) the financial crisis of 2008, but to what extent similar algorithms are creating discrimination and social havoc. With examples ranging from education to policing, and including access to health care, the author shows how deeply algorithms have penetrated our lives, and their negative impacts.

Academia is not known for being the fastest of human processes, but it eventually catches up with relevant societal discussions. So (finally) a major journal has paid attention to the matter. The latest issue of the Philosophical Transactions of the Royal Society is titled “The growing ubiquity of algorithms in society: implications, impacts and innovations”. Without having read the full articles yet, while I wait for the reprints to arrive, some questions come to mind.

For example, what could the author of “How should we regulate artificial intelligence” possibly mean when he says that “the current system (of regulation) can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made”? So… if a given AI package sold to a school leads to excluding students based on ethnicity-related characteristics… the problem will be solved once the makers of the software explain why it does what it does?

More positively, I am also curious about the concerns that the author of “Algorithmic accountability” raises regarding data monopolies and the challenge they pose to democracy, even though I wonder what can be added to the by now well-known examples from the last US election. Last but not least, I am very much looking forward to the reprint of “Is privacy privacy?”, where “a review of the landscape of normative and technical definitions of privacy (…) with examples of gaps between the definitions” is promised.

All the same, I keep the impression that the public (or academic) discussion here is still running behind the facts. It is true that research takes time and effort, and it is simply impossible, and not even desirable, to have scientists commenting on the fashion of the day; but it also seems to me that we need not reinvent the wheel in order to regulate every new technology. By now, enough cases of the misuse and abuse of data science have been documented. Do we really need to rewrite our legal frameworks? Is it not possible to judge an abuser of statistics as we would judge any other abuser? In the end, I don’t really care about the mechanisms of a crime, but about its effects.

But as I said, I’m waiting for the reprints… perhaps these somewhat skeptical lines will turn into something more positive. We’ll see.