In 2018, you can guess the politics of many people by which newspaper they read, just as you could 100 years ago. Certainly some people, like me, read both the Wall Street Journal and the New York Times, but if someone defaults to MSNBC or Fox News, you can estimate their voting record with high accuracy.

But preferences don't mean much; getting people to act is what counts, and that is much more difficult. It is fine if people in California or Washington or Oregon say they are pro-vaccine, for example, but if they are denying their children vaccines, surveys are irrelevant. Getting people to act means making sure the right people see what you want them to see. As in most technological advances, marketing money leads the way. That's where Biased Algorithms come in.

Despite the negative connotation the word "biased" has, you might as well accept that all algorithms are biased. As Ralph Müller-Eiselt, who heads the Bertelsmann Foundation's task force on policy challenges and opportunities in a digitalized world, puts it: “Algorithms are as biased as the humans who designed or commissioned them with a certain intention. We should therefore spark an open debate about the goals of software systems with social impact.”
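To make that concrete, here is a minimal sketch, in Python, of how a designer's choices become an algorithm's bias. Everything in it is hypothetical (the sources, the weights, and the scoring function are mine for illustration, not any real feed's), but the point stands: the formula looks neutral, and yet a human picked every constant in it.

```python
# A hypothetical news-feed ranker. The formula looks neutral,
# but every constant in it is a human judgment call.

# Hand-assigned by the designer -- this table IS the bias.
SOURCE_CREDIBILITY = {
    "wsj.com": 0.9,
    "nytimes.com": 0.9,
    "example-partisan-blog.com": 0.2,
}

def score_story(source: str, hours_old: float, clicks: int) -> float:
    """Score a story for a hypothetical feed."""
    recency = 1.0 / (1.0 + hours_old)       # the designer chose this decay curve
    popularity = clicks ** 0.5              # the designer chose to dampen raw clicks
    credibility = SOURCE_CREDIBILITY.get(source, 0.5)  # the designer chose the default
    # The designer also chose these three weights.
    return 0.5 * recency + 0.3 * popularity + 0.2 * credibility

if __name__ == "__main__":
    stories = [
        ("wsj.com", 2.0, 100),
        ("example-partisan-blog.com", 0.5, 400),
    ]
    for source, age, clicks in stories:
        print(source, round(score_story(source, age, clicks), 3))
```

Swap the credibility table or nudge the weights and the same "neutral" feed tells a different story, which is exactly the open debate Müller-Eiselt wants to have.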

That's not postmodernist despondence; I hate that faux-skeptical, pseudo-philosophical garbage as much as the next person. But there is no algorithm without a creator. What about A.I., artificial intelligence? That isn't going to be better; even algorithms designed with the best of intentions can produce bad results.

Well, what if everyone is trying to bias algorithms in their own self-interest? Is that bad? Perhaps not; perhaps it is only bad if everyone thinks they are doing good while one group exploits that. Müller-Eiselt believes otherwise. He contends "transparent accountability" will save us, but government doesn't really have transparent accountability; it just has political aims that it makes transparent. Do you really want government designing algorithms for anything ethical?

He argues, "It is up to us to determine whether AI in education will be a catalyst for strengthening social equity – or for weakening it."

He's talking about education, which is even more terrifying than if he were talking about which brand of laundry pod you should buy. How can such algorithms be trustworthy if humans are flawed? Or if we insist they be engineered for social justice? We can't do preventive risk assessments through neutral third parties after having just declared that there are no neutral third parties.

There is one great catalyst for strengthening social equity, and it has turned more peasants into millionaires, and vice versa, than any other mechanism in history: capitalism. So it goes with algorithms. If we put someone in charge of biasing them, a special interest is going to control that process, be they right-wing politicians, left-wing lobbyists, or government grant committees. If we instead allow companies to create any algorithm they want, they will compete against each other. Facebook would probably be fine for news if it were a free-for-all, but instead they wanted to pick winners and losers based on the beliefs of Facebook executives, so they got duped by Russians into boosting one candidate at the expense of another. If everyone were competing for attention, without government or Facebook or Google trying to decide for us how to learn or what sources to read, yes, some people would be exploited, but most would quickly learn which algorithms to trust.

In 1918, people did not wake up each day and re-decide which newspaper to trust; they made their own choice based on experience. They knew which editor's algorithm they preferred. Sometimes people changed, sometimes editors did. In 2018 we shouldn't abdicate that choice to any algorithm unless all algorithms are allowed to compete.

If biased algorithms are everywhere, that is not a flaw; competing against each other is the only way they can be effective. It's when a biased algorithm is the only choice that we have a problem.