Controlling Algorithms in Big Data Capitalism: Policy, Practice or Praxis?

In 1960 the German philosopher and sociologist Herbert Marcuse wrote a short essay titled De l'ontologie à la technologie. Les tendances de la société industrielle, which was translated into English in 1989. It first appeared in Critical Theory and Society: A Reader (edited by S.E. Bronner and D.M. Kellner) under the title From Ontology to Technology: Fundamental Tendencies of Industrial Society. In this short piece, Marcuse develops a brief critical theory of technology that would form the foundation of his famous book One-Dimensional Man. His theory is not dismissive of technology itself, only of its appearance of neutrality. For Marcuse, the promotion of neutrality is a political project of domination and control in the hands of powerful actors. In the essay, he writes:

“There is now one dimension of reality which is, in a strict sense of the word, a reality without substance, or rather, a reality in which substance is represented by its technical form which becomes its content, its essence. Every signification, every proposition is validated only within the framework of the behaviour of men and things – a one-dimensional context of efficient, theoretical, and practical operations.”

Fast-forward to our present time and the story sounds quite familiar. It is not our human essence, or substance, that matters in the digital economy. What matters are the data traces we leave in the digital realm. Massive amounts of data are validated and represented in the technical form of big data analytics. Behavioural choices are narrowed down into pre-calculated options that enable the highest profit margins: options that allow decisions to be made and products to be seamlessly advertised and purchased. Only the major companies that deploy algorithms retain some degree of control over them. At the same time, their inner workings, and the measures for establishing information control and extracting value, are protected by patents, copyrights and trade secrets. The use of algorithms keeps expanding into areas such as high-frequency stock market trading, self-driving cars, industrial production, ICTs and the internet of things.

This new techno-economic dominance is driven by major companies with very little oversight. The companies that deploy algorithms hope to reduce operational costs and increase profit rates. On the other hand, the side-effects of autonomous algorithms can harm humans and society in many, often unpredictable, ways: the distribution of fake news, faulty navigation algorithms in self-driving cars, technological unemployment, enormous power consumption, and digital waste. The introduction of machine learning algorithms and artificial intelligence promises a new, technologically mediated humanity. The question that is never asked is: do we really want that? And even if we do not, what can we actually do about it? These questions form the central contradiction of big data capitalism. They pose a new type of challenge for political processes, democracy, and human liberation.

There are three possible ways of establishing a certain level of democratic control and oversight: policies, improved practice, and praxis. The policy-based solution relies on a set of existing political subjects: nation-states and trans-national entities such as the European Union. In the last few years, the EU has opened multiple anti-trust investigations into Alphabet Inc. Three separate cases focus on Google's comparison shopping service, the pre-installation of Google's applications and services on Android OS, and the restriction of third-party websites from displaying search ads from Google's competitors. The ultimate goal of these legal struggles is to create more efficient market competition within the EU's Digital Single Market. Meanwhile, some authors in the United States have even suggested creating a new regulatory agency with the specific task of regulating algorithms. Such an institution would be comparable to the Food and Drug Administration (FDA) and would regulate algorithms based on the level of harm they create for humans and society.

The second solution is to focus on the knowledge and practice required to create algorithms in the first place. Such a solution is primarily an outcome of the rapid development of computer science and engineering. The aim is to improve scientific practice and its practical implementation through raised awareness of the potential harms of algorithms. Most recently, the Association for Computing Machinery (ACM) issued a statement on algorithmic transparency. In a nutshell, the statement calls for seven principles of algorithmic transparency and accountability: awareness; access and redress; accountability; explanation; data provenance; auditability; and validation and testing. While positive on the surface, these suggestions will not alleviate the problems caused by the commodification of knowledge. They will only reinforce the already existing practices of instrumental science.

Policy-related and practice-related options operate within the limited horizon of the existing institutions of modernity: states, markets and the specialisation of science. Algorithms, on the other hand, operate on a scale that is not bound by modern political subjects and specialisations. Trained to find the most efficient ways of performing given tasks, they often transcend the operational limits of these institutions. Algorithms operate on a scale that affects humanity as a whole. To date, the human race has not been able to tackle such challenges successfully. Climate change and the destruction of the natural environment are a constant reminder of that.

Marcuse posited that the domination imposed by technicity is twofold: control of nature and control of man. In the age of big data capitalism, the new source of domination is control of information. Yet the message is the same. Marcuse reminds us that there is much more to humanity than calculation and efficiency in the service of markets and profit. We must never stop thinking of alternatives. Once we stop conceiving of alternatives, we stop being human and surrender to technical control and dominance. The common goal is simple, almost banal, and yet extremely difficult to achieve: liberating man, nature and information from repressive technical domination. The means and ways of doing so are yet to be determined. The first requirement is awareness of the basic contradictions of big data capitalism: an understanding grounded in praxis.

[Image credit: The binary code. Image by The Waving Cat taken at Buenos Aires Museum of Modern Art.]
