

Let’s write the BOOK OF HUMANKIND

The digital possibilities of an egalitarian conception of humanity in the disjunctive discourse of philosophy and art

Information on the authors
MMMag. Dr. Stephan Klinger
Economist, philosopher and lawyer
stephan.klinger@wu.ac.at
DI Milan Mijalkovic
Artist, architect, and author
email@milanmijalkovic.com

Both Michel Foucault and the folk writers of Rumpelstiltskin recognised that knowing the name of a subject was the key to gaining power over them. In the vortex of the ongoing technological development of digital worlds, (Central) European thinking has focused a good deal of attention on the potential negative consequences of disruptive developments. The article below seeks to take a naïve1, positive look at the possible culmination of digital opportunities towards a new, egalitarian conception of humanity: a book of humankind, composed of the data of all measurable actions of all individuals, which, through the use of correlations, will generate recommended actions for individuals and their futures.

The measuring and marking of people in the USA and India

From changes in lighting, seating arrangements, and workplace design through to the structured collection of employee data, down to the birthplaces of workers’ parents: Fritz Roethlisberger and William Dickson measured, categorised, and photographed the working conditions of American workers at Western Electric’s Hawthorne Works in the late 1920s and early 1930s in the smallest detail possible with the means of data collection at the time. Posterity has preserved an effect of the same name: the “Hawthorne effect”. This is popularly understood as the discovery that human productivity is influenced not only by objective working conditions, but also by social factors. In all fairness, however, the idea of treating human resources as humans (and not as machines) already existed before these experiments. Reading through the records of the Hawthorne experiments, one cannot help getting the impression that Roethlisberger and Dickson desperately scoured their data for a causal explanation of the significance of “human relations” (other influencing factors, such as wage incentives2, received less attention). In any case, the “Human Relations Movement” triggered by these experiments gave rise to the birth of personnel management. A new conception of employees as humans had been established and would subsequently become the main business of thousands of emerging Human Resources departments.

“We had to establish uniqueness across a billion of people,” said Nandan Nilekani (former chairman of the Indian Aadhaar Authority, which has issued digital IDs to 90% of the population in 10 years).

Name and address of parents, all relevant biographical data, iris scans of both eyes, measurement of facial morphology, as well as prints of all 10 fingers—all this data has been collected in India since 2009. Currently, more than 90% of the Indian population is enrolled and has received a 12-digit Aadhaar number (Aadhaar: Hindi for “base” or “foundation”). This data (which also includes scans of all essential documents such as birth certificates, etc.) is represented by means of a QR code on an identity card. This biometric identity system (the implementation of which was supported by the German TÜV) is currently regarded as the most advanced in the world3. Initially created as a voluntary means of legally proving one’s identity to help the poorest segments of the population gain access to financial services, the Aadhaar number has increasingly become a prerequisite for receiving public or banking services4.

The Aadhaar authority recorded the greatest increase in Aadhaar numbers in the year in which the Indian central bank declared that rupee banknotes would be worthless within the course of a year as part of a currency reform. The exchange of the old currency notes could only be completed within a reasonable period of time if one had a bank account—and an Aadhaar number was required to open one. Therefore, it seems that India has managed to collect almost all its citizens’ identity data within a period of 10 years. In contrast to China’s social credit system, these data links are stored in an accessible database (similar to the Austrian civil register) and can be accessed for a fee. On this basis, Indian companies will soon be able to automate many identification processes. Where European banks use a variety of forms and enquiries to legally establish their customers’ identities, the entrance camera of an Indian bank branch can identify customers by means of their facial morphology—and all their data will automatically pop up on their advisor’s screen before the customer even sits down in front of them. The former chairman of the UIDAI Aadhaar Authority, Nandan Nilekani, explained in a CNN interview the need for biometric data depth by stating that “We had to establish uniqueness across a billion of people”.

Uniqueness, or validity, is one of the cornerstones of empirical research. Using manageable amounts of data, hypotheses are tested against statistical quality criteria and then rejected or confirmed: in this way, questions can be posed to reality and tentatively answered in small segments, using samples within a framework of defined constraints and exceptions. Ever since the Enlightenment gave us the courage to use our own minds, we have been searching for the causes or reasons of things.

Algorithms – Answers without knowing the question

In 2013, the American think tank “Council on Foreign Relations” devoted a special issue of its bimonthly journal Foreign Affairs to the topic of big data. In their article, Kenneth Cukier and Viktor Mayer-Schoenberger5 showed the opportunities presented by the use of big data: instead of a hypothesis-based questioning of manageable and delimited amounts of data, they demonstrated the possibilities of seeking correlations within huge, unstructured, and messy data clouds. They trace, for example, the advances IBM has made in translation software back to a novel approach: instead of training a translation programme word by word, IBM digitised the transcripts of Canadian parliamentary sessions (which were available in English as well as French) and let the programme search for the best translation options. Google pursued a similar approach a few years later: algorithms searched for reliable correlations between different language versions of documents from corporate websites and EU institutions and the contents of Google Books. The complex intellectual process of language acquisition thus became a mathematical/statistical task—the result of which is that Internet translation programmes have become increasingly precise and accurate.
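The correlation-based approach to translation can be caricatured in a few lines. The sketch below is a toy illustration, not IBM’s or Google’s actual system: the three-sentence parallel corpus and the frequency-normalised scoring are invented for the example. The point it demonstrates is the one made above: word-level “translations” fall out of co-occurrence counts alone, with no grammar rules supplied.

```python
from collections import Counter, defaultdict

# Toy parallel corpus (hypothetical English/French sentence pairs).
parallel = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the blue car", "la voiture bleue"),
]

# Count how often each English word co-occurs with each French word
# across aligned sentence pairs; no linguistic knowledge is encoded.
cooc = defaultdict(Counter)
fr_freq = Counter()
for en, fr in parallel:
    fr_freq.update(fr.split())
    for e in en.split():
        for f in fr.split():
            cooc[e][f] += 1

def best_translation(word):
    # Score candidates by co-occurrence relative to their overall
    # frequency, so common function words like "la" do not dominate.
    scores = {f: n / fr_freq[f] for f, n in cooc[word].items()}
    return max(scores, key=scores.get)
```

With enough aligned text, the strongest co-occurrence correlations converge on usable translations; scaling this idea up is essentially what the statistical approach described above amounts to.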

Cukier and Mayer-Schoenberger’s second example concerns the former mayor of New York City, Michael Bloomberg (who owes his fortune to data analysis), and his efforts to make public administration more efficient: for the approx. 900,000 buildings in NYC, there were only 200 building inspectors to conduct on-site inspections of the buildings’ structural and fire safety. In 2012, Bloomberg commissioned a small team of statisticians to carry out a big-data analysis intended to serve as the basis for a new inspection plan. For its analysis, the team could access all the data of the various municipal authorities (tax department, police, fire department, clerk’s office, seismic institute, etc.). They simply searched for correlations within the data clouds: What are the most common occurrences before a building collapses or goes up in flames? Besides expected positive correlations, such as building type and year of construction, the analysis found that the strongest negative correlation was whether a building had a permit for exterior brickwork: such buildings were the least likely to have had previous fires. None of the characteristics recorded in the analysis actually caused the buildings to collapse or catch fire—they simply indicated an increased risk of collapse or fire. This led, however, to an impressive result: after the inspection plan was altered on the basis of the indicators identified in the analysis, the percentage of inspections that resulted in action rose from 13% to 70%.
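The logic of the NYC team can be sketched with a plain Pearson correlation over building records. The data below is invented for illustration (eight hypothetical buildings, two indicators), not the statisticians’ actual dataset or model, but the point survives the simplification: neither indicator causes a fire, yet both can rank buildings for inspection.

```python
import statistics

# Hypothetical records: one row per building with two indicators
# (building age in years, exterior-brickwork permit yes/no) and
# whether a fire occurred (1) or not (0).
buildings = [
    (90, 0, 1), (80, 0, 1), (85, 1, 0), (40, 1, 0),
    (70, 0, 1), (30, 1, 0), (95, 0, 1), (50, 1, 0),
]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

fires = [b[2] for b in buildings]
age_corr = pearson([b[0] for b in buildings], fires)      # positive
permit_corr = pearson([b[1] for b in buildings], fires)   # negative
```

Sorting the building stock by such correlation-derived risk scores, rather than by a causal theory of fires, is all the altered inspection plan required.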

What all these examples have in common is that they did not look for causes, but for correlations in large amounts of unstructured data. The search through the data cloud did not begin with questions or hypotheses; instead, the strong correlations within the data itself were question and answer rolled into one. Today, we see these kinds of correlations at work when, for example, we type the first few letters of a search term into the Google search box and receive increasingly accurate suggestions for search queries. On the basis of previously searched items, the time of day, and many other datapoints, the algorithm calculates our desired search from the first few letters. How could this principle be extrapolated to other areas of life?
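The autosuggestion principle reduces to a few lines. A minimal sketch, assuming a hypothetical query log (Google’s real ranking also weighs time of day, location, and much more, as noted above): past queries sharing the typed prefix are ranked by how often they were entered, and the strongest match wins.

```python
from collections import Counter

# Hypothetical log of past search queries (the "data cloud").
query_log = [
    "weather vienna", "weather vienna", "weather berlin",
    "web design", "weather vienna", "wedding venues",
]

counts = Counter(query_log)

def suggest(prefix, k=3):
    # Rank past queries that share the prefix by frequency:
    # the most strongly correlated completions come first.
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda t: -t[1])][:k]
```

Typing "wea" surfaces "weather vienna" first simply because it was searched most often; no model of the user’s intention is needed.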

Predicting actions using the “Book of Humankind”

In such a Book of Humankind, we could read the similarities and consistencies of human actions.
We would be able to see that our actions are more effective if we follow the recommendations of the correlations.

Taking this line of thought further, the strongest (positive as well as negative) correlations of algorithmically combed data clouds could give us specific instructions for action (comparable to Google’s autosuggestion function in the search box). The correlations could, for example, indicate the best career opportunities for children, or potential romantic partners. It is interesting to note that dating agencies such as ElitePartner or Parship have found through long-term studies of their algorithms6 that similarities in status, attitudes/values, and social environment correlate very strongly with long-term partnerships (the parental algorithm in traditional societies probably operated according to a similar logic).
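Similarity-based matching of this kind can be sketched with cosine similarity over attribute vectors. The profiles, attributes, and scales below are invented for illustration; the cited agencies do not publish their algorithms, so this is only a guess at the general technique, not their method.

```python
import math

# Hypothetical profiles: (status, attitudes/values, social milieu),
# each attribute scaled to 0-1. The studies cited above suggest
# similarity on such dimensions correlates with lasting partnerships.
profiles = {
    "A": [0.9, 0.8, 0.7],
    "B": [0.85, 0.75, 0.7],
    "C": [0.2, 0.3, 0.9],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means identically oriented attribute vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))  # hypot: Python 3.8+

def best_match(name):
    # Recommend the most similar other profile.
    others = [p for p in profiles if p != name]
    return max(others, key=lambda p: cosine(profiles[name], profiles[p]))
```

The "parental algorithm" of traditional societies arguably computed something similar, just over a smaller data cloud of family, class, and neighbourhood.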

Greater predictability would probably require tremendous amounts of stored data and corresponding computing power on the part of algorithms. However, we can already imagine that a company like Apple, for instance, could store all the data of the users of its devices worldwide for a period of three years: all emails, text messages, social media postings, telephone calls, and even GPS position data, which, as we know, can provide sufficient elevation data to determine what floor a user is on in a high-rise building. A database such as this would probably be sufficient for the first step in creating the first edition of the Book of Humankind.

An egalitarian conception of humanity like that of the Book of Humankind connects the individual with all individuals.

In his major work Tristes Tropiques, the anthropologist Claude Lévi-Strauss documented his research expeditions to various Indian tribes in the Amazon. At the end, he contemplates whether their primitive state is more desirable than the western civilised world and he ultimately agrees with Jean-Jacques Rousseau, who believed “that it would be better for the happiness of mankind to keep ‘a just mean between the indolence of the primitive state and the petulant activity of our [western, enlightened] egoism’”. To accomplish this, Lévi-Strauss suggests creating a universal matrix which should include all the behaviour patterns of all peoples of the earth, once all of them have been researched. According to Lévi-Strauss, this Book of Peoples would enable us to comprehend the natural behaviour patterns of civilised people and thus understand humanity as such: “As he moves about within his framework, man takes with him all the positions he has already occupied, and all those he will occupy”.

With the technological possibilities presented above, it would now actually be conceivable to create a Book of Humankind which depicts all the actions of every single individual. An algorithm would position the individual’s specific actions as a bundle of correlations in the Book of Humankind—there we would be able to “read” the future actions of individuals in the form of correlation strengths, in a similar manner to the way in which Google suggests the strongest correlations as search terms after we have entered just three letters.

The Book of Humankind will be written by the actions of all individuals and will thus, in turn, affect those individuals by way of correlations. An egalitarian conception of humanity of this kind connects the individual with all individuals. When we recognise that following the recommendations of the correlations makes our actions more effective than the search for causes ever could, this will have a fundamental impact on our idea of the nature of humanity as such.

The architect and artist has this to say:

“[…] it is the right of authority to be ‘loved by the Gods’—the choice of the god of chance, the drawing of lots, i.e. the democratic procedure by which a people of equals decides the distribution of places,” said Jacques Rancière in Hatred of Democracy.

And I, Milan Mijalkovic from Macedonia, reply:

Only when the multitude, the masses, the vast majority, also in terms of vast amounts of data, have sovereignty over chance, over randomness, only then can there be proper governance, only then can there (finally) be fair rule. Randomness will finally be conquered. When Tyche, Fortuna, and the exalted gods of randomness have been foiled, then responsibility will be gone forever, and guilt will finally be gone.

In truth, we have endeavoured to avoid this: through our ancestors, through the observation of birds, stones, or stars. We have shifted responsibility to clouds, bones, oracles, kings, stock exchanges, polls, markets, or God himself. The people, as a whole, have tried it too and called it democracy, but guilt was constant and always there because randomness wanted it that way, since tyranny and randomness were one and the same. Now the decisions will come on their own because the people, because humankind is finally choosing the right system, the egalitarian system. But this system is not democracy; it’s statistics. If statistics makes it possible for us to make decisions before the democratic choice itself, when the right decision, the real solution is in our pockets, at our fingertips, then the decision will be universal. The decision will be 100% correct. And only then will the sick, the lazy, the ineffective, the unproductive, the anxious, the lost, the sinful, the indebted, the murderous, the secretive, the dishonest, and the devil himself be gone. There will no longer be the sick or the blind because statistics is the computer. It is a machine.

Flavius Philostratos, the sophist, knew:

For the gods perceive what lies in the future,
and men what is going on before them,
and the wise what is approaching.

It is not gods or men that foresee the approaching, but the machines. The wise ones are the computers, the statistics, the correlations, the data, and the archives. We now need to truly learn to love computers, learn how to sacrifice for them. Sacrifice our data! And then the gods will love us, and only then will there finally be peace.

The data belongs to everyone!
Data for peace.


  1. Immanuel Kant defined naivety in his Critique of Judgment as the “eruption of the sincerity that was originally natural to humanity and which is opposed to the art of dissimulation that has become our second nature” – Kant, Immanuel (1987): Critique of Judgment (Werner S. Pluhar, Trans.). Cambridge: Hackett Publishing Company, p. 336.
  2. Carey, Alex (1967): The Hawthorne Studies – A Radical Criticism. Cited in Kieser/Ebers (2006): Organisationstheorien.
  3. Rao, Ursula; Nair, Vijayanka (2019): Aadhaar – Governing with Biometrics; South Asia: Journal of South Asian Studies, Vol. 42, No. 3, pp. 469–481.
  4. Nilekani, Nandan (2018): Data to the People – India’s Inclusive Internet; Foreign Affairs, Vol. 97, No. 5, September/October 2018.
  5. Cukier, Kenneth; Mayer-Schoenberger, Viktor (2013): The Rise of Big Data – How It’s Changing the Way We Think About the World; Foreign Affairs, Vol. 92, No. 3, May/June 2013.
  6. Wegener, Jochen (2016): “Die Ressource gebildeter Mann wird knapp” [“The resource of educated men is becoming scarce”] – Interview with Arne Kahlke, former CEO of Parship and ElitePartner. In: Die ZEIT, 28.04.2016.


Published in:
Austrian Management Review No. 10

Presented at the

Pegaso International Conference / Malta 2020

Topic: Information Technology

Readings from the Book of Mankind 











Copyright: Milan Mijalkovic, 2020