Algorithms in the Spotlight: Collaborative Investigations at Spiegel Online

Written by: Christina Elmer

The demand for transparency around algorithms is not new in Germany. As early as 2012, SPIEGEL ONLINE columnist Sascha Lobo called for the mechanics of the Google search algorithm to be disclosed[1], even if doing so would harm the company. The reason: Google can shape how we view the world, for example through its autocomplete function, as a prominent case in Germany illustrated. In that case, the wife of the former Federal President took legal action against Google because problematic terms were suggested by the autocomplete function when her name was searched for. Two years later, the German Minister of Justice repeated this appeal, and in 2016 the Federal Chancellor extended it once more: algorithms should be more transparent, Angela Merkel demanded.[2]

In the past few years, the topic of algorithmic accountability has been under constant discussion at SPIEGEL ONLINE – but initially only as an occasion for reporting, not as a research or analysis project of our own. There may be two primary reasons why German media began experimenting in this area later than their colleagues in the United States: on the one hand, journalists in Germany do not have equally strong freedom of information rights and instruments at their disposal; on the other hand, data journalism does not have as long a tradition here. SPIEGEL ONLINE has only had its own data journalism department since 2016 and is slowly but steadily expanding this area. It is of course also possible for newsrooms with smaller resources to be active in this field – for example through collaborations with organisations or freelancers. In our case, too, all previous projects in the area of algorithmic accountability reporting have come about in this way. This chapter will therefore concentrate on collaborations and illustrate the lessons we have learned from them.

Google, Facebook, Schufa – three projects at a glance

Our editorial team primarily relies on cooperation when it comes to investigating algorithms. In the run-up to the 2017 federal elections, we joined forces with the NGO AlgorithmWatch to gain insights into the personalization of Google search results.[3] Users were asked to install a plugin that regularly performed predefined searches on their computers. A total of around 4,400 participants donated almost six million search results, providing the data for an analysis that would challenge the filter bubble thesis – at least with regard to Google and the area investigated.
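
The published analysis was more elaborate, but the core idea can be illustrated with a minimal sketch: if results were strongly personalized, the result lists of different users for the same query should diverge noticeably. The following Python sketch measures this as average pairwise overlap; the function names and data are hypothetical, not the project's actual pipeline.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two sets of result URLs (1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def mean_pairwise_overlap(result_lists):
    """Average Jaccard similarity across all pairs of participants
    who ran the same query at roughly the same time."""
    pairs = list(combinations(result_lists, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical donations: top results of three participants for one query.
donations = [
    ["spiegel.de/a", "zeit.de/b", "faz.net/c", "taz.de/d"],
    ["spiegel.de/a", "zeit.de/b", "faz.net/c", "welt.de/e"],
    ["spiegel.de/a", "zeit.de/b", "faz.net/c", "taz.de/d"],
]
# A value near 1.0 means the participants saw nearly identical results,
# i.e. little personalization for this query.
print(f"mean pairwise overlap: {mean_pairwise_overlap(donations):.2f}")
```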

For this project, our collaborators from AlgorithmWatch approached SPIEGEL ONLINE, as they were looking for a media partner with a large reach for crowdsourcing the required data. While the content of the reporting was entirely the responsibility of our department covering internet and technology topics, the data journalism department supported the planning and methodological evaluation of the operation. Furthermore, the support of our legal department was essential in order to implement the project in a legally watertight way. For example, data protection issues had to be clarified in the reporting and made fully comprehensible to all participants in the project.

Almost at the same time, SPIEGEL ONLINE cooperated with ProPublica to deploy their AdCollector in Germany in the months before the elections.[4] The project aimed to make the Facebook ad targeting of the German parties transparent. To this end, a plugin collected the political ads a user saw in her stream and revealed those ads that were not displayed to her. For this project, SPIEGEL ONLINE joined forces with other German media such as Süddeutsche Zeitung and Tagesschau – an unusual constellation of actors who are usually in competition with each other, but one that seemed necessary in the public interest in order to reach as many people as possible. The results could also be published in journalistic stories, but the clear focus was on transparency. Within two weeks, around 600 political advertisements had already been collected and made available to the public.
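
Conceptually, the transparency mechanism boils down to a set difference: pool the political ads collected from all participants, then show each user the ads that never appeared in her own stream. The following is a simplified sketch with invented data structures, not ProPublica's actual code.

```python
# Hypothetical records: which political ads each participant's plugin saw.
ads_seen_by_user = {
    "user_a": {"ad_101", "ad_102"},
    "user_b": {"ad_102", "ad_103"},
    "user_c": {"ad_104"},
}

# Pool all political ads observed across the whole crowd.
all_ads = set().union(*ads_seen_by_user.values())

def hidden_ads(user_id):
    """Ads shown to other participants but never to this user."""
    return all_ads - ads_seen_by_user[user_id]

print(hidden_ads("user_a"))  # {'ad_103', 'ad_104'}
```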

ProPublica’s Julia Angwin and Jeff Larson brought the idea of a collaboration to the annual conference of the not-for-profit association netzwerk recherche in Hamburg, where they held a session on algorithmic accountability reporting. From the very beginning, the idea was developed with technical and methodological experts from different departments in the SPIEGEL ONLINE newsroom. The exchange with our previous cooperation partner, the NGO AlgorithmWatch, was also very valuable for shedding light on the legal background and incorporating it into our research. After the conference, we developed the idea further in regular telephone conferences. Later on, our partners from other media outlets were also involved.

In 2018, SPIEGEL ONLINE is supporting a major project aimed at investigating an extremely powerful algorithm in Germany – the Schufa credit report, which is used to assess the creditworthiness of private individuals. The report is meant to indicate how likely it is that someone will pay their bills, pay their rent or service a loan. It can therefore have far-reaching implications for a person's private life, and a negative effect on society as a whole. For example, it is conceivable that the score increases social discrimination or treats individuals unequally depending on how much data is available on them. Incorrect data from integrated sources, or mix-ups, could also have serious consequences for individuals.

However, the underlying scoring is not transparent: it is not known which data are taken into account, or how they are weighted. And those affected often notice nothing of the process. This makes Schufa a controversial institution in Germany – and, in our opinion, projects like OpenSCHUFA absolutely vital for the public debate on algorithmic accountability.[5]

The project is mainly driven by the NGOs Open Knowledge Foundation (OKFN) and AlgorithmWatch; SPIEGEL ONLINE is one of two associated cooperation partners, together with Bayerischer Rundfunk (Bavarian Broadcasting). The idea for the project came up more or less simultaneously among several of the parties involved. After some successful projects with AlgorithmWatch and OKFN, as well as with the data journalism team of Bayerischer Rundfunk, SPIEGEL ONLINE was included in the initial discussions.

The constellation posed special challenges. For the two media teams, it was important to work separately from the NGOs in order to ensure their independence, particularly from the crowdfunding process. Therefore, although there were of course discussions between the actors involved, neither an official partnership nor a joint data evaluation was possible. This example emphasizes how important it is for journalists to reflect on their autonomy, especially in such high-publicity topics.

Making OpenSCHUFA widely known was one of the central success factors of this project. The first step was to finance, through crowdfunding, the infrastructure needed to gather the data, which will then be collected later in 2018 via crowdsourcing. The results are to be jointly evaluated by the partners in the course of the year, in anonymized form. The central question behind it all: Does the Schufa algorithm discriminate against certain population groups, and does it increase inequality in society?
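
The evaluation was still pending at the time of writing, but one way to approach such a question is to compare score distributions across groups in the anonymized reports. A minimal, purely descriptive sketch, assuming hypothetical field names and values:

```python
from statistics import mean

# Hypothetical anonymized donations: a score plus one self-reported
# attribute from an accompanying questionnaire.
reports = [
    {"score": 97.2, "group": "A"},
    {"score": 95.1, "group": "A"},
    {"score": 92.3, "group": "B"},
    {"score": 90.8, "group": "B"},
]

def mean_score_by_group(records):
    """Average score per group as a first descriptive comparison."""
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r["score"])
    return {g: mean(scores) for g, scores in groups.items()}

# Systematic gaps between groups would only be a starting point for
# reporting; they would still have to be checked against confounders
# and the self-selection bias of a crowdsourced sample.
print(mean_score_by_group(reports))
```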

As of March 2018, the campaign was largely successful. The financing of the software was secured through crowdfunding.[6] In addition, more than 16,000 people had already requested their personal data from Schufa. These reports will later form the basis for the analysis of the algorithm and its effects.

Resonance and success indicators

In terms of results, both the Facebook and the Google projects were rather unspectacular and did not show the assumed effects. Political parties apparently made little use of Facebook’s targeting options, and the much-cited Google filter bubble proved unmeasurable in the German crowdsourced data. In any case, it was more relevant to us to increase our readers’ literacy with algorithms and to illustrate how they work and what risks they pose.

The assumption that we have succeeded in making the topic more widely known is supported by the reach of selected articles. The introductory article at the start of the Schufa project reached a large audience of around 335,000 readers, the majority of whom, however, came to the article via internal channels such as our homepage. This was different for the field report featuring our author's personal story, which was read by around 220,000 people. A fifth of them reached the article via social media channels, which is well above average. So it has apparently been possible to reach new target groups with this topic. The reading time, at an average of almost three minutes, was also clearly above normal. In addition, the topic was widely discussed in public and in many media, as well as at several conferences.

What about the impact on everyday reality? As a first step, it was important for us to anchor the topic in the public consciousness. So far, we have not seen political actors deal fundamentally differently with algorithms that affect the public. We hope, however, that such projects will ultimately increase the pressure for legislation and standards on transparency in this area.

In any case, more effort is needed in this area. With the projects discussed here we were able to work on specific aspects of relevant algorithms, but it would of course be advisable to devote far more resources to this topic. It is great news that the pioneering work of Julia Angwin and Jeff Larson will be continued through a new media organisation focusing on the social impact of technology, which can devote more attention to this topic. Further experimentation is very much needed, partly because there is still some scope for action in the regulation of algorithms. The field of algorithmic accountability reporting has only developed in recent years, and it will have to grow rapidly to meet the challenges of an increasingly digitized world.

Organising collaborative investigations

Working together in diverse constellations not only makes it easier to share competencies and resources, it also allows a clear definition of roles. As a media partner, SPIEGEL ONLINE can act as a more neutral commentator without being too deeply involved in the project itself. The editors remain independent and thus justify the trust of their readers. Of course, they also apply their quality criteria to the reports within such a project – for example, by always giving any subject of their reporting the opportunity to comment on accusations. Compared to the NGOs involved, these mechanisms may slow media partners down more than they are comfortable with, but at the same time they ensure that readers are fully informed by their reports – and that these will enrich the public debate in the long term.

Addressing these roles in advance has proven to be an important success criterion for collaborations in the field of algorithmic accountability. A common timeline should also be developed at an early stage, and language rules for presenting the project on different channels should be defined. After all, a clear division of roles can only work if it is communicated consistently. This includes, for example, clear terminology for the roles of the different partners in the project and the coordination of disclaimers in the event of conflicts of interest.

Behind the scenes, project management methods should be used prudently, project goals should be clearly defined, and available resources discussed. Coordinators should help with the overall communication and thus give the participating editors the space they need for their investigations. To keep everyone up to date, information channels should be kept as simple as possible, especially around the launch of major project stages.

In terms of editorial planning, the three subject areas were challenging. Although their relevance and news value were never questioned in general, special stories were needed to reach a broad readership. Often, these stories focused on the personal effects of the algorithms examined. For example, incorrectly assigned Schufa data made it difficult for a colleague from the SPIEGEL ONLINE editorial team to conclude an internet contract. His first-hand report impressively showed what effects the Schufa algorithm can have on a personal level and thus connected with the reality of our audience's lives.[7]

Thus, we tailored the scope of our reporting to the interests of our audience as far as possible. Of course, the data journalists involved are also very interested in the inner workings of the algorithms under investigation – an interest that is extremely useful for research purposes. However, such details can only become the subject of reporting if they have a relevant influence on the results of the algorithms – and only if they are narrated in a way that is accessible to our readers.

Internally, support for all three projects was very high in the editorial office. Nevertheless, it was not easy to free up resources from day-to-day reporting in the routine of a news-driven editorial team – especially when the results of our investigations were not always spectacular.

Nevertheless, the topic of algorithmic accountability reporting is very important to us, because in Europe we still have the opportunity to discuss the issue as a society and to shape how we want to deal with it. It is part of our function as journalists to provide the knowledge citizens need to understand and shape this scope for action. And as far as possible, we also take on the role of a watchdog by trying to make algorithms and their effects transparent, to identify risks and to confront those responsible. To achieve this, we have to establish what might otherwise be considered unusual collaborations with competitors and actors from other sectors.

What we have learned from these projects

  1. Collaborate where possible. Only in diverse teams can we design a good setup for investigating such topics and pool our strengths – an important argument given both the scarce resources and the legal restrictions that most journalists have to cope with. But since these projects bring together actors from different systems, it is crucial to discuss the underlying relevance criteria, requirements and capabilities beforehand.
  2. Define your goals comprehensively. Raising awareness of the operating principles of algorithms can be a first strong goal of such projects. Of course, projects should also try to achieve as much transparency as possible. Ideally, we can check whether algorithms have a discriminatory effect – but project partners should keep in mind that this is a more advanced goal that requires extensive datasets.
  3. Implement such projects with caution. Depending on the workload and day-to-day pressure of the journalists involved, you might even need a project manager. Be aware that the project timeline may conflict with the requirements of current reporting from time to time. Take this into account when communicating with other partners and, if possible, prepare alternatives for such cases.
  4. Dedicate yourself to research design. To set up a meaningful design that produces useful data, you might need specialized partners. Close alliances with scientists from computer science, mathematics and other related disciplines are particularly helpful for investigating the more technical aspects of algorithms. It may also be useful to cooperate with social and cultural researchers to gain a deeper understanding of the classifications and norms implemented in them.
  5. Protect the data of your users very carefully. If algorithms are to be investigated, you may use data donations from users in order to consider as many different cases as possible. Especially in such crowdsourcing projects, legal support is indispensable to ensure data protection and to comply with national laws and regulations. If your company has a data protection officer, involve them in the project early on. One common minimal safeguard is sketched after this list.
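
On point 5: a common minimal safeguard before any joint analysis is to strip direct identifiers from donated records and replace them with salted hashes, so that records from the same donor can still be linked without exposing who donated them. A sketch under those assumptions; the field names are invented.

```python
import hashlib
import secrets

# A per-project secret salt, kept out of the published data and code.
SALT = secrets.token_hex(16)

def pseudonymize(record):
    """Drop direct identifiers; keep a salted hash so that records
    from the same donor can still be linked during the analysis."""
    token = hashlib.sha256((SALT + record["email"]).encode()).hexdigest()
    return {
        "participant": token[:16],  # stable pseudonym within this project
        "score": record["score"],   # the analytic payload stays intact
        # name, email and address are deliberately not carried over
    }

donation = {"name": "Jane Doe", "email": "jane@example.org",
            "address": "Example St. 1", "score": 95.1}
print(pseudonymize(donation))
```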

Works Cited

Sascha Lobo, ‘Was Bettina Wulff mit Mettigeln verbindet’, SPIEGEL ONLINE, September 2012.

‘Sorge vor Kartell: Maas hätte gerne, dass Google geheime Suchformel offenlegt’, SPIEGEL ONLINE, September 2014.

Fabian Reinbold, ‘Warum Merkel an die Algorithmen will’, SPIEGEL ONLINE, October 2016.

AlgorithmWatch, ‘Datenspende BTW17’, September 2017.

Julia Angwin and Jeff Larson, ‘Help Us Monitor Political Ads Online’, ProPublica, September 2017.

OpenSCHUFA project website, www.openschufa.de

OpenSCHUFA crowdfunding campaign, https://www.startnext.com/openschufa

Philipp Seibt, ‘Wie ich bei der Schufa zum "deutlich erhöhten Risiko" wurde’, SPIEGEL ONLINE, March 2018.
