Friend and Foe: How Technology Impacts the Way We Generate and Consume Information

By Natalie Schnelle - 21 January 2021

This interview was conducted by the Global Governance Futures – Robert Bosch Foundation Multilateral Dialogues, which brings together young professionals to look ahead 15 years and recommend ways to address global challenges.

1. There is a great deal of media coverage on the rise of artificial intelligence (AI). What is the impact of AI on media companies?

The impact of AI on media companies is two-fold, as they are both users and producers of artificial intelligence.

AI technologies are at the core of how search engines and social media platforms reproduce the information created by traditional media companies. Since the majority of today’s news consumers access stories via social platforms, content is often curated using algorithmic selection rather than editorial choice. In practical terms, this means that those researching and writing about the news have no control over the machine learning code that determines a piece’s visibility on platforms’ websites.

At the same time, AI has become an increasingly popular tool in the journalistic profession. Major media companies employ a slew of machine learning engineers and data scientists to help journalists tie together different streams of data and make predictions about the future – be it about the next US President or the location of the next coronavirus outbreak. This means greater interdisciplinary cooperation, new sets of skills and more complex reviewing processes, since machine learning models need to be made transparent and comprehensible for both editors and readers.

2. While automation in manufacturing is widespread, automated journalism is still relatively rare. Do you think we will see more data-driven news generation in the coming years? What are the benefits and downsides of automated journalism?

Automation in the field of manufacturing is possible because making products with machinery can be split into so-called ‘well-defined tasks,’ whose input, output and context are unambiguous. When crafting a piece of journalism, however, the tasks involved are often harder to break down: they draw on implicit knowledge, connect different subject areas and impose structure on ambiguous contexts, all of which makes them less suitable for automation.

Nevertheless, while people will remain at the center of journalism, the repetitive, “well-defined” tasks of journalists will become increasingly automated, allowing for a stronger focus on the creative and more complex aspects of the job. In this vein, major publishers have already started employing automated systems for fact-checking, tracking breaking news and creating data visualization, among other tasks.

It is technically possible to generate entire news wires or market reports with data-driven machine-learning software, and its accelerated research pace is one of its major advantages: a machine can sift through far more data than a human. However, machine-learning (ML) software can only rely on the data it has been fed, and there are limitations to the kinds of data it can digest. Such data must be electronically accessible and provided in a structured, machine-readable format, for example. These prerequisites automatically exclude many alternative sources of information. What is more, the data used to train ML programs always pertain to the past. The type of information available to shape today’s ML-enabled news-generating programs therefore depends on past evaluations of what data should or should not be stored for later use, sometimes made by completely different stakeholders – a significant downside of automated journalism.

Given the growing demand for plain news bytes on the go, automated, data-driven journalism will likely increase. But it cannot replace the kind of investigative journalism, enriched by interviews and a writer’s personal experience, that so often leaves us touched and pensive – as long as demand for that kind of reporting remains.

3. The use of big data is becoming increasingly common in business, and many companies are aiming to be more data-driven. Do you think the ability to interpret and analyze data will become one of the essential skills for many professionals in the future? Is it important for people to become more data literate, and how can this be achieved?

Technological advances in collecting and storing data have led to new business models that require a range of new technical skills – from data storage and integration to data science and analytics. But these competences alone are not enough to make big data projects a success. Equally important is an understanding of the human condition, grounded in the social sciences, the humanities and history, among other fields, to give these projects purpose and direction. For value creation, therefore, the most essential skill will be the ability to combine these two skill sets and to collaborate with colleagues from different backgrounds.

Independent of that, we have an interest in becoming more data literate as citizens. For one, the most powerful machine learning programs are based on our personal data. If we are to maintain sovereignty over information about ourselves and possibly challenge how others use it, we need at the very least to be able to read and understand this data. In addition, many decisions about and around us are based on or influenced by big data, including which news article we will read next or how our newsfeed is curated. We are not at a dead end, though, where we are forced simply to react to the status quo.

The data that is gathered by social media platforms and the parameters used to analyze it are not set in stone, but are continuously adapted to shifting interests – ideally, in a way that best serves humanity. For this reason, we need a broad discourse and multiple channels of influence that enable citizens to actively shape outcomes rather than serve as mere objects and consumers of data. To meaningfully participate in such a debate, a sound foundation in data literacy is necessary. Building this foundation requires not only a stronger emphasis on data science and ethics in schools, vocational institutions and universities, but also novel lifelong learning concepts that create continuous education opportunities for every generation, which could be (co-)funded by the state.

4. What are your predictions concerning the future of the media and information landscape, and what do you see happening to traditional media?

The way people access media has fundamentally changed. While traditional media outlets continue to be the primary content providers, it is platform companies that channel this information to audiences. As a result, platform companies profit disproportionately from advertising revenue at the expense of traditional media, which consequently lack the resources to fund their journalism. Given this economic pressure, coupled with new opportunities for citizen journalism, I predict a further fragmentation of the media landscape.

Large and renowned publishing houses can build on legacy revenues to invest in state-of-the-art digital technologies, enabling them to adapt successfully to the current transformation. Smaller traditional publishers, however, often cannot afford costly technological capabilities and are thus more likely to fall behind. New online media players, on the other hand, who previously faced editorial or commercial gatekeeping mechanisms, are leapfrogging into the digital space to serve a growing number of niche audiences with bespoke content on specific focus areas or tailored to particular political views.

But it is important not to forget that governments and societies have the power to shape tomorrow’s media and information landscape. Regulatory measures, especially in the field of competition and algorithmic transparency, can provide a more even playing field among media stakeholders. Lobbying – in both directions – will likely increase in the policy arena, requiring a public discourse with expert input.

Natalie Schnelle is Senior Strategic Consultant with the Government Affairs Department of SAP, Europe’s largest software company, and a Global Governance Futures 2035 fellow. Views expressed in this interview are her own.

Photo by Markus Spiske from Pexels
