While the belief that technology will play an increasingly central role in 21st-century life is commonly accepted, the exponential increase in technological dependence during COVID-19 has caught even technophiles off-guard. The fact that governments around the world are almost entirely reliant on smartphones to track COVID-19—a deadly human disease—epitomises technology's newfound role. Surprisingly, however, institutions that have historically embraced innovation, such as universities, corporations and governments, are now struggling to provide services and products in a world that is mediated by tech more than ever. For example, the security firm Tanium found that 96% of the IT professionals they surveyed were “caught off guard” by security challenges posed by entire companies working remotely.
If even the most technologically adept are struggling to live and work online, where does that leave people without smartphones or high-speed internet, or those who need frequent in-person contact, such as children with special needs and the elderly? And let’s not forget the 52% of Irish adults who lack basic digital skills. Will the needs of these citizens be simply ignored—cast as an inconvenient aside in an ever more techno-dependent society? The unfortunate reality is that even if the COVID-19 threat is neutralised, the technology-based inequalities that it has exposed presage a vastly more unequal future—that is, unless technology companies, governments, and consumers alike take steps to ensure that future innovations are human-centred.
Embrace ‘human’ complexity
The famous techno-historian Melvin Kranzberg once said, “technology is neither good nor bad, nor is it neutral.” While this statement may seem contradictory, one interpretation is that the moral quality of any technology cannot be known until that technology is applied. Just as a non-living virus needs a host to wreak havoc, technology only takes on meaning once introduced into the sentient and often unpredictable life-world. Thus, to rectify inequalities exacerbated by tech (or rather, its application), we need to better incorporate social sciences like psychology, philosophy, and sociology into tech design, implementation, and regulatory processes.
Much to the chagrin of technologists, social scientists do not produce clear-cut answers to the problems they study; they mostly devise theories. This is because the real world is messy, or perhaps more accurately, because humans are irrational. It is no wonder that the techno-determinist view that the world's most intractable problems can simply be solved with more or better technology is proven wrong time and time again. Facebook connecting 4 billion people has not succeeded in creating a global community; rather, it seems to be undermining democracy. The rollout of distance work and learning tools to stem the spread of COVID-19 may have slowed the disease, but it is contributing to feelings of isolation, depression, and possibly a rise in suicide. While tech fixes one problem, it often causes dozens more.
Put People First
How can we prevent future tech from having unintended, deleterious societal effects? The answer is simple: we should view humans as an end rather than a means of achieving technological progress. Tech should exist first for the benefit of humanity, not to generate profit or to prove that once-unimaginable innovations are now possible.
Below are three recommendations on how this more human-focused tech regime can be realised:
- Predicting and mitigating the unintended effects of technology requires that social scientists be more readily involved in the tech development process. Such cooperation could take the form of joint-dialogue sessions between large multinationals such as Intel and Apple and independent social science research institutions like Trinity College Dublin (and TASC, of course). These conversations would centre on possible human effects of developing technologies, the ‘what ifs’ that often feature too little in preliminary design conversations. To avoid these dialogues becoming little more than CSR marketing tools, the social scientists (and especially ethicists) should be key stakeholders in design decisions with real influence over outcomes.
- Similarly, governments—often too easily led by the ‘dangling carrot’ known as FDI— would also benefit from involving more humanists in their regulatory and approval processes. While a dedicated civil servant will act in the public’s best interest, experts who empirically study society will be able to outline how certain technologies might play out in the social sphere. In truth, there should be a revolving door between faculties of social science and government departments like the Data Protection Commission.
- Finally, companies, governments, and social scientists alike would benefit from elevating the average technology user within the development and regulatory processes. It makes no ethical sense to exclude the people affected by a technology from its design. One does not need a background in software engineering to know how a certain programme might be used nefariously. Users’ real-world insight would be an effective antidote to the techno-utopian (tech can solve anything) views held by many in Big Tech.
Given the word constraints of this blog post, the recommendations outlined above are quite general. Each could and should be expanded into a blog post or report of its own. The central theme, however, is that companies and governments must now take radical steps to make technology more human-centred. The inequalities exposed by recent technological shifts pale in comparison to those on the horizon. Companies would do well to consider that when no one trusts, much less can afford, their products, their dazzling innovations will be for nought.
Tyler contributes to projects on trade, multinational corporations, and inequality. Tyler is also broadly interested in the impact of communication technology on society, especially as it relates to democratic participation and freedom of expression. He holds an MA in Global Communication from The George Washington University in Washington, DC, and a BA in Journalism and International Studies from Elon University in North Carolina.