Concerns about human agency, evolution and survival
A clear majority of the responses from these experts contained material outlining certain challenges, fears or concerns about the AI-infused future. The five most-often mentioned concerns were:
1) the use of AI reduces individuals’ control over their lives;
2) surveillance and data systems designed primarily for efficiency, profit and control are inherently dangerous;
3) displacement of human jobs by AI will widen economic and digital divides, possibly leading to social upheaval;
4) individuals’ cognitive, social and survival skills will be diminished as they become dependent on AI; and
5) citizens will face increased vulnerabilities, such as exposure to cybercrime and cyberwarfare that spin out of control and the possibility that essential organizations are endangered by weaponized information.
A few also worried about the wholesale destruction of humanity. The sections of this chapter will cover experts’ answers tied to these themes.
The use of AI reduces individuals’ control over their lives
Autonomous systems can reduce or eliminate the need for human involvement in some tasks. Today’s ever-advancing artificial narrow intelligence (ANI) tools – for instance, search engines and digital “agents” such as Siri, Alexa and Cortana – are not close to reaching the goal of human-like artificial general intelligence (AGI).
They are, however, continually becoming more powerful thanks to developments in machine learning and natural language processing and advances in materials science, networking, energy-storage and hardware capabilities.
ANI is machine intelligence that equals or exceeds people’s abilities or efficiency at a specific task. For years, code-based tools in robots and other systems have performed repetitive tasks like factory-floor assembly activities.
Today, these tools are quickly evolving to master human traits such as reason, logic, learning, task-performance and creativity. Today’s smart, networked, software-equipped devices, cars, digital assistants and platforms, such as Google search and Facebook social mapping, accomplish extremely complex tasks.
The systems underpinning today’s global financial markets, businesses, militaries, police forces, and medical, energy and industrial operations are all dependent upon networked AI of one type or another.
What is the future of humans in an age of accelerating technological change?
Many experts in this canvassing said that as AI advances, human autonomy and agency are at risk. They note that decision-making on key aspects of life is being ceded to code-driven tools.
Individuals who function in this digital world sacrifice, to varying degrees, their independence, right to privacy and power over choice. Many of the experts who worry about this say humans accede to this in order to stay competitive, to participate socially and professionally in the world, to be entertained and to get things done.
They say people hand over some control of their lives because of the perceived advantages they gain via digital tools – efficiency, convenience and superior pattern recognition, data storage, and search-and-find capabilities. Here is a selection of responses from these experts that touch on this:
An anonymous respondent summed up the concerns of many, writing, “The most-feared reversal in human fortune of the AI age is loss of agency. The trade-off for the near-instant, low-friction convenience of digital life is the loss of context about and control over its processes.
People’s blind dependence on digital tools is deepening as automated systems become more complex and ownership of those systems is by the elite.”
Baratunde Thurston, futurist, former director of digital at The Onion and co-founder of comedy/technology start-up Cultivated Wit, said, “For the record, this is not the future I want, but it is what I expect given existing default settings in our economic and sociopolitical system preferences. …
The problems to which we are applying machine learning and AI are generally not ones that will lead to a ‘better’ life for most people. That’s why I say in 2030, most people won’t be better off due to AI.
We won’t be more autonomous; we will be more automated as we follow the metaphorical GPS line through daily interactions. We won’t choose our breakfast or our morning workouts or our route to work.
An algorithm will make these choices for us in a way that maximizes efficiency (narrowly defined) and probably also maximizes the profitability of the service provider.
By 2030, we may cram more activities and interactions into our days, but I don’t think that will make our lives ‘better.’ A better life, by my definition, is one in which we feel more valued and happy.
Given that the biggest investments in AI are on behalf of marketing efforts designed to deplete our attention and bank balances, I can only imagine this leading to days that are more filled but lives that are less fulfilled. To create a different future, I believe we must unleash these technologies toward goals beyond profit maximization.
Imagine a mapping app that plotted your work commute through the most beautiful route, not simply the fastest. Imagine a communications app that facilitated deeper connections with people you deemed most important. These technologies must be more people-centric.
We need to ask that they ask us, ‘What is important to you? How would you like to spend your time?’ But that’s not the system we’re building. All those decisions have been hoarded by the unimaginative pursuit of profit.”
Thad Hall, a researcher and coauthor of “Politics for a Connected American Public,” added: “AI is likely to have benefits – from improving medical diagnoses to improving people’s consumer experiences. However, there are four aspects of AI that are very problematic.
1) It is likely to result in more economic uncertainty and dislocation for people, including employment issues and more need to change jobs to stay relevant.
2) AI will continue to erode people’s privacy as search becomes more thorough. China’s monitoring of populations illustrates what this could look like in authoritarian and Western countries, with greater facial recognition used to identify people and affect their privacy.
3) AI will likely continue to have biases that are negative toward minority populations, including groups we have not considered. Given that algorithms often have identifiable biases (e.g., favoring people who are white or male), they likely also have biases that are less well-recognized, such as biases that are negative toward people with disabilities, older people or other groups. These biases may ripple through society in unknown ways. Some groups are more likely to be monitored effectively.
4) AI is creating a world where reality can be manipulated in ways we do not appreciate. Fake videos, audio and similar media are likely to explode and create a world where ‘reality’ is hard to discern.
The relativistic political world will become more so, with people having evidence to support their own reality or multiple realities that mean no one knows what is the ‘truth.’”
Thomas Schneider, head of International Relations Service and vice-director at the Federal Office of Communications (OFCOM) in Switzerland, said, “AI will help mankind to be more efficient, live safer and healthier, and manage resources like energy, transport, etc., more efficiently.
At the same time, there are a number of risks that AI may be used by those in power to manipulate, control and dominate others. (We have seen this with every new technology: It can and will be used for good and bad.) Much will depend on how AI will be governed:
If we have an inclusive and bottom-up governance system of well-informed citizens, then AI will be used for improving our quality of life. If only a few people decide about how AI is used and what for, many others will be dependent on the decisions of these few and risk being manipulated by them.
The biggest danger in my view is that there will be a greater pressure on all members of our societies to live according to what ‘the system’ will tell us is ‘best for us’ to do and not to do, i.e., that we may lose the autonomy to decide ourselves how we want to live our lives, to choose diverse ways of doing things.
With more and more ‘recommendations,’ ‘rankings’ and competition through social pressure and control, we may risk a loss of individual fundamental freedoms (including but not limited to the right to a private life) that we have fought for in the last decades and centuries.”
Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, commented, “I am confident that in 2030 both arms of this query will be true:
AI-driven algorithms will substantially enhance our abilities as humans, and human autonomy and agency will be diminished.
Whether people will be better off than they are today is a separate question, and the answer depends to a substantial degree on how looming technological developments unfold.
On the one hand, if corporate entities retain unbridled control over how AI-driven algorithms interact with humans, people will be less well off, as the loss of autonomy and agency will be largely to the benefit of the corporations.
On the other hand, if ‘we the people’ demand that corporate entities deploy AI-algorithms in a manner that is sensitive to the issues of human autonomy and agency, then there is a real possibility for us to be better off – enhanced by the power of the AI-driven algorithm and yet not relegated to an impoverished seat at the decision-making table.
One could even parse this further, anticipating that certain decisions can be comfortably left in the hands of the AI-driven algorithm, with other decisions either falling back on humans or arrived at through a combination of AI-driven algorithmic input and human decision making.
If we approach these issues skillfully – and it will take quite a bit of collaborative work between ethicists and industry – we can have the best of both worlds. On the other hand, if we are lax in acting as watchdogs over industry we will be functionally rich and decisionally poor.”
João Pedro Taveira, embedded systems researcher and smart grids architect for INOV INESC Inovação in Portugal, wrote, “Basically, we will lose several degrees of freedom.
Are we ready for that? When we wake up to what is happening it might be too late to do anything about it. Artificial intelligence is a subject that must be studied philosophically, in open-minded, abstract and hypothetical ways.
Using this perspective, the issues to be addressed by humans include (but are not limited to) AI, feelings, values, motivation, free will, solidarity, love and hate. Yes, we will have serious problems.
Dropping the ‘artificial’ off AI, look at the concept of intelligence. As a computer-science person, I know that so-called ‘AI’ studies how an agent (a software program) increases its knowledge base using rules that are defined using pattern-recognition mechanisms.
No matter which mechanisms are used to generate this rule set, the result will always be behavioral profiling.
Right now, everybody uses and agrees to use a wide set of appliances, services and products without a full understanding of the information that is being shared with enterprises, companies and other parties. There’s a lack of needed regulation and audit mechanisms on who or what uses our information and how it is used and whether it is stored for future use.
Governments and others will try to access this information using these tools by decree, arguing national security or administration efficiency improvements. Enterprises and companies might argue that these tools offer improvement of quality of service, but there’s no guarantee about individuals’ privacy, anonymity, individual security, intractability and so on.”
David Bray, executive director of People-Centered Internet, commented, “Hope: Human-machine/AI collaborations extend the abilities of humans while we (humans) intentionally strive to preserve values of respect, dignity and agency of choice for individuals.
Machines bring together different groups of people and communities and help us work and live together by reflecting on our own biases and helping us come to understand the plurality of different perspectives of others.
Big concern: Human-machine/AI collaborations turn out to not benefit everyone, only a few, and result in a form of ‘indentured servitude’ or ‘neo-feudalism’ that is not people-centered and not uplifting of people.
Machines amplify existing confirmation biases and other human characteristics, resulting in sensationalist, emotion-ridden news and other communications that get page views and ad-clicks yet lack nuance of understanding, resulting in tribalism and a devolution of open societies and pluralities to the detriment of the global human condition.”
Bernie Hogan, senior research fellow at Oxford Internet Institute, wrote, “The current political and economic climate suggests that existing technology, especially machine learning, will be used to create better decisions for those in power while creating an ever more tedious morass of bureaucracy for the rest.
We see few examples of successful bottom-up technology, open source technology and hacktivism relative to the encroaching surveillance state and attention economy.”
Dan Buehrer, a retired professor of computer science formerly with the National Chung Cheng University in Taiwan, warned, “Statistics will be replaced by individualized models, thus allowing control of all individuals by totalitarian states and, eventually, by socially intelligent machines.”
Nathalie Marechal, doctoral candidate at the University of Southern California’s Annenberg School for Communication who researches the intersection of internet policy and human rights, said, “Absent rapid and decisive actions to rein in both government overreach and companies’ amoral quest for profit, technological developments – including AI – will bring about the infrastructure for total social control, threatening democracy and the right to individual self-determination.”
Katja Grace, contributor to the AI Impacts research project and a research associate with the Machine Intelligence Research Institute, said, “There is a substantial chance that AI will leave everyone worse off, perhaps radically so.
The chance is less than 50 percent, but the downside risk is so large that, in expectation, the world might be worse off because of AI.”
David A. Banks, an associate research analyst with the Social Science Research Council, said, “AI will be very useful to a small professional class but will be used to monitor and control everyone else.”
Luis German Rodriguez Leal, teacher and researcher at the Universidad Central de Venezuela and consultant on technology for development, said, “Humankind is not properly addressing the issue of educating people about the possibilities and risks of human-machine/AI collaboration.
One can observe today the growing problems of ill-intentioned manipulation of information and technological resources. There are already plenty of examples of how decision-making is biased using big data, machine learning, privacy violations and social networks (just to mention a few elements), and one can see that the common citizen is unaware of how much of his/her will does not belong to him/her.
This fact has a meaningful impact on our social, political, economic and private life. We are not doing enough to attend to this issue, and it is getting very late.”