iSchool Professor Lee McKnight Contributes to Pew Research Report on Future of Artificial Intelligence
School of Information Studies (iSchool) Associate Professor Lee McKnight has contributed his opinions on the changes coming to the artificial intelligence (AI) field in a recently published Pew Research Center report titled “Artificial Intelligence and the Future of Humans.”
Published last week as a joint effort by the Pew Research Center and the Imagining the Internet Center at Elon University, the report is the result of a large-scale canvassing of technology experts, scholars, corporate and public practitioners and other leaders, where they were prompted to share their answer to the following query:
“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked AI in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.
“Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030. Please consider giving an example of how a typical human-machine interaction will look and feel in a specific area, for instance, in the workplace, in family life, in a health care setting or in a learning environment. Why? What is your hope or fear? What actions might be taken to assure the best future?”
In addressing the prompt, McKnight said, “There will be good, bad and ugly outcomes from human-machine interaction in artificially intelligent systems, services and enterprises.… Poorly designed artificially intelligent services and enterprises will have unintended societal consequences, hopefully not catastrophic, but sure to damage people and infrastructure. Even more regrettably, defending ourselves against evil—or to be polite, bad AI systems turned ugly by humans, or other machines—must become a priority for societies well before 2030, given the clear and present danger. How can I be sure? What are bots and malware doing every day, today? Is there a reason to think ‘evil-doers’ will be less motivated in the future? No. So my fear is that the hopefully sunny future of AI, which in aggregate we may assume will be a net positive for all of us, will be marred by—many—unfortunate events.”
McKnight’s concerns about the pitfalls of AI progression address some of the key themes noted in the report, which sought the participation of 979 technology leaders. “The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities,” according to the report’s introduction. About 63 percent of report respondents believed that humans will be mostly better off as a result of the growing impact of AI, while about 37 percent said people will not be better off.
Some of the concerns noted by report respondents include data abuse, job loss and dependence lock-in, while, on the positive side, experts see opportunities for new work and life efficiencies, health care improvements and advances in education.