
6 Challenges – Identified by Scientists – That Humans Face With Artificial Intelligence

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and act like humans. AI technologies enable computers to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

A study led by a professor from the University of Central Florida has identified six challenges that must be overcome to improve our relationship with AI and ensure its ethical and fair use.

The UCF professor and 26 other scientists published the study to highlight the obstacles that humanity must address to ensure that AI is dependable, secure, trustworthy, and aligned with human values.

The study was published in the International Journal of Human-Computer Interaction.

Ozlem Garibay, an assistant professor in UCF’s Department of Industrial Engineering and Management Systems, served as the lead researcher for the study. According to Garibay, while AI technology has become increasingly prevalent in various aspects of our lives, it has also introduced a multitude of challenges that need to be thoroughly examined.

For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood, says Garibay, who works on AI applications in materials and drug design and discovery and on how AI affects social systems.

The six challenges Garibay and the team of researchers identified are:

  • Challenge 1, Human Well-Being: AI should be able to identify opportunities where it can benefit human well-being, and it should support the user’s well-being when people interact with it.
  • Challenge 2, Responsible: Responsible AI refers to the concept of prioritizing human and societal well-being across the AI lifecycle. This ensures that the potential benefits of AI are leveraged in a manner that aligns with human values and priorities, while also mitigating the risk of unintended consequences or ethical breaches.
  • Challenge 3, Privacy: The collection, use, and dissemination of data in AI systems should be carefully considered to protect individuals’ privacy and prevent harmful use against individuals or groups.
  • Challenge 4, Design: Human-centered design principles for AI systems should use a framework that can inform practitioners. This framework would distinguish between AI that poses extremely low risk and needs no special measures, AI that poses extremely high risk, and AI that should not be allowed at all.
  • Challenge 5, Governance and Oversight: A governance framework that considers the entire AI lifecycle from conception to development to deployment is needed.
  • Challenge 6, Human-AI Interaction: To foster an ethical and equitable relationship between humans and AI systems, interactions must be grounded in respect for human cognitive capacities. Specifically, humans must maintain complete control over, and responsibility for, the behavior and outcomes of AI systems.

The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.

“These challenges call for the creation of human-centered artificial intelligence technologies that prioritize ethicality, fairness, and the enhancement of human well-being,” Garibay says. “The challenges urge the adoption of a human-centered approach that includes responsible design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.”

Overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritize and benefit humanity, she says.

Reference: “Six Human-Centered Artificial Intelligence Grand Challenges” by Ozlem Ozmen Garibay, Brent Winslow, Salvatore Andolina, Margherita Antona, Anja Bodenschatz, Constantinos Coursaris, Gregory Falco, Stephen M. Fiore, Ivan Garibay, Keri Grieman, John C. Havens, Marina Jirotka, Hernisa Kacorri, Waldemar Karwowski, Joe Kider, Joseph Konstan, Sean Koon, Monica Lopez-Gonzalez, Iliana Maifeld-Carucci, Sean McGregor, Gavriel Salvendy, Ben Shneiderman, Constantine Stephanidis, Christina Strobel, Carolyn Ten Holter and Wei Xu, 2 January 2023, International Journal of Human-Computer Interaction.
DOI: 10.1080/10447318.2022.2153320

The group of 26 experts includes National Academy of Engineering members and researchers from North America, Europe, and Asia who have broad experiences across academia, industry, and government. The group also has diverse educational backgrounds in areas ranging from computer science and engineering to psychology and medicine.

Their work will also be featured as a chapter in the book Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.

Source: SciTechDaily