
Mind Bytes: Solving Societal Challenges with Artificial Intelligence


By Francesca Lazzeri (@frlazzeri), Data Scientist at Microsoft

Artificial intelligence (AI) solutions are playing a growing role in our everyday lives and are being adopted broadly, in both private and public domains. While the notion of AI has been around for over sixty years, real-world AI scenarios and applications have only taken off in the last decade, thanks to three simultaneous developments: improved computing power, the capability to capture and store massive amounts of data, and faster algorithms.

AI solutions help determine the ads you see online, the movie you will watch with your family, and the routes you may take to get to work. Beyond the most popular apps, these systems are also being implemented in critical areas such as health care, immigration policy, finance, and the workplace. The design and implementation of these AI tools present deep societal challenges that will shape our present and near future.

To identify and contribute to the current dialog around the emerging societal challenges that AI brings, we attended Mind Bytes at the University of Chicago. Mind Bytes is an annual research computing symposium and exposition that showcases cutting-edge research and applications in the field of AI to more than 200 attendees. Some of the most interesting demos and posters presented were:

  • An Online GIS Platform to Support China’s National Park System establishment – Manyi Wang
  • 17 Years in Chicago Neighborhoods: Mapping Crime Trajectories in 801 Census Tracts – Liang CAI
  • Characterizing the Ultrastructural Determinants of Biophotonic Reflectivity in Cephalopod Skin: A Challenge for 3D Segmentation – Stephen Senft, Teodora Szasz, Hakizumwami B. Runesha, Roger T. Hanlon
  • Exploring Spatial Distribution of Risk Factors for Teen Pregnancy – Emily Orenstein and Iris Mire

The Mind Bytes panel on Solving Societal Challenges with Artificial Intelligence was a great opportunity for us to interact with students and many other AI experts from the field, and to discuss how we can work together to ensure that AI is developed responsibly, so that people will trust it and deploy it broadly, both to increase business and personal productivity and to help solve societal problems.

Specifically, the panel focused on three fundamental questions about the current and future role of AI:

  • In what areas do you see AI most successfully applied?
  • What is the major challenge that must be met before we can get the full benefit of AI?
  • What can researchers and students do now to build systems able to address those challenges?

The following sections aim to answer these questions in more detail and reflect on the latest academic and industry research. AI is already with us, and we are now faced with important choices about how it will be designed and applied. Most promisingly, the approaches observed at Mind Bytes demonstrate that there is growing interest in developing AI that is attuned to underlying issues of fairness and equality.

In what areas do you see AI most successfully applied?

Today’s AI allows faster and deeper progress in every field of human endeavor, and it is crucial to enabling the digital transformation that is at the heart of global economic development. Every aspect of a business or organization, from engaging with customers and transforming products and services to optimizing operations and empowering employees, can benefit from this digital transformation.

AI also has the potential to help society overcome some of its most challenging issues, such as reducing poverty, improving education, delivering healthcare, and eradicating rare diseases.

Another field where AI can have a significant positive impact is in serving the more than 1 billion people around the world living with disabilities. One example of how AI can make a difference is a Microsoft app called Seeing AI, which assists people who are blind or have low vision as they navigate daily life. Seeing AI was developed by a team that included a Microsoft engineer who lost his sight at the age of seven. This powerful app demonstrates the potential of AI to empower people with disabilities by collecting images from the user’s surroundings and describing what is happening around them.
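
Seeing AI’s internal implementation is not public, but the core idea, turning a camera image into a natural-language description, can be illustrated with an off-the-shelf image-captioning model. The short Python sketch below uses the Hugging Face transformers library; the specific model and sample image URL are illustrative assumptions and are not part of Seeing AI.

    # A minimal sketch of image captioning, the general idea behind apps like Seeing AI.
    # This is NOT Seeing AI's actual implementation; the model name and sample image
    # URL below are illustrative assumptions.
    from transformers import pipeline

    # Load a publicly available image-captioning model (downloads on first use).
    captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

    # Describe an image "captured" from the user's surroundings (here, a sample photo).
    result = captioner("http://images.cocodataset.org/val2017/000000039769.jpg")
    print(result[0]["generated_text"])  # e.g. "two cats laying on a couch ..."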

What is the major challenge that must be met before we can get the full benefit of AI?

As AI begins to augment human understanding and decision-making in fields like education, healthcare, transportation, agriculture, energy, and manufacturing, it heightens the need to address one of today’s most crucial societal challenges: advancing inclusion in our society.

The threat of bias rises when AI systems are applied to critical societal areas like healthcare and education. While all possible consequences of such biases are worrying, finding pragmatic solutions can be a very complex process. Biased AI can be the result of many different factors, for example the goals AI developers have in mind during development and whether the systems they build adequately represent different parts of the population.

Most importantly, AI solutions learn from training data. Training data can be imperfect or skewed, often drawing on incomplete samples that are poorly defined before use. Additionally, because of the labelling and feature engineering these systems require, human biases and cultural assumptions can also be transmitted through classification choices. All of these technical challenges can result in the exclusion of sub-populations from what AI is able to see and learn from.
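
As a concrete illustration, a simple first step is to audit how well each sub-population is represented in a training set before any model is fit. The Python sketch below uses pandas on a small stand-in dataset; the column names, values, and the representation threshold are assumptions for illustration only.

    # A minimal sketch of a training-data representation audit.
    # The stand-in data and the 30% threshold are illustrative assumptions.
    import pandas as pd

    # Stand-in for a real training set (e.g. loan applications).
    train = pd.DataFrame({
        "age":    [34, 51, 29, 42, 38, 47, 55, 31],
        "income": [48e3, 72e3, 39e3, 61e3, 55e3, 80e3, 67e3, 43e3],
        "gender": ["male", "male", "female", "male", "male", "male", "male", "female"],
    })

    # Share of each sub-population in the training sample.
    shares = train["gender"].value_counts(normalize=True)
    print(shares)

    # Flag groups that fall below an (assumed) minimum representation threshold.
    underrepresented = shares[shares < 0.30]
    if not underrepresented.empty:
        print("Warning: underrepresented groups:", list(underrepresented.index))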

Data is also very expensive, and data at scale is hard to collect and use. Most of the time, data scientists who want to train a model end up using easily available data, often crowd-sourced, scraped, or otherwise gathered from existing apps and websites. This type of data tends to advantage socioeconomically privileged populations, who have faster and easier access to connected devices and online services.

What can researchers and students do now to build systems able to address those challenges?

We believe that researchers and students must work together to ensure that AI-based technologies are designed and deployed in a way that earns the trust of the people who use them and whose data is collected to build them. It is vital for the future of our society to design AI to be reliable and to create solutions that reflect ethical values rooted in important and timeless principles.

For example, when AI systems provide guidance on medical treatment, loan applications or employment, they should make the same recommendations for everyone with similar symptoms, financial circumstances or professional qualifications. The design of any AI system starts with the choice of training data, which is the first place where unfairness can arise. Training data should sufficiently represent the world in which we live, or at least the part of the world where the AI system will operate.
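
One way to probe the “similar individuals, similar recommendations” requirement is a simple counterfactual test: take a record, flip only the sensitive attribute, and check whether the model’s prediction changes. The Python sketch below is a toy illustration under stated assumptions; the synthetic data, the logistic-regression model, and the column index of the sensitive attribute are all hypothetical and not tied to any particular production system.

    # A minimal sketch of a counterfactual consistency check: flip only the
    # sensitive attribute and see whether predictions change.
    # The toy data, model, and column index are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def consistency_rate(model, X, sensitive_col):
        """Fraction of rows whose prediction is unchanged when the binary (0/1)
        sensitive attribute in column `sensitive_col` is flipped."""
        X_flipped = X.copy()
        X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
        return float(np.mean(model.predict(X) == model.predict(X_flipped)))

    # Toy data: two ordinary features plus a binary sensitive attribute in column 2.
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.normal(size=200), rng.normal(size=200),
                         rng.integers(0, 2, size=200)])
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome partly tied to the sensitive column

    clf = LogisticRegression().fit(X, y)
    print(f"Predictions unchanged for {consistency_rate(clf, X, sensitive_col=2):.0%} of rows")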

Students should develop analytical techniques to detect and address potential unfairness. We believe the following three steps will support the creation and utilization of healthy AI solutions:

  • Systematic evaluation of the quality and fitness of the data and models used to train and operate AI-based products and services (a practical starting point is sketched after this list).
  • Involvement of domain experts in the design and operation of AI systems used to make substantial decisions about people.
  • A robust feedback mechanism so that users can easily report performance issues they encounter.
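
For the first of these steps, a practical starting point is to report a model’s quality metrics separately for each sub-population rather than as a single aggregate number. The Python sketch below assumes scikit-learn style labels and predictions; the toy arrays and group labels are illustrative assumptions only.

    # A minimal sketch of disaggregated (per-group) model evaluation.
    # The toy labels, predictions, and group labels are illustrative assumptions.
    import pandas as pd
    from sklearn.metrics import accuracy_score, recall_score

    def evaluate_by_group(y_true, y_pred, groups):
        """Report accuracy and recall separately for each sub-population."""
        frame = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": groups})
        rows = []
        for name, part in frame.groupby("group"):
            rows.append({
                "group": name,
                "n": len(part),
                "accuracy": accuracy_score(part["y_true"], part["y_pred"]),
                "recall": recall_score(part["y_true"], part["y_pred"], zero_division=0),
            })
        return pd.DataFrame(rows)

    # Toy example: metrics that look fine overall can hide a gap for one group.
    y_true = [1, 0, 1, 1, 0, 1, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(evaluate_by_group(y_true, y_pred, groups))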

Finally, we believe that when AI applications are used to suggest actions and decisions that affect people’s lives, it is important that the affected populations understand how those decisions were made, and that the AI developers who design and deploy those solutions are accountable for how they operate.

These standards are critical to addressing the societal impacts of AI and building trust as the technology becomes more and more a part of the products and services that people use at work and at home every day.

