Reading Reflections on “Questioning the AI: Informing Design Practices for Explainable AI User Experiences”

I first became aware of the concept of Explainable AI (XAI) last year, when I attended a lab discussion about user experience in AI. Inspired by that discussion, I read several articles about explainable AI. The paper “Questioning the AI: Informing Design Practices for Explainable AI User Experiences” gives me a deeper understanding of XAI and of what future work in this field looks like.

The paper focuses on insights into the design space of XAI, design practices in this space, and opportunities for future XAI work. I am impressed by the extended XAI question bank and by the perspectives gathered from the data scientist side rather than only the end-user side. The authors mention in the article that “We then asked informants to reflect on what questions users would ask about the AI and listed as many as they could. User questions were also added to MURAL by the researchers if they appeared in other parts of the discussion”. I really like this idea, because getting data scientists and UX experts involved produces a question list that is both professionally informed and formatted in a user-friendly way. The main problem of explainable AI is how to fill the gap between XAI algorithms and user needs for effective transparency. Because AI is, in my opinion, a technical concept, explaining AI inevitably involves technical or algorithmic explanations. But if we only gather questions from the user side, we may end up with biased questions because of users’ limited knowledge domains. People only know what they don’t know based on what they already know, so I think it is helpful to get data scientists and UX experts involved. However, this is also one of my concerns about the method. Data scientists and UX experts have their own limited knowledge domains as well, which means the questions they list are biased to some extent. People ask questions based on their current knowledge domains. If those knowledge domains broaden, then current XAI systems may again become unexplainable AI. To develop XAI, we must learn and predict how knowledge domains change over time.

I am not surprised that the paper mentions that “The most frequently asked questions were not regarding descriptive information of the algorithmic output, but at a high level, inquiring how to best utilize the output”. As an AI tool user, I have the same feeling. People always want to learn more from the output to benefit the next stage of use, because the current output is not the end of the journey. If a recommendation system recommends a financial product based on my purchase history and financial status, information about how to best utilize that recommendation is much more important to me than an explanation of how the recommendation system works and how it retrieved the information. Obviously, this involves more areas and more work beyond artificial intelligence itself, but I think this is a trend in the development of explainable AI.

Another perspective I take from the article is that no single XAI system can be perfect for everyone, because different people have different knowledge domains, different business goals, and different understandings. So it is both unfair and hard to have only one evaluation criterion for XAI. How to evaluate an explainable AI system is also an important subject to develop alongside explainable AI itself. We want to fill the gap between XAI algorithms and user needs for effective transparency, but if users do not know how to evaluate the system or how to compare its outputs, then the system is incomplete.