Microsoft has released the Human-AI eXperience (HAX) Toolkit, a set of practical tools that helps teams strategically plan and responsibly apply best practices when building artificial intelligence technologies that interact with people.
The toolkit comes as AI-infused products and services, such as virtual assistants, route planners, autocomplete, recommendations and reminders, are becoming increasingly popular and useful for many people. But these applications have the potential to do things that aren’t helpful, like misunderstand a voice command or misinterpret an image. In some cases, AI systems can demonstrate disruptive behaviors or even cause harm.
Such negative outcomes are one reason AI developers have pushed for responsible AI guidance. Efforts to support responsible practices have traditionally focused on improving algorithms and models, but there is a critical need to also make responsible AI resources accessible to the practitioners who design the applications people use. The HAX Toolkit provides practical tools that translate human-AI interaction knowledge into actionable guidance.
“Human-centeredness is really all about ensuring that what we build and how we build it begins and ends with people in mind,” said Saleema Amershi, senior principal research manager at Microsoft Research. “We started the HAX Toolkit to help AI creators take this approach when building AI technologies.”
The toolkit currently consists of four components designed to assist teams throughout the user design process, from planning to testing:
- The Guidelines for Human-AI Interaction provide best practices for how AI applications should interact with people.
- The HAX Workbook helps teams prioritize the guidelines and plan the time and resources needed to address high-priority items.
- The HAX Design Patterns offer flexible solutions for addressing common problems that come up when designing human-AI systems.
- The HAX Design Library is a searchable database of the design patterns and implementation examples.
- The HAX Playbook helps teams identify and plan for errors they might not otherwise anticipate, such as a transcription error or a false positive.

Humans collaborating to build better AI
The idea