August 19, 2021

Designing Hybrid Intelligence in Your Organisation


HIx (Hybrid Intelligence Xperience Design) is how we define our approach to developing intelligent systems that combine human and artificial intelligence. Many people think building AI solutions is solely a technical challenge, with data scientists sitting in front of their screens and crunching huge amounts of data. But that is only half the story, and one of the main reasons why many AI applications fail in corporate settings. Designing hybrid intelligence systems is all about the user and her interactions with an intelligent agent. That's why we think it is time for a new era of developing intelligent software applications, where UX design meets AI software engineering and business expertise.

Developing AI systems that humans need to collaborate with is challenging. From day one, system engineers and designers have to understand not only the technical and the user side but also gain a deep understanding of the problem domain in which the system will be applied. That requires a novel user- and AI-centric approach to ideation when designing new solutions.

The HIx Canvas

To help companies that are interested in developing hybrid intelligence systems for their organizations, as well as software engineers and UX designers who are interested in this topic, we developed the HIx Design Canvas to share our approach to developing intelligent systems at vencortex. This is a highly iterative and experimental process that requires you to design solutions, test them both technically and with users, and learn from the process, much like the Lean Startup approach teaches you to develop your MVP.

The canvas is based on previous research, which we recently published at a leading computer science conference (see the references below). The HIx Canvas is a teaser for our upcoming book and code repository release on hybrid intelligence systems.

Any questions or feedback? Feel free to contact us.

HIx Canvas

Business Task

Developing hybrid intelligence systems makes it possible to create superior results through collaboration between humans and machines. The central component that drives design decisions for hybrid intelligence systems is the job that humans and machines solve collaboratively, and how the work is distributed between the two. Defining those jobs requires a combination of problem domain understanding, UX design, and machine learning expertise.

Task

The task to be solved is the first dimension that has to be defined when developing hybrid intelligence systems. This task can fall into one of five generic categories: recognition, prediction, reasoning, action, and alignment. Recognition covers tasks that recognize, for instance, objects, images, or natural language. On an application level, such tasks are used for autonomous driving or smart assistants such as Alexa, Siri, or Duplex. Prediction tasks aim at predicting future events based on previous data, such as stock prices or market dynamics. The third type of task, reasoning, focuses on understanding data, for instance by inductively building (mental) models of a certain phenomenon, and therefore makes it possible to solve complex problems with a small amount of data. Action tasks are those that require an agent to conduct a certain kind of action, such as autonomous movement. Finally, alignment can be seen as a way to ensure that an AI agent acts in accordance with human desires, norms, and values, and is a central concern of AI safety.

Goals

Although the term goal might be misleading from an epistemological perspective, humans and an AI system may have a common "goal", such as solving a problem through the combination of the knowledge and abilities of both. An example of such common goals is recommender systems (e.g. Netflix), which learn a user's decision model to offer suggestions. In other contexts, the agents' goals can also be adversarial, for instance in settings where AIs try to beat humans in games, such as IBM's Watson in Jeopardy! or DeepMind's AlphaZero and AlphaStar. In many other cases, the goals of the human and the AI may also be independent, for example when humans train image classifiers without being involved in the end solution.

Data Representation

The shared data representation is how data is presented to both the human and the machine for executing their tasks. This is a non-trivial design decision: image data (like a picture of a cat), for instance, is easily presented to humans but needs to be transformed into numerical data for an AI. Conversely, huge amounts of unstructured data can easily be processed by an algorithm but need to be visualized in some way to make them understandable and accessible for users. The data can be represented at different levels of granularity and abstraction to create a shared understanding between humans and machines. Features describe phenomena along different kinds of dimensions, like the height and weight of a human being. Instances are examples of phenomena that are specified by features. Concepts, on the other hand, are multiple instances that belong to one common theme, e.g. pictures of different humans. Schemas, finally, illustrate relations between different concepts.
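To make this concrete, here is a minimal Python sketch (using only NumPy, with a randomly generated stand-in for a real photo) of the same instance in its two representations: the pixel grid an image viewer would render for a human, and the flattened feature vector a learning algorithm would consume.

```python
import numpy as np

# A human sees a picture; the machine sees a tensor of numbers.
# We fake a small 32x32 RGB "cat photo" with random pixel values.
image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

# Instance: one concrete example of the concept "cat".
# Features: the numerical dimensions describing it. A simple
# representation flattens the pixels into one feature vector.
feature_vector = image.reshape(-1).astype(np.float32) / 255.0  # normalize to [0, 1]

print(image.shape)           # (32, 32, 3) -- rendered as an image for the human
print(feature_vector.shape)  # (3072,)     -- consumed by the learning algorithm
```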

Timing

Timing defines the point in the lifecycle of a system where the collaboration happens. For instance, human feature engineering allows the integration of domain knowledge into machine learning models. While more recent advances make it possible to learn features fully automatically (i.e. machine only) through deep learning, human input can still be combined with it for creating and enlarging features, as in the case of artist identification on images or quality classification of Wikipedia articles. In the next step of the lifecycle, parameter tuning can be applied to optimize models. Here, machine learning experts typically use their deep understanding of statistical models to tune hyper-parameters or select models. Such human-only parameter tuning can be augmented with approaches such as AutoML or neural architecture search, which automate the design of machine learning models and thus make it much more accessible for non-experts. Moreover, human input is crucial for training machine learning models in many domains. Large datasets such as ImageNet or the lung cancer dataset LUNA16 rely on human annotations, recommender systems heavily rely on human usage behavior to adapt to specific preferences, and robotic applications are trained by human examples.
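As an illustration of human-in-the-loop parameter tuning, here is a hedged scikit-learn sketch: a human expert encodes domain knowledge by choosing a plausible hyper-parameter search space, and the grid search then automates the selection itself, much as AutoML tools do at larger scale.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# The human contribution: a search space chosen from experience.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.0001]}

# The machine contribution: exhaustively evaluating the space.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```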

Learning Paradigm

Augmentation

The augmentation of human intelligence focuses on applications that enable humans to solve tasks through the predictions of an algorithm, as in financial forecasting or in solving complex problems with expert augmentation systems. Conversely, a lot of applications in machine learning so far focus on leveraging human input for training, thereby augmenting machines. Finally, more recent work has identified great potential in augmenting both simultaneously through hybrid augmentation. AlphaGo, for example, started by learning from human game moves (i.e. machine augmentation) and eventually provided hybrid augmentation by inventing creative moves that taught even mature players novel strategies.

Machine Learning

The machine learning paradigm that is applied in a hybrid intelligence system heavily influences the overall system design and should, therefore, be understood by the design team. Frequently, several paradigms are combined at the same time. In supervised learning, the goal is to learn a function that maps the input data x to a certain output y, given a labeled set of input-output pairs. In unsupervised learning, such an output y does not exist, and the learner tries to identify patterns in the input data x. Further forms of learning, such as reinforcement learning or semi-supervised learning, can be subsumed under those two paradigms. Semi-supervised learning describes a combination of both paradigms, which uses a small set of labeled and a large set of unlabeled data to solve a certain task. Finally, in reinforcement learning, an agent interacts with an environment, learning to solve a problem by receiving rewards and punishments for its actions.
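The following toy sketch (Python with scikit-learn, synthetic data) contrasts the two base paradigms: a supervised learner fits labeled input-output pairs, while an unsupervised learner looks for structure in the inputs alone.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # input data x
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels y for the supervised case

# Supervised: learn a function mapping inputs x to known outputs y.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: no labels y; the learner looks for patterns in x alone.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5])
```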

Human Learning

Moreover, hybrid intelligence systems and proper HIx design offer great potential for human learning as well. Humans have a mental model of their environment, which gets updated through events. This update is done by finding an explanation for the event. Human learning can, therefore, be achieved from experience and from comparison with previous experiences, descriptions, and explanations.

User Experience

User experience defines how the human user interacts with the intelligent agent while using the system and therefore requires a decent understanding of user psychology and user preferences.

Machine Teaching

This design decision defines how humans provide input. Humans can demonstrate actions that the machine learns to imitate. Alternatively, humans can annotate data for training a model, for instance through crowdsourcing, which is called labeling. Human intelligence can also be used to actively identify misspecifications of the learner and debug the model, which we define as troubleshooting. Finally, human teaching can take the form of verification, whereby humans verify or falsify machine output.
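As a minimal sketch of verification-style teaching (Python with scikit-learn; the human_verify function is a hypothetical stand-in for a real review interface): the machine proposes labels, a human confirms or corrects them, and the verified labels extend the training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for a human reviewer who verifies or corrects
# a proposed label. In a real system this would be a UI interaction.
def human_verify(x, proposed_label):
    return proposed_label  # this toy reviewer accepts every proposal

rng = np.random.default_rng(1)
X_seed = rng.normal(size=(50, 3))
y_seed = (X_seed[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_seed, y_seed)

# Verification loop: machine proposes, human verifies, model retrains.
X_new = rng.normal(size=(20, 3))
verified = [human_verify(x, p) for x, p in zip(X_new, model.predict(X_new))]
model.fit(np.vstack([X_seed, X_new]), np.concatenate([y_seed, verified]))
```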

Teaching Interaction

The input provided through human teaching can be either explicit or implicit. While explicit teaching leverages active input from the user, for instance in labeling tasks such as image or text annotation, implicit teaching learns from observing the actions of the user and thus adapts to their demands. For instance, Microsoft uses contextual bandit algorithms to suggest content to users, using the users' actions as the implicit teaching interaction.
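We cannot show Microsoft's actual system, but a stripped-down epsilon-greedy bandit illustrates the idea of implicit teaching (the simulated_user_click function is a hypothetical stand-in for real click behavior; a contextual bandit would additionally condition on user features):

```python
import random

# Content suggestions are the "arms"; a click (1) or skip (0) is the
# implicit teaching signal -- the user never labels anything explicitly.
arms = ["article_a", "article_b", "article_c"]
counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}
epsilon = 0.1

def simulated_user_click(arm):
    # Hypothetical user who secretly prefers article_b.
    rates = {"article_a": 0.2, "article_b": 0.6, "article_c": 0.3}
    return 1 if random.random() < rates[arm] else 0

for _ in range(1000):
    if random.random() < epsilon:
        arm = random.choice(arms)        # explore
    else:
        arm = max(arms, key=values.get)  # exploit the current estimate
    reward = simulated_user_click(arm)   # implicit feedback
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(max(arms, key=values.get))  # the learned preference
```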

Expertise requirements

Hybrid intelligence systems can place certain requirements on the expertise of the humans that provide input. To date, most research and practical applications have focused on human input from ML experts, thus requiring deep expertise in the field of AI. Alternatively, end users can provide the system with input, for instance for product recommendations in e-commerce, and input can also come from non-experts accessed through crowd work platforms. More recent endeavors, however, focus on the integration of domain experts into hybrid intelligence architectures, leveraging their profound understanding of the semantics of a problem domain to teach a machine while not requiring any ML expertise.

Incentives

Humans need to be incentivized to provide input in hybrid intelligence systems. Incentives can be monetary rewards, as in the case of crowd work on platforms (e.g. Amazon Mechanical Turk) or payments as part of organizational job descriptions, or intrinsic rewards such as intellectual exchange in citizen science, fun in games with a purpose, or learning. Another incentive for human input is customization, which increases individualized service quality for users that provide a larger amount of input to the learner. The most interesting incentive structure we found is skin-in-the-game, which lets the user participate in the success of the end product. We at vencortex think this is the most promising way to incentivize humans to collaborate with AI.

Active Learning Design

Active Learning Design defines how the intelligent agent interacts with the user and therefore requires a decent understanding of data science and software engineering.

Query Strategy

Query strategy defines the approach of how and when an AI asks for human input. Offline query strategies require the user to finish her task before her actions are applied as input to the AI. In a typical labeling task, the human would first go through all the data and label each instance; afterward, the labeled data is fed to a machine learning algorithm to train a model. In contrast, online query strategies let the human complete subtasks whose results are directly fed to an algorithm, so that teaching and learning happen almost simultaneously. Another possibility is the use of active learning query strategies, in which the human is queried by the machine whenever it requires more input to make an accurate prediction.
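A minimal active learning sketch (Python with scikit-learn, synthetic data) shows uncertainty sampling, one common query strategy: the machine repeatedly asks the human to label the instance it is least confident about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool[:, 0] ** 2 + X_pool[:, 1] > 1).astype(int)  # hidden ground truth

# Seed set: a few human-labeled points covering both classes.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])

for _ in range(20):
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)   # closest to 0.5 = least confident
    uncertainty[labeled] = np.inf       # never re-query labeled points
    query = int(np.argmin(uncertainty)) # the machine asks the human for this one
    labeled.append(query)               # the human supplies y_pool[query]
    model.fit(X_pool[labeled], y_pool[labeled])
```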

AI Feedback

AI feedback describes how the intelligent agent interacts with the user to provide her with input. Users can get direct suggestions from the machine, which makes explicit recommendations to the user on how to act. For instance, recommender systems such as Netflix or Spotify provide such suggestions to users. Predictions as a form of AI feedback can support users, for instance, in detecting lies or identifying cancer nodes on CT scans. The third form of AI feedback is structuring data: machines compare data points and put them in order, for instance to prioritize items or to organize data according to identified patterns. Finally, another form of feedback is optimization, where the intelligent agent enhances and nudges users, for instance toward making more consistent decisions, by optimizing their strategy.
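As a toy example of structuring feedback (synthetic data; the "ticket" framing is purely illustrative), an agent can score items by predicted urgency and hand the user an ordered work queue:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X_hist = rng.normal(size=(100, 4))  # past tickets (hypothetical features)
urgency = X_hist @ np.array([2.0, 0.5, 0.0, 1.0]) + rng.normal(scale=0.1, size=100)
model = LinearRegression().fit(X_hist, urgency)

# Structuring feedback: today's tickets, ordered most urgent first.
X_today = rng.normal(size=(8, 4))
ranking = np.argsort(-model.predict(X_today))
print(ranking)
```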

Interpretability

For the AI-human interaction in hybrid intelligence systems, interpretability is crucial to prevent biases (e.g. racism), achieve reliability and robustness, ensure the causality of the learning, debug the learner if necessary, and create trust, especially in the context of AI safety. We highly recommend this book for gaining a deeper understanding of the topic. Interpretability can be achieved through algorithm transparency, which opens the black box of the algorithm itself; global model interpretability, which focuses on the general interpretability of a machine learning model; and local prediction interpretability, which tries to make more complex models interpretable for a single prediction. Finally, case-based interpretability allows an algorithm to show the user archetypes of cases that lead to a certain outcome, allowing the user to apply analogical reasoning to such cases.
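As one concrete, hedged example of global model interpretability, permutation importance from scikit-learn measures how much a model's score drops when each feature is shuffled; large drops indicate features the model's predictions depend on.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the score drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
print(top, result.importances_mean[top])  # the five most influential features
```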

References

Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019). The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Hawaii International Conference on System Sciences (HICSS), Hawaii, USA.

Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637-643.