August 19, 2021

Hybrid Intelligence

Introduction 

Research has a long history of debating which is superior at predicting certain outcomes: statistical methods or the human brain. This debate has resurfaced again and again due to remarkable technological advances in the field of artificial intelligence (AI), such as solving tasks like object and speech recognition, achieving significant improvements in accuracy through deep-learning algorithms (Goodfellow et al. 2016), or combining various methods of computational intelligence, such as fuzzy logic, genetic algorithms, and case-based reasoning (Medsker 2012). One of the implicit promises underlying these advancements is that machines will one day be able to perform complex tasks or may even supersede humans in performing them. This fuels new debates about when machines will ultimately replace humans (McAfee and Brynjolfsson 2017). While previous research shows that AI performs well in some clearly defined tasks such as playing chess, playing Go, or identifying objects in images, artificial general intelligence (AGI), which would be able to solve multiple tasks at the same time, is doubted to be achievable in the near future (e.g. Russell and Norvig 2016). Moreover, AI is rarely used to solve complex business problems in organizational contexts, and applications of AI that solve complex problems remain mainly in laboratory settings rather than in the “wild.”

Since the road towards AGI is still a long one, I argue that the most likely paradigm for the division of labour between humans and machines in the coming years, or probably decades, is Hybrid Intelligence. This concept aims at using the complementary strengths of human intelligence and AI so that together they behave more intelligently than either could separately (e.g. Kamar 2016).

Conceptual Integration of Hybrid Intelligence 

Before focusing on Hybrid Intelligence in detail, I first want to delineate the concept from related but distinct forms of intelligence in this context.

Intelligence

Various definitions and dimensions (e.g. social, logical, spatial, musical, etc.) of the term intelligence exist across multiple research disciplines, such as psychology, cognitive science, neuroscience, human behavior, education, and computer science. For my research, I use an inclusive and generic definition of general intelligence: the ability to accomplish complex goals, learn, reason, and adaptively perform effective actions within an environment. This can be subsumed as the capacity to both acquire and apply knowledge (Gottfredson 1997). While intelligence is most commonly used in the context of humans (and more recently intelligent artificial agents), it also applies to the intelligent, goal-directed behavior of animals.

Human Intelligence

The sub-dimension of intelligence that relates to the human species defines the mental capabilities of human beings. On the most holistic level, it covers the capacity to learn, reason, and adaptively perform effective actions within an environment, based on existing knowledge. This allows humans to adapt to changing environments and act towards achieving their goals.

While one assumption of intelligence is the existence of a so-called “g-factor”, which provides a measure of general intelligence (Brand 1996), other research in the field of cognitive science explores intelligence in relation to the evolutionary experience of individuals. This means that, rather than having a general form of intelligence, humans become much more effective at solving problems that occur in the context of familiar situations (Wechsler 1964).

Another view on intelligence supposes that general human intelligence can be subdivided into specialized intelligence components, such as linguistic, logical-mathematical, musical, kinesthetics, spatial, social, or existential intelligence (Gardner 2000). 

Synthesizing those perspectives on human intelligence, Sternberg (1985) proposes three distinctive dimensions of intelligence: componential, contextual, and experiential. The componential dimension of intelligence refers to an individual's (general) skill set. Experiential intelligence refers to one's ability to learn and adapt through evolutionary experience. Finally, contextual intelligence defines the capacity of the mind to inductively understand and act in specific situations, as well as the ability to make choices and modify those contexts.

Collective Intelligence

The second related concept is collective intelligence. According to Malone and Bernstein (2015:3), collective intelligence refers to “[…] groups of individuals acting collectively in ways that seem intelligent […]”. Even though the term “individuals” leaves room for interpretation, researchers in this domain usually refer to the concept of the wisdom of crowds and, thus, a joint intelligence of individual human agents (Woolley et al. 2010). This concept describes how, under certain conditions, a group of average people can outperform any individual of the group, or even a single expert (Leimeister 2010). Other well-known examples of collective intelligence are phenomena found in biology, where, for example, a school of fish swerves to increase protection against predators (Berdahl et al. 2013). These examples show that collective intelligence typically refers to large groups of homogeneous individuals (i.e. humans or animals), while Hybrid Intelligence combines the complementary intelligence of heterogeneous agents (i.e. humans and AI).

Artificial Intelligence

The subfield of intelligence that relates to machines is called artificial intelligence (AI). With this term, I mean systems that perform “[…] activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]” (Bellman 1978:3). It covers the idea of creating machines that can accomplish complex goals. This includes facets such as natural language processing, perceiving objects, storing knowledge, applying this knowledge to solve problems, and machine learning to adapt to new circumstances and act in its environment (Russell and Norvig 2016). Other definitions in this domain focus on AI as the field of research about the “[…] synthesis and analysis of computational agents that act intelligently […]” (Poole and Mackworth 2017:3). Moreover, AI can be defined as having the general goal of replicating the human mind by defining it as “[…] the art of creating machines that perform functions that require intelligence when performed by people […]” (Kurzweil 1990:117). The performance of AI in achieving human-level intelligence can then be measured by, for instance, the Turing test, which requires an AI program to simulate a human in a text-based conversation. Such capabilities can be seen as a sufficient but not necessary criterion for artificial general intelligence (Searle 1980).

Synthesizing those various definitions in the field, AI includes elements such as the human-level ability to solve domain-independent problems, the capability to combine highly task-specialized and more generalized intelligence, and the ability to learn from its environment and from interaction with other intelligent systems or human teachers, which allows intelligent agents to improve their problem solving through experience.

To create such a kind of AI in intelligent agents, various approaches exist that are more or less associated with the understanding and replication of intelligence. For instance, the field of cognitive computing “[…] aims to develop a coherent, unified, universal mechanism inspired by the mind’s capabilities. […] We seek to implement a unified computational theory of the mind […]” (Modha et al. 2011:60). To this end, interdisciplinary researcher teams rely on reverse-engineering human learning to create machines that “[…] learn and think like people […]” (Lake et al. 2017:1).

The Complementary Benefits of Humans and AI 

The general rationale behind the idea of Hybrid Intelligence is that humans and computers have complementary capabilities that can be combined to augment each other. The tasks that can be easily done by artificial and by human intelligence are quite divergent. This is known as Moravec's paradox (1988:15), which describes the observation that

"[…] it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility […]."

This is exemplified by human common sense, which remains challenging to achieve in AI (Lake et al. 2017).

This can be explained by the separation of two distinctive types of cognitive procedure (Kahneman 2011). The first, system 1, is fast, automatic, affective, emotional, stereotypic, and subconscious, and it capitalizes on what one might call human intuition. The second, system 2 reasoning, is rather effortful, logical, and conscious, and ideally follows strict rational rules of probability theory. In the context of the complementary capabilities of humans and AI, humans have proved superior in various settings that require system 1 thinking. Humans are flexible, creative, empathic, and can adapt to various settings. This allows, for instance, human domain experts to deal with so-called “broken-leg” predictions that deviate from the currently known probability distribution. However, they are restricted by bounded rationality, which prevents them from aggregating information perfectly and drawing conclusions from it. Machines, on the other hand, are particularly good at solving repetitive tasks that require fast processing of huge amounts of data, recognizing complex patterns, or weighing multiple factors following consistent rules of probability theory. A long-standing tradition of research shows that even very simple actuarial models outperform human experts in making predictions under uncertainty (Meehl 1954).
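To make the actuarial idea concrete, here is a minimal sketch of an equal-weight linear model of the kind Meehl studied. The feature names, weights, and values are hypothetical and purely illustrative; the point is only that such a model applies the same aggregation rule to every case, which human judges often fail to do.

```python
# Equal-weight linear ("actuarial") scoring of a few standardized cues.
# Feature names, weights, and values are hypothetical illustrations.

def actuarial_score(cues: dict[str, float]) -> float:
    """Combine standardized cues with fixed, equal weights."""
    weights = {"test_score": 1.0, "past_performance": 1.0, "interview_rating": 1.0}
    return sum(weights[k] * cues[k] for k in weights)

# The model applies exactly the same rule to every case it scores:
candidate_a = {"test_score": 0.8, "past_performance": 1.2, "interview_rating": -0.3}
candidate_b = {"test_score": 0.1, "past_performance": 0.4, "interview_rating": 1.5}
print(actuarial_score(candidate_a))  # 1.7
print(actuarial_score(candidate_b))  # 2.0
```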

Complementary Strengths of Humans and Machines

These complementary strengths of humans and machines have led to applications in both directions: AI in the loop of human intelligence, which improves human decisions by providing predictions, and humans in the loop of AI, which is frequently applied to train machine learning models.

AI in the Loop of Human Intelligence

Currently, in typical business contexts, AI is applied in two areas. First, it is used to automate tasks that can be solved by machines alone. While this is often associated with the fear of machines taking over jobs and making humans obsolete, it might also free humans from tasks they do not want to do. Second, AI is applied to provide humans with decision support by offering predictions. This ranges from structuring data and making forecasts, for example in financial markets, to predicting the best set of hyperparameters for training new machine learning models (e.g. AutoML). As humans often act in a non-Bayesian way, violating probabilistic rules and thus making inconsistent decisions, AI has proven to be a valuable tool for helping humans make better decisions (Agrawal et al. 2018). The goal in this context is to improve both the effectiveness and the efficiency of human decisions.

In settings where AI provides the human with input that is then evaluated to reach a decision, humans and machines act as teammates. For instance, AI can help human physicians by processing patient data (e.g. CT scans) to make predictions about diseases such as cancer, empowering the doctor to learn from the added guidance. In this context, the Hybrid Intelligence approach allows human experts to use the predictive power of AI while drawing on their own intuition and empathy to make a choice based on the AI's predictions.
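One simple way to picture this teammate pattern is the routing sketch below. It is a minimal, hypothetical illustration (the class names, threshold, and review function are my own assumptions, not a clinical system): the model's prediction and confidence are always passed to the expert, who makes the final call.

```python
# A hypothetical AI-in-the-loop routing pattern: the model's prediction
# is advisory and always passes through the human expert for the final call.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str         # the model's suggested diagnosis
    confidence: float  # the model's confidence in [0, 1]

def support_decision(pred: Prediction, review: Callable[[Prediction], str]) -> str:
    # The AI never decides alone; its output is routed to the expert.
    return review(pred)

def physician_review(pred: Prediction) -> str:
    # Stand-in for the expert's judgment: accept confident suggestions,
    # otherwise fall back on human-driven next steps.
    return pred.label if pred.confidence > 0.9 else "order further tests"

print(support_decision(Prediction("benign", 0.97), physician_review))  # benign
```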

The Human in the Loop of AI

On the other hand, human intelligence also plays a crucial role in the loop of machine learning and AI. Humans aid in several parts of the machine learning process to support AI in tasks that it cannot (yet) solve alone. Here, humans are most commonly involved in the generation of algorithms (e.g. hyperparameter setting and tuning), in training or debugging models, and in making sense of unsupervised approaches such as data clustering.

AI systems can both help and learn from human input. This approach makes it possible to integrate human domain knowledge into the AI to design, complement, and evaluate its capabilities (Mnih et al. 2015). Many of these applications are based on supervised and interactive learning approaches and need an enormous amount of labelled data provided by humans (Amershi et al. 2014). The basic rationale behind this approach is that humans act as teachers who train an AI. The same machine teaching approach can also be found in the area of reinforcement learning, which has used, for instance, human game play as input to initially train robots. In this context, human intelligence functions as a teacher, augmenting the AI. Hybrid Intelligence makes it possible to distribute computational tasks to human intelligence on demand (e.g. through crowdsourcing) to minimize the shortcomings of current AI systems. Such human-in-the-loop approaches are particularly valuable when only little data is available, when pre-trained models need to be adapted for specific domains, when the stakes are high, when there is a high level of class imbalance, or in contexts where human annotations are already used.
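As one concrete instance of such a human-in-the-loop setup, the sketch below shows a single round of uncertainty-based active learning: the model queries a human teacher only for the items it is least sure about. All item names, probabilities, and the stand-in teacher are hypothetical.

```python
# One round of uncertainty-based active learning: query the human teacher
# only for the items the model is least sure about. All names are hypothetical.

def uncertainty(prob: float) -> float:
    """0.0 for a confident prediction, 1.0 at prob = 0.5 (maximal doubt)."""
    return 1.0 - abs(prob - 0.5) * 2

def active_learning_round(model_probs: dict[str, float], ask_human, budget: int) -> dict[str, str]:
    # Rank unlabelled items by model uncertainty, query the top `budget` items.
    queries = sorted(model_probs, key=lambda item: uncertainty(model_probs[item]), reverse=True)
    return {item: ask_human(item) for item in queries[:budget]}

# A stub stands in for the human annotator here:
new_labels = active_learning_round(
    {"img_01": 0.51, "img_02": 0.98, "img_03": 0.45},
    ask_human=lambda item: "cat",
    budget=2,
)
print(new_labels)  # labels for the two least certain items: img_01 and img_03
```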

 
Distribution of Roles in Hybrid Intelligence

As humans in the loop of AI are most frequently involved in settings where models are initially set up, or in research contexts, the goal there is to make the AI more effective.

Defining Hybrid Intelligence

One approach beyond trying to replicate human-level intelligence and its learning mechanisms is to combine human and artificial intelligence. The basic rationale is to combine complementary, heterogeneous intelligences (i.e. human and artificial agents) into a socio-technological ensemble that is able to overcome the current limitations of artificial intelligence. This approach focuses neither on the human in the loop of AI nor on automating simple tasks through machine learning, but on solving complex problems through the deliberate allocation of tasks among heterogeneous algorithmic and human agents. Both the human and the artificial agents of such systems can then co-evolve by learning and achieve a superior outcome at the system level.

I call this concept Hybrid Intelligence, which is defined as “[…] the ability to accomplish complex goals by combining human and artificial intelligence to collectively achieve superior results than each of them could have done in separation and continuously improve by learning from each other […]” (Dellermann et al. 2019:3).  Several core concepts of this definition are noteworthy:

  1. Collectively: Hybrid Intelligence covers the fact that tasks are performed collectively. Consequently, the activities conducted by each agent are conditionally dependent. However, the agents' goals are not necessarily always aligned with the common goal, such as when humans teach an AI through adversarial play in games.
  2. Superior results: defines the idiosyncratic fact that the socio-technical system achieves a performance on a specific task that none of the involved agents, whether human or artificial, could have achieved without the other. The aim is, therefore, to make the outcome (e.g. a prediction) both more efficient and effective at the level of the socio-technical system by achieving goals that could not have been reached before (see the toy illustration after this list). This contrasts Hybrid Intelligence with the most common applications of human-in-the-loop machine learning.
  3. Continuous learning: a central aspect of Hybrid Intelligence is that, over time, the socio-technological system improves both as a whole and in each single component (i.e. the human and machine agents). This facet defines that they learn from each other through experience. The performance of Hybrid Intelligence systems can thus be measured not only by the superior outcome of the whole socio-technical system but also by the learning (i.e. the performance increase) of the human and machine agents that are part of it.
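The following toy calculation (synthetic numbers, not empirical data) illustrates the “superior results” criterion: simply averaging a human's and a model's probability estimates can yield higher accuracy than either agent achieves alone.

```python
# Toy illustration of the "superior results" criterion with synthetic numbers:
# averaging human and model probability estimates beats both individual agents.

human = [0.7, 0.4, 0.8, 0.3, 0.6]  # human's estimated probability that the label is 1
model = [0.4, 0.8, 0.6, 0.2, 0.9]  # model's estimated probability that the label is 1
truth = [1, 1, 1, 0, 1]            # synthetic ground-truth labels

def accuracy(probs, labels):
    return sum((p > 0.5) == bool(t) for p, t in zip(probs, labels)) / len(labels)

hybrid = [(h + m) / 2 for h, m in zip(human, model)]
print(accuracy(human, truth))   # 0.8
print(accuracy(model, truth))   # 0.8
print(accuracy(hybrid, truth))  # 1.0, the ensemble outperforms both agents
```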

One recent example that provides an astonishing indicator of the potential of Hybrid Intelligence is DeepMind's AlphaGo. To train the game-playing AI, a supervised learning approach was used that learned from expert human moves and thus augmented the AI through human input, which allowed AlphaGo to achieve superhuman performance over time. During its games against various human world-class players, AlphaGo played several highly inventive moves that were previously beyond human players' imagination. Consequently, AlphaGo was able to augment human intelligence as well, teaching expert players completely new knowledge in a game that is one of the longest studied in human history (Silver et al. 2015).

“I believe players more or less have all been affected by Professor Alpha. AlphaGo’s play makes us feel freer and no move is impossible to play anymore. Now everyone is trying to play in a style that hasn’t been tried before.” – Zhou Ruiyang, 9 Dan Professional, World Champion

Solving problems through Hybrid Intelligence makes it possible to allocate a task between humans and intelligent agents and to deliberately achieve a superior outcome at the socio-technical system level by aggregating the output of its parts. Moreover, such systems can improve over time as the agents learn from each other through various mechanisms, such as labelling, demonstrating, teaching adversarial moves, criticizing, and rewarding. This will allow us to augment both the human mind and the AI and extend applications where humans and machines can learn from each other to tasks far more complex than games: for instance, strategic decision-making; managerial, political, or military decisions; science; and even AI development, potentially leading to AI reproducing itself in the future. Hybrid Intelligence, therefore, offers the opportunity to achieve super-human levels of performance in tasks that so far seem to be the core of human intellect.
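To give one of these mechanisms a concrete shape, here is a deliberately tiny sketch of reward-based teaching; the actions, reward function, and learning rate are invented for illustration. A human critic scores the agent's candidate actions, and the agent's preferences drift toward the approved behaviour.

```python
# A tiny, invented sketch of reward-based teaching: a human critic scores the
# agent's candidate actions, and the agent's preferences drift accordingly.

prefs = {"move_a": 0.0, "move_b": 0.0}  # the agent's action preferences

def human_feedback(action: str) -> float:
    # Stand-in for the human teacher: approve move_a, criticize move_b.
    return 1.0 if action == "move_a" else -1.0

LEARNING_RATE = 0.1
for _ in range(10):  # ten rounds of critique
    for action in prefs:
        prefs[action] += LEARNING_RATE * human_feedback(action)

print(prefs)  # preferences converge toward the human-approved action
```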

The Advantages of Hybrid Intelligence

This hybrid approach provides various advantages for humans in the era of AI, such as generating new knowledge in complex domains, which allows humans to learn from AI and transfer implicit knowledge from experienced experts to novices without direct social interaction. On the other hand, the human teaching approach makes it possible to control the learning process by ensuring that the AI makes inferences based on humanly interpretable criteria, a fact that is crucial for AI adoption in many real-world applications and for AI safety, and that helps to exclude biases such as racism (Bostrom 2017). Moreover, such hybrid approaches might allow for better customization of AI, based on learning the preferences of humans during interaction. Finally, I argue that the co-creation of Hybrid Intelligence services between humans and intelligent agents might create a sense of psychological ownership and thus increase acceptance and trust.

Future Research Directions in the Field of Hybrid Intelligence

As technology advances further and the focus of machine learning and Hybrid Intelligence shifts towards applications in real-world business contexts, solving complex problems will become the next frontier. Such complex problems in managerial settings are typically time-variant and dynamic, require much domain knowledge, and have no specific ground truth. These highly uncertain contexts require intuitive and analytic abilities, as well as further human strengths such as creativity and empathy. Consequently, I propose three specific but interrelated directions for further development of the concept, all focused on socio-technical system design.

First, a core requirement for integrating human input into an AI system is interaction design. For instance, semi-autonomous driving requires the AI to sense the human state in order to distribute tasks between itself and the human driver. Furthermore, it requires examining human-centred AI architectures that balance, for instance, the transparency of the underlying model against its performance, or that create trust among users. However, domain-specific design guidelines for developing user interfaces that allow humans to understand and process the needs of an artificial system are still missing. I therefore believe that more research is needed to develop suitable human-AI interfaces, as well as to investigate possible task and interface designs that allow human helpers to teach an AI system (e.g. Simard et al. 2017). Interpretability and transparency of machine learning models, while maintaining accuracy, is one of the most crucial challenges in research on Hybrid Intelligence. This was most recently underlined by the launch of the People + AI Research (PAIR) group at Google.

Second, research in the field of Hybrid Intelligence might investigate how mechanisms from traditional crowdsourcing strategies can be used to train and maintain Hybrid Intelligence systems. Such tasks frequently require domain expertise (e.g. in health care), so crowdsourcing needs to focus on explicitly matching experts with tasks, aggregating their input, and assuring quality standards (one simple aggregation mechanism is sketched below). I therefore argue that it might be a fruitful area of research to further investigate how current forms of crowdsourcing and crowdsourcing platforms should evolve. Moreover, human teachers may have different motivations to contribute to the system. Consequently, research in the field should shed light on the question of how to design the best incentive structure for a predefined task. Especially when highly educated and skilled experts are required to augment AI systems, the question arises whether the traditional incentives of micro-tasking platforms (e.g. monetary rewards) or online communities (e.g. social rewards) are sufficient.
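As a hedged sketch of such an aggregation mechanism, the code below implements a weighted majority vote over expert annotations, where the weights stand for each annotator's historical quality. Annotator names, labels, and quality scores are hypothetical.

```python
# Weighted majority vote over expert annotations; the weights stand for each
# annotator's historical quality. All names and scores are hypothetical.

from collections import defaultdict

def weighted_vote(annotations: dict[str, str], quality: dict[str, float]) -> str:
    """annotations maps annotator -> label; quality maps annotator -> weight."""
    scores: dict[str, float] = defaultdict(float)
    for annotator, label in annotations.items():
        scores[label] += quality.get(annotator, 0.5)  # neutral weight for unknown annotators
    return max(scores, key=scores.get)

print(weighted_vote(
    {"alice": "malignant", "bob": "benign", "carol": "malignant"},
    {"alice": 0.9, "bob": 0.6, "carol": 0.7},
))  # malignant
```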

A third avenue for future research relates to digital work. The rise of AI is dramatically changing the capabilities of information systems (IS) and the potential distribution of tasks between humans and IS, and hence affects the core of my discipline. Those changes create novel qualification demands and skill sets for employees and, consequently, provide promising directions for IS education. Such research might examine the educational requirements for democratizing the use of AI in future workplaces. Finally, Hybrid Intelligence also offers great possibilities for novel forms of digital work, such as internal crowd work, to leverage the collective knowledge of individual experts that resides within a company across functional silos.


The original paper is published in Business & Information Systems Engineering