August 31, 2021

Deskilling, upskilling, and reskilling: a case for hybrid intelligence

Introduction

Over the past two centuries, technological and organizational innovations have repeatedly brought immense changes to the labor market. Major transformations include the industrial revolution; the introduction of the assembly line, followed by automation and robotics; and finally a digital transformation that began with the advent of personal computers and continues today with the deployment of systems involving some form of Artificial Intelligence (AI). Here, we will first examine labor market changes through the lens of deskilling and then dive deeper into this concept with respect to AI.

Although contemporary understanding of deskilling has multiple interpretations, it is generally considered to be the loss of professional skills due to technological or work practice changes. Examples of such skills include decision-making and judgment skills lost due to work management (Davis, 2008) as well as psychomotor and cognitive skills (Ferris et al., 2010). The term was conceived during industrialization, as capitalist modes of production transformed tasks that previously required years of training into simple, routinized tasks. This change reduced costs through hiring cheaper, less skilled labor, but it also resulted in the deconstruction of craftsmanship by separating the conceptual and skilled part of work from execution, devaluing the worker (Sutton, 2018). In this process, workers’ knowledge and skills are relocated into machinery by management, deskilling workers and often reducing immediate costs (Braverman, 1974). For instance, in the 19th and the first half of the 20th century, the general shift from small-scale artisan production to factory work, such as the introduction of the mechanical loom, led to the replacement of highly skilled artisans by unskilled factory workers (Brugger & Gehrke, 2018). In this process, the creation of fabrics was standardized, and fabric design and material choice became centrally dictated. This separation of production and design fundamentally changed the nature of the work and simplified its tasks: unskilled people could accomplish them but were unable to design and produce fabric without the support of the factory system, an example of deskilling. In parallel, industrialization combined with organizational optimization brought about the assembly line paradigm, which both revolutionized traditional areas like meat production (Nibert, 2011) and enabled mass production of modern equipment such as the automobile. The macroeconomic trend during this period was the replacement of a minority of highly skilled workers with a rapidly expanding un- or low-skilled urban and industrial workforce. An important caveat to these early examples of sectoral deskilling is that a few skilled employees remained to manage, design, and construct the machinery that replaced other parts of the labor process. These highly trained employees would form the seed of the increasingly dominant knowledge workers of the 21st century.

Historically, shifts in the workplace often occur in tandem with technological development. In the second half of the 20th century, the introduction of production-line robotics as well as general advances in machine automation created massive job replacement and an increasing divide between a shrinking fraction of low-skilled jobs dedicated to operating the machinery and high-skilled, adaptable jobs responsible for interacting with and defining the roles of the machinery. In parallel, digital advances, primarily revolving around the introduction of the workplace computer, started a fundamental transformation of the landscape of work through the large-scale emergence of the knowledge worker in distinct areas such as finance (Dilla & Stone, 1997; Sutton, 1993; Sutton, 2018) and medicine (Rinard, 1996; Hoff, 2011). An example of the transition can be seen in word processing, where the word processing specialist of the 1980s was replaced by word processing software. Using such software became a common skill, integrated into elementary schools in the 1990s. This trend continued in the 2000s with increasing degrees of workplace digitization, and since the mid-2010s the world has been on the brink of an overwhelming job market transition as deep learning technologies (LeCun et al., 2015) finally start to fulfill some long-standing promises of AI. In particular, using algorithms trained on large amounts of data, AI is now able to perform more complex tasks (McAfee and Brynjolfsson, 2017) and thus increasingly enters the domain of knowledge workers (Frey & Osborne, 2013).

Technologically induced deskilling can be understood from an individual perspective; however, this effect is rarely isolated, and it has implications for organizations and society as well (Stone et al., 2007). Together, these three perspectives provide a holistic approach for researchers and corporate strategists to consider deskilling, upskilling, and reskilling needs in the design of hybrid intelligence and the application of AI in digital transformation.

At the society level, deskilling is commonly framed as an economic issue for governments and institutions. It has repeatedly fueled concern about technologically induced unemployment, in the sense of a permanent reduction in the active labor force. Despite these fears and the immense sectoral workforce changes discussed above, we have not been experiencing a trend of growing unemployment rates. In particular, Feldman (2013), among others, found the macroeconomic effects of technologically induced unemployment to be temporary, with work moving from areas replaced by technology to producers of that technology within three years. Thus, historically, the introduction of new technologies has resulted in an increasing knowledge level in the general workforce, despite temporary periods of job loss and deskilling. However, both researchers (Harari, 2017; Makridakis, 2017; Pol & Revely, 2017; Korinek and Stiglitz, 2019) and intergovernmental agencies (Council of Europe, 2019; Schwab & Davis, 2018) have recently pointed out the possibility that the large-scale deployment of AI-based technologies may indeed defy historical trends and introduce permanent macroeconomic unemployment. While the potential long-term consequences for humanity are hotly debated, there is a growing consensus that the development and application of AI will cause radical changes to the job market as a whole and that all occupations will be affected in one way or another (Chrisinger, 2019). In short, the current consensus is that everyone will have to adapt their work skills in response to AI implementations.

Although market forces seem to facilitate a rather fast bounce-back on average, there may of course be large variance in the resilience of different societies and nations. Maximizing the benefits and minimizing the negative implications of technological advances therefore remains a permanent focus of legislative attention. For example, Kim, Kim and Lee (2017) find that “legal and social limitations on computerization are key to ensuring an economically viable future for humanity.” Additionally, the ongoing and expected changes to the job market have naturally spurred intense parallel discussions about reframing the educational system to bridge the future “skills gap” (Chrisinger, 2019; Jerald, 2009; Tuomi, 2018). Although the societal effects of deskilling are important and much work remains to be done to understand their implications fully, in the remainder of this perspective piece we choose to focus on deskilling from the individual and organizational perspectives.

At the individual level, technological changes often bring about dramatic shifts in the skills and abilities necessary to navigate both our personal and professional lives. Regarding the former, Wilmer et al. (2017) describe the impact of using smartphones and related mobile technologies, where “habitual involvement with these devices may have a negative and lasting impact on users’ ability to think, remember, pay attention, and regulate emotion.” Similarly, users of search engines lose the mental habit of storing information itself, even as they develop the ability to remember how to find that information. In what is known as the Google effect, memory capability is reallocated from storing facts to storing search strategies (Sparrow, Liu, and Wegner, 2011). As discussed below, deskilling at the individual level can happen in organizations when digital technologies and robotics are applied without reflection on their effects on the workforce and human roles in the organization.

These types of deskilling can result in an over-reliance on technological assistance. One framework of particular relevance to organizational deskilling in the age of AI is that of technological dominance (Sutton, 1998; Sutton and Arnold, 2018), because technology can increasingly be understood as an actor in the work process. This theory focuses on systems made for assisting professionals in their decision processes and explains the risk of professionals over-relying on technological assistance (Ferris et al., 2010), whereby the user loses, or fails to develop, skills by accepting a subservient role relative to the technology. For example, Hoff (2011) found that doctors’ use of decision-assisting technology affected their skills differently, depending on their personal practices when working with the technology. Technological dominance is driven largely by over-dependence, which in turn depends on i) the user’s experience level, ii) problem complexity, iii) the user’s familiarity with the systems, and iv) the user’s cognitive fit with the system’s underlying decision processes (Sutton, 2018).

In his research, Hoff (2011) observed that unintended deskilling often occurred when the goal of the technology was to reduce costs and increase productivity by minimizing human input where possible and automating work. This raises the pertinent question of how to structure human-AI interactions in corporate settings. The required insights can, however, only be achieved if we thoroughly understand the skills of the future and how they come into play in the concrete interaction between AI-driven systems and humans. To investigate this, we revisit the concept of deskilling through the lens of a particular type of human-AI interaction, Hybrid Intelligence. We do this in order to:

a) convey the importance of being aware of the potential effects of deskilling through AI related to the risks of overdependence, technological dominance, and a lack of sustainability in framing AI/human roles and interactions;

b) begin examining how hybrid intelligence interactions can be designed to avoid deskilling and instead promote upskilling;

c) spark dialogues among the research disciplines of Human Computer Interaction (HCI), computer science, information systems (IS), and learning and cognitive science, as well as policymakers and the private sector; and

d) provide an augmentation perspective for directing the influence of AI technology on human workers and their jobs, shifting the focus from automation to augmentation.

Deskilling, upskilling and reskilling in the age of AI 

A Frame for Awareness in Types of Human/AI Interactions

Similar to how humans during industrialization were relegated to the role of ensuring that machinery runs smoothly, AI is increasingly able to perform more complex tasks with a human either in-the-loop (HITL) (Bisen, 2020; Monarch, 2021), for instance helping plan, execute, or evaluate a data acquisition effort; on-the-loop (HOTL), simply checking the final outcome (Nahavandi, 2017); or, in some cases, even out-of-the-loop (HOOTL) completely (Steelberg, 2019). Consider for a moment an AI designed for medical diagnostics. If the AI works alongside a human but requires human interaction throughout the process, it is considered HITL. In cases where the AI conducts the diagnostic process on its own and only requires a yes or no from a human at the end, it is HOTL. If the AI completes the whole process without needing confirmation from a human, then it is considered HOOTL (Endsley & Kiris, 1995). In this paper, we introduce a special form of HITL, Hybrid Intelligence, which pursues optimal synergies between the human and the AI system (Dellermann et al., 2019; Akata et al., 2020).
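
To make the distinction concrete, here is a minimal Python sketch of the three oversight modes applied to the diagnostics example. The mode names follow the literature cited above; the function and parameter names are purely illustrative assumptions of ours.

```python
from enum import Enum

class OversightMode(Enum):
    HITL = "human-in-the-loop"        # human involved throughout the task
    HOTL = "human-on-the-loop"        # human only reviews the final outcome
    HOOTL = "human-out-of-the-loop"   # no human involvement at all

def classify(needs_human_during_task: bool, needs_final_signoff: bool) -> OversightMode:
    """Map two design questions onto the three oversight modes."""
    if needs_human_during_task:
        return OversightMode.HITL
    if needs_final_signoff:
        return OversightMode.HOTL
    return OversightMode.HOOTL

# The medical diagnostics example from the text:
assert classify(True, True) is OversightMode.HITL     # AI works alongside the clinician
assert classify(False, True) is OversightMode.HOTL    # clinician only confirms the output
assert classify(False, False) is OversightMode.HOOTL  # fully automated pipeline
```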

Figure 1: An output view: an illustration of different relationships between human and machine intelligent systems.

We note that this subdivision can appear computer-centric, since it denominates efforts by the role of the human alongside the technology. One may rightfully argue for a more human-centric view, in which the human appears to be more in control and the computer is “on the loop”. MacKay (1999) provides an example of this in her discussion of developing computer systems to support the human-centric system of air traffic control. Here, the focus is on understanding human interactions in order to find ways of integrating systems that support them. From a human process perspective, systems can be understood through three major paradigms (Beaudouin-Lafon, 2004): computer-as-tool, computer-as-partner, and computer-as-medium. This human-centered design perspective for hybrid intelligence reveals tensions between development paradigms that can affect whether AI systems lead to deskilling or reskilling. In future work, we will provide a more granular specification scheme that also incorporates whether the human or the algorithm can be thought of as the primary driving factor. For now, however, we stick with the established computer-centric notation and specify that, in the considerations introduced below, HITL should not be taken to imply that the computer is in control of the process, but rather that the human is more involved in the details of task performance than is the case in HOTL.

A growing body of literature is devoted to deskilling in the age of AI (e.g., Brynjolfsson & McAfee, 2014; Trösterer et al., 2016; Sutton et al., 2018). This work has, however, focused primarily on concrete use cases and less on connecting these to a broader framework for the type of AI involved, such as the one introduced above. The main contribution of this work is to provide a strong call for developers, implementers of technology, and decision makers to consciously consider which form of human-computer interaction is needed and being implemented, and to be increasingly aware of the unique risks of deskilling in each scenario.

To reflect on deskilling in relation to this framework, we take our point of departure in theory related to knowledge and skills development. This theory provides a basis for aligning discussions about knowledge as it relates both to professions and employment and to developers’ use of the term knowledge to describe the technological capabilities of AI.

A Basis for How to Develop AI: Knowledge, Skills, and Their Effects on Work

Knowledge and work are intertwined, with work experience and organization affecting ways in which employees build knowledge in changing and highly uncertain environments. To consider the significance of knowledge for framing an approach to deskilling, upskilling, and reskilling in the age of AI, we look at knowledge from two perspectives: first the nature of knowledge, and then types of knowledge.

The Nature of Knowledge:

The nature of knowledge can frame our understanding of the connection between skills and work by examining definitions of knowledge in contrast to the information that is typically stored in knowledge repositories. A key point relates to how knowledge is used in work. McDermott (1999) presented seminal considerations of the nature of knowledge and the importance of knowledge management in organizations. He presents knowing as “a human act, whereas information is an object that can be filed, stored, and moved around. Knowledge is a product of thinking…” and the “…ability to use that information.” Within AI, appropriately representing knowledge in digital form was for decades considered the primary challenge in generating human-level intelligence. However, this led to rule-based approaches that produced ever-growing prescriptive rule lists without encoding insight. As a result, AI systems were incapable of capturing human common sense in both the physical domain (humans’ natural understanding of the physical world) and the social domain (humans’ innate ability to reason about people’s behavior and intentions), and this remains an unsolved challenge to date (Marcus, 2020). This failure led to the most recent ‘AI winter’, which lasted until the advent of deep learning in the past half decade (Nicholson, 2018). Despite all its successes, it is important to note that deep learning systems do not represent knowledge in the sense defined above but only statistical inferences, and they therefore “think” much less like humans than it may appear on the surface. It is possible that the latest neuroscience insights into how humans use a nested series of reference frames in so-called cortical columns may, in the foreseeable future, lead to forms of AI more closely resembling human cognition (Hawkins, 2021). Until then, however, we will have to design human-AI interfaces between highly asymmetric forms of cognition and modes of learning. We will return to this fundamental challenge below as we discuss the concept of hybrid intelligence, but first we return to the topic of technologically induced deskilling through the lens of types of knowledge.

Types of Knowledge:

Technologically induced changes in knowledge work can be understood as shifts in the types of knowledge required to accomplish the work and in the ways these types of knowledge are distributed between humans and technology. This has been approached through a number of different frameworks (Arnold and Sutton, 1998; Barnard and Harrison, 1992; Venkatesh et al., 2003). To focus on types of professional and organizational knowledge, such as procedural and domain knowledge, we consider five key factors. First, the task domain, which can range from the motoric/craftsmanship (e.g., robotics) to the cognitive (e.g., decision-making support, knowledge management systems) and the empathetic/caregiving (e.g., health technologies) domains. Next, the task characteristics, which Davenport and Kirby (2016) grouped as analyzing numbers, analyzing words and images, performing digital tasks, and performing physical tasks. Thirdly, the associated work procedures, which, following Bhardwaj’s (2013) classifications, can be classified as skill-based behavior (automatic behavior in familiar situations, high efficiency), rule-based behavior (reasonably well-known environment, moderate efficiency), and knowledge-based behavior (KBB; novel/abnormal tasks, slow). In general, procedural knowledge is expected to persist, whereas the importance of descriptive knowledge will decline (Trösterer et al., 2016). Fourth, as discussed above, the external effects, which range from replacing highly skilled workers with less skilled workers (restructuring the workforce) to causing deskilling at the individual level (degrading the overall proficiency level of the workforce) or eventual complete automation (diminishing the workforce) (Sutton et al., 2018). Fifth, the numerous internal/personal effects, which include over-reliance on algorithms, decreased professional involvement, dulling of professional decision-making skills, and inability to make quality unaided decisions (Mascha & Smedley, 2007; Sutton et al., 2018). Not surprisingly, in HITL cases where humans are merely the source of information used to automate their own job, this reduces the skills and expertise of human experts and might also create adoption barriers due to psychological resistance (Parente & Prescott, 1994; Pachidi et al., 2021) or a lack of trust and accountability (Dellermann, 2020).
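
As an illustration, these five factors can be thought of as a profile attached to any AI use case under consideration. The sketch below encodes them as a simple Python data structure; the category names follow the text, while the class and field names (and the radiology example values) are our own illustrative choices.

```python
from dataclasses import dataclass
from enum import Enum

class TaskDomain(Enum):
    MOTORIC = "motoric/craftsmanship"
    COGNITIVE = "cognitive"
    EMPATHETIC = "empathetic/caregiving"

class Procedure(Enum):  # Bhardwaj's (2013) behavior classes
    SKILL_BASED = "automatic behavior in familiar situations"
    RULE_BASED = "reasonably well-known environment"
    KNOWLEDGE_BASED = "novel/abnormal tasks"

@dataclass
class UseCaseProfile:
    domain: TaskDomain
    task: str              # one of Davenport and Kirby's (2016) four groups
    procedure: Procedure
    external_effect: str   # restructuring / degrading / diminishing the workforce
    internal_effect: str   # e.g. over-reliance on algorithms

# A hypothetical radiology screening aid, profiled with this scheme:
radiology = UseCaseProfile(
    domain=TaskDomain.COGNITIVE,
    task="analyzing words and images",
    procedure=Procedure.RULE_BASED,
    external_effect="degrading the overall proficiency level of the workforce",
    internal_effect="dulling of professional decision-making skills",
)
```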

Considerations for How to Manage Deskilling, Upskilling and Reskilling with Hybrid Intelligence

When considering the relations between knowledge and skills, the design and implementation of AI should be evaluated for its deskilling, upskilling, and reskilling effects. These effects matter because they carry organizational and societal risks: knowledge domains, procedures, and knowledge-based actions can shift from organizations into other contexts, or possibly be lost altogether.

In contrast to deskilling, upskilling is a consequence of increasing automation or technological change in which workers build the new, often broader and higher-level skills required of the workforce (Peng et al., 2018). Technology can facilitate upskilling in different ways, including: (i) freeing up resources, so that humans can use their expertise for further innovation, working on more cognitively demanding or rewarding tasks that computers cannot solve (Bresnahan et al., 2002), and (ii) introducing new demands and facilitating the acquisition of new skills that become necessary after the way of working has been transformed (Spenner, 1983). In the 21st century, the skills fostered by technological advancement are mainly high-level cognitive and analytical skills rather than routine-cognitive and non-routine manual skills (Peng et al., 2018). Orellana (2015) suggests that deskilling and upskilling are not independent but rather interact, resulting in reskilling, where decreased knowledge of a task is compensated by increased knowledge of the problem-solving system itself (Rinta-Kahila et al., 2018). Additionally, reskilling is considered an active process, whereas deskilling and upskilling are considered consequences of increased automation or technical change (ref).

This highlights that changes in workforce capabilities are not necessarily negative: positive outcomes can be achieved if the technology is deliberately planned and implemented with not only the particular human and algorithmic processes in mind but also their synergistic interactions. We thus briefly review previous work on three well-studied areas of AI implementation and skill change, namely medical diagnostics (Levy et al., 2019), finance (Mascha and Smedley, 2007), and autonomous driving (Trösterer et al., 2016), before introducing the Hybrid Intelligence framework as a means of further tapping into the potential of human-AI interactions.

Medicine 

In medicine, new technology and deep learning are becoming more influential in fields such as ophthalmology (Levy et al., 2019), radiology (Hosny et al., 2018), molecular medicine (Altman, 1999), and pediatric care (Buoy Health, 2018). Without AI technologies, physicians determine a diagnosis and treatment plan based on physical examinations, relying on manual perception and cognitive skills combined with professional skills (Hosny et al., 2018). As instances of automatic screening and decision-aid systems based on deep learning become more widespread, so do concerns that physicians will lose skills over months and years and become prone to making decisions based solely on AI recommendations. Losses of clinical skills include a decreased ability to derive informed opinions from the available data, increased stereotyping of patients, inaccuracy in identifying pathologies, decreased clinical knowledge and examination skills (or even failure to perform an examination at all), and decreased confidence in their own decisions (Levy et al., 2019). The role of management, and the individual doctor’s choices in how they use the technology, have been shown to be key drivers of these risks (Hoff, 2011).

Finance

In finance, accountants and auditors traditionally perform most of their tasks manually, going through large amounts of data to reach a conclusion (Mascha and Smedley, 2007). AI automates many of these mundane tasks by assisting in selecting information through automated data analysis and machine learning, while decision-support systems help sequence decision processes and provide decision recommendations, particularly for routine tasks. This results in decreased domain-specific knowledge, as the main task of the user is narrowed down to judging the system's recommendations. Other outcomes are declines in non-routine, and potentially routine, decision-making skills (Mascha and Smedley, 2007), less knowledge acquisition (Noga and Arnold, 2002), and decreased levels of creativity (Wortmann, Fischer, and Reinecke, 2015).

Self-Driving Cars 

Finally, there is the case of self-driving cars. In traditional driving, the vehicle is controlled by a human, often aided by advanced driver-assistance systems (e.g., cruise control), some of which are considered standard today, such as electronic stability control (ESC) and anti-lock braking systems (ABS). Driving this way requires a set of perceptual-motor and safety skills: navigation, planning, the ability to anticipate and dynamically adjust to the environment, knowledge of traffic rules, hazard perception, and appropriate vehicle maneuvering (Trösterer et al., 2016). With increasing levels of automation (NHTSA, 2013), the driver can cede varying degrees of vehicle operation control to technological systems (Cabitza et al., 2017). As a result, the skill levels needed to drive a car decrease, until at Level 4 automation no human input is required (Coroamă & Pargman, 2020), since the human's role shifts from active engagement to mere monitoring (Sarter et al., 1997). This raises the potential for a loss of manual driving skills that can create highly dangerous situations, where the human driver is not paying attention and is suddenly asked to intervene at precisely the moment when the problem is too complex for the intelligent system.

Counteracting deskilling: the individual perspective

To counteract the potential deskilling effects presented above, three components have previously been identified. The first is education: identifying a set of fundamental skills of the field that should not be allowed to be deskilled and keeping them part of teaching regardless of technological advancement (Levy et al., 2019). Examples include systems thinking and analogical reasoning in finance (Mascha & Smedley, 2007), the ability to perform full physical examinations and identify pathologies in medicine (Levy et al., 2019), and technical driving skills for operating self-driving vehicles (Trösterer et al., 2016). Second, professionals need to take an active role by relying on their own decisions first and foremost, only checking AI recommendations afterwards (Levy et al., 2019). A conceptual understanding of what the algorithms are doing is also necessary on the human side (Sutton et al., 2018). Finally, the relationship between humans and AI needs to focus on collaboration rather than competition. This includes AI systems continuously providing users with explanations of their decisions for educational purposes (Mascha and Smedley, 2007), and determining the relative strengths of humans and machines in order to design systems that effectively take advantage of both (Peng et al., 2018; Sutton et al., 2018). Additionally, the cognitive science literature on cognitive overload suggests that varying the level of feedback provided to decision-aid users might moderate the risks that result from under-reliance (Mascha & Smedley, 2007).
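
The second component, keeping the professional's own judgment primary, can be expressed as a simple interaction protocol. The following is a minimal sketch under our own assumptions; the function names and interfaces are illustrative, not taken from the cited works.

```python
def assisted_decision(case, expert_decide, ai_recommend, reconcile):
    """Have the professional commit to a judgment before seeing the AI output."""
    human_answer = expert_decide(case)           # unaided judgment, recorded first
    ai_answer, explanation = ai_recommend(case)  # AI opinion revealed only afterwards
    if human_answer == ai_answer:
        return human_answer
    # Disagreement triggers deliberate reconciliation rather than silent deferral,
    # keeping the professional's own decision skills in active use.
    return reconcile(case, human_answer, ai_answer, explanation)
```

The key design choice is ordering: because the AI recommendation is revealed only after the human has committed, the professional cannot slip into the subservient role described by the technological dominance framework above.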

Previous research on AI and human collaboration has emphasized that matching human and AI skills is critical (Mascha & Smedley, 2007), and that a lack of learning on the human side can lead to a lack of human agency. A lack of mutual learning between humans and AI creates a performance gap relative to the potential outcome of such a sociotechnical system. Especially in uncertain, complex, and dynamic environments that require constant updating of knowledge about the world, it is crucial to empower humans to extend their knowledge and improve their skill set. To increase their level of expertise, knowledge workers must not only update their mental model of the world with new information but also improve their meta-cognitive abilities, in order to better reflect on, and potentially act on, the information provided to them by the machine during this collaboration. For example, in complex environments without access to ground truth, or with highly time-delayed feedback, expert performance has been shown to correlate strongly with meta-cognitive abilities such as consistency and discrimination (Weiss & Shanteau, 2003). If knowledge workers are not empowered to learn and improve, their input to the system can be systematically flawed, thus decreasing the ability of the overall system to improve.
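
Consistency and discrimination can be made concrete. Weiss & Shanteau (2003) combine them into a single index, essentially the ratio of how much a judge's answers vary across different cases to how much they vary across repeats of the same case. The sketch below is a simplified estimator under our own assumptions, not the authors' exact procedure.

```python
import numpy as np

def cws_index(judgments: np.ndarray) -> float:
    """judgments[i, j] is the j-th repeated judgment of case i."""
    case_means = judgments.mean(axis=1)
    discrimination = case_means.var(ddof=1)               # variance between cases
    inconsistency = judgments.var(axis=1, ddof=1).mean()  # variance within repeated cases
    return discrimination / inconsistency

rng = np.random.default_rng(0)
true_severity = np.array([1.0, 3.0, 5.0, 7.0])
# A consistent, discriminating judge vs. a noisy, undiscriminating one:
expert = true_severity[:, None] + rng.normal(0.0, 0.2, size=(4, 5))
novice = 4.0 + rng.normal(0.0, 1.5, size=(4, 5))
print(cws_index(expert) > cws_index(novice))  # expected: True
```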

Counteracting deskilling: the organizational perspective

So far, we have primarily discussed AI-induced deskilling at the individual level, in terms of concrete changes to the efficiency of particular task solutions. A more general issue is how to structure a technology innovation process within an organization so as to avoid detrimental side effects such as deskilling. This has been studied for decades in the field of IS research, and in the following section we briefly review some organizational considerations arising from the introduction of AI technologies into the workplace environment.

From an organizational perspective, deskilling can be seen as a socio-technical phenomenon that encompasses organizational processes, user choices, and wider labor politics. Bhardwaj (2013) analyzes deskilling problems in terms of technology-organization-process interactions, pointing out that explicitly routinizing expert knowledge through systems is a management strategy. He calls for counteracting deskilling in the highly technologized maritime industry by rethinking skills at three levels of ergonomics: physical, cognitive, and organizational design for management. To counteract deskilling for workers, training at these three levels is recommended. Such strategies can counteract the risks of deskilling seen in situations where automation proves detrimental through errors induced by improper use of technology. Thus, from an organizational perspective, counteracting deskilling can begin with a holistic understanding of technology-organization-process interactions. Hoff (2011) shows that deskilling with technology can happen as a result of how users choose to interact with the system. In the field of medicine, he found that doctors’ choices when working with systems that enabled more managerial control over their work led to deskilling outcomes. This finding underscores the managerial role in designing systems that carry deskilling potential. Brugger and Gehrke (2018) discuss economic frames for deskilling in the 18th, 19th, and 20th centuries. One relevant frame casts deskilling in terms of managerial choices, where new technologies were introduced not to save money but to reduce the bargaining power of skilled workers. In addition, they describe technology as bringing skill transformation, connecting it to shifts in the labor market and, in turn, in society.

The case for hybrid intelligence

As previously stated, the pursuit of effective human-AI interfaces has recently received increased attention as evidence mounts that pure deep learning systems will not in themselves deliver human-level intelligence in complex scenarios (Hawkins, 2021; Heaven, 2019; Marcus, 2020). Many scholars refer to Hybrid Intelligence as the solution but define it loosely enough to encompass all of HITL AI (Akata et al., 2020; Kamar, 2016; Lasecki, 2019; Prakash and Mathewson, 2020; Sinagra et al., 2021), which greatly diminishes its value as a concrete, actionable design framework. Lyytinen et al. (2020) define the stronger concept of meta-human systems, characterized by both human and machine learning, and discuss both the implications for organizations and the open challenges. In terms of learning, they distinguish between trial-and-error learning and diffusion-based learning. Here, we follow the related but somewhat more stringently defined three-part definition of hybrid intelligence in Dellermann et al. (2019a) and Dellermann (2020): “the ability to achieve complex goals by combining human and artificial intelligence, thereby reaching superior results to those each of them could have accomplished separately, and continuously improve by learning from each other” (Dellermann et al., 2019a).

In this paradigm, collectiveness means that knowledge workers and AI agents solve tasks together to achieve a system-level goal, while each individual part's sub-goal might differ from the overall system-level goal. The second part of the definition specifies that HI allows us to solve problems that lie beyond the capabilities of individual human and computational agents. Finally, mutual learning, between knowledge workers and AI, allows constant improvement of capabilities over time (Dellermann et al., 2019b).
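
To illustrate how the three parts of the definition fit together, here is a conceptual Python sketch of one iteration of a hybrid intelligence loop. All class and method names are our own illustrative assumptions rather than an implementation from the cited papers.

```python
class HybridIntelligenceLoop:
    """One iteration of a human-AI team pursuing a shared system-level goal."""

    def __init__(self, human, model):
        self.human = human
        self.model = model

    def solve(self, task):
        # Collectiveness: both agents contribute to a single system-level goal.
        proposal = self.model.predict(task)
        decision, rationale = self.human.review(task, proposal)

        # Mutual learning: the model is updated with the human's correction,
        # and the human reflects on the model's explanation of its proposal.
        self.model.update(task, decision)
        self.human.reflect(self.model.explain(task, proposal), rationale)

        # Superior results: the returned decision combines both contributions.
        return decision
```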

Hybrid intelligence can be conceived of as a critical approach to the space between these configurations. In Hybrid Intelligence, the configurations of human and system are reflected upon and chosen deliberately, with concern for human needs and an analysis of the system's potential effects with regard to those needs. As seen in Table 1, Hybrid Intelligence mandates a comprehensive human-centered focus and, as such, can be configured to maximize opportunities for upskilling and minimize the risk of deskilling; we will focus on this category for the rest of the paper.

From an organizational perspective, the design of AI affects the individual experience by shaping work processes. If we consider the design of AI as part of a larger work design, it can support a critical perspective for breaking down work that is shared between humans and AI algorithms, applied in ways that maximize opportunities for upskilling, and it offers options for configuring use and design with respect to improving working conditions at the individual level. One useful concept for approaching Hybrid Intelligence for human and machine learners is scaffolding (Quintana et al., 2004). Scaffolding can be conceptualized as a way to design processes and tools that help Hybrid Intelligence teams perform complex tasks (Collins, Brown, & Newman, 1989) that neither could accomplish alone. Scaffolding as a design lens can be applied to both human and AI learners working together to accomplish business processes that change over time. Quintana et al. identify three ways in which scaffolding can happen: helping the learner make sense of, and attend to, important parts or features of the task; guiding learners throughout the process of solving the task; or prompting the learner to articulate and reflect on the task in action (see the sketch after this paragraph). Because knowledge work constantly pushes people just beyond their current capabilities, scaffolding can inform HI in the labor market. However, this requires continual awareness of the reflexive relationship between the design of tasks and tools, because Hybrid Intelligence can be seen as encompassing both together. From this scaffolding perspective, research needs to focus on how organizations organize tasks and tools with hybrid intelligence in mind.
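
As a thought experiment, the three scaffolding moves could be wrapped around a hybrid-intelligence task as follows. This is an illustrative sketch only; every name and interface here is a hypothetical assumption, not an established API.

```python
def scaffolded_task(task, worker, ai):
    """Wrap a hybrid-intelligence task in Quintana et al.'s three scaffolding moves."""
    # 1) Sense-making: direct attention to the important features of the task.
    worker.show(ai.highlight_features(task))

    # 2) Process guidance: structure the work as explicit, ordered steps.
    result = None
    for step in ai.suggest_steps(task):
        result = worker.perform(step, context=result)

    # 3) Articulation and reflection-in-action: prompt before committing.
    worker.prompt("What evidence would change this conclusion?", result)
    return result
```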

Appropriate and ideal configurations of human and AI work in organizations can be developed based on key contextual needs arising from corporate strategy, the education level and needs of the workers, and the surrounding educational systems. The key idea driving Hybrid Intelligence at the individual level concerns how individuals working together with AI systems are treated, empowered, and educated. This reflection at the individual level raises questions for future HI research. First: how do we design work processes, tasks, and accompanying HI systems so that both AI and humans can leverage the practice of scaffolding? How can configurations of human and AI use scaffolding to upskill workers? Second, scaffolding implies the active use of complex skills, from being able to read and do basic arithmetic to creatively reflecting on data. This raises another crucial question for future research in HI: what are the skills that people need to work collaboratively with machine learning in Hybrid Intelligence systems?

Also from an organizational perspective, Hybrid Intelligence builds on socio-technical systems theory, in which the “social and technical systems need to work together, so system design should consider how to appropriately coordinate human and AI skills, knowledge and talents in contrast to other methods, where the technical component is designed first, and humans are only fitted into it after” (Appelbaum, 1997). Bednar and Welch (2020) explore socio-technical systems theory in so-called ‘smart systems’ that harness the Internet-of-Things, AI, and robotics in organizations, finding that “utilization of disruptive advanced technologies requires consideration from multiple perspectives taking into account the longer-term as well as the potential short term gain.” Automation “at the expense of expertise seems a short-sighted solution” (Sutton et al., 2018) and can have unintended consequences in the long run. Following established models of individual and organizational responses to change (Elrod and Tippett, 2002), in most realistic scenarios upskilling only occurs with conscious effort and often entails initial productivity decreases, which are then counteracted by substantial long-term gains as the full potential of the technology is leveraged. However, Hybrid Intelligence can serve many purposes at the organizational level, and whether upskilling is in focus depends on organizational strategy and culture. Key points in leveraging Hybrid Intelligence for organizations include 1) configuring and coordinating human and AI roles in business processes, and 2) building both short- and long-term perspectives into the Hybrid Intelligence interaction design, particularly around designing work for upskilling in alignment with organizational strategy and culture.

Here, we imagine that a new technology is introduced into a company and ideally provides a constant improvement, leading to a linearly increasing accumulated metric. As the technology becomes embedded in the work culture, unexpected adverse effects may appear due to workforce deskilling. The case of the Boeing 737 Max can serve as an example. An IEEE article described it as an accident waiting to happen, because the automated MCAS (Maneuvering Characteristics Augmentation System) was designed without much consideration of how it would interact with pilots. For example, display elements for the MCAS part of the system were optional, and the MCAS would reset itself after each interaction with the pilot, so that the design limits on how much the system could affect the plane's pitch were not enforced once this interaction occurred (Christiansen, 2019). In addition, the software was designed to address an issue in the business case for the new design, resulting in a misrepresentation of the Boeing 737 Max as requiring the same skill set as earlier 737s. This business-case focus resulted in a lack of documentation for pilots, as well as of considerations for the interaction between pilots, sensors, and the autopilot systems (Travis, 2019). This case illustrates the up- and reskilling needed to incorporate AI across different domains of an organization. For example, upskilling is required of software designers to understand potential interactions between humans and AI systems and their future effects. In addition, reskilling is required of management to become more aware of how AI can affect products through unintended consequences of human/AI interaction. Designing for Hybrid Intelligence offers a way to address these issues. Additional issues explained by Travis include the reskilling of pilots to ensure they are aware of how autopilot systems are designed and are able to disable them when necessary. Thus, upskilling also extends beyond the organization to users. This example demonstrates the nuance and complexity that Hybrid Intelligence can address through closer connections between design and use, focusing not on systems in isolation but on systems working together with humans. A key question at the organizational level for future research would be: what are the key issues that hinder the implementation of human-centered AI systems in organizational contexts? We here posit three potential reasons:

  1. The highly publicized successes of AI relative to human performance in well-contained tasks may lead to unrealistic expectations about the capabilities of off-the-shelf AI solutions.
  2. Being fully digital, AI solutions lend themselves well to the definition of data-driven performance metrics. This may lead to a series of (incremental) implementation stages in which success according to quantitative metrics is undeniable, whereas less weight is given to qualitative or quantifiable human considerations in the decision process. The result can be short-term productivity gains but highly uncertain and potentially unintended long-term consequences.
  3. The design of human-AI interaction is an extremely young field without many well-documented best-practice cases. Such developments are often limited to laboratory settings and not well adapted for industrial applications. Development is at present dominated by a small set of technology experts, within and outside organizations, with esoteric knowledge of the inner workings of state-of-the-art AI. They may not always possess the psychological, user experience, organizational, or business knowledge necessary to develop human-centered solutions, while most parts of organizations often lack a sufficiently deep understanding of AI to convincingly formulate their experience-based concerns or fully contribute their competences.

We suggest that standardized frameworks for integrating IS systems at the organizational level, such as the IT engagement model (Fonstad and Robertson, 2006), be revisited in light of the developing nature of AI technology and Hybrid Intelligence, to see whether their IT governance, project management, and linking mechanisms can adequately account for both the increasing technical complexity of AI-based solutions and the human focus inherent in Hybrid Intelligence solutions. Here, we claim that the Hybrid Intelligence framework can 1) inspire best practices for human-AI synergies and 2) provide tangible development criteria.

At the organizational level, a key task in developing and maintaining Hybrid Intelligence systems is to consider strong and natural support for the human's i) meta-cognitive skills, ii) systems thinking (Bednar and Welch, 2020; Sutton et al., 2018), iii) complex problem solving, iv) creativity (Dellermann et al., 2019; Trunk et al., 2020; Wang et al., 2021), v) tacit knowledge, and vi) analogical reasoning (Sutton et al., 2018). The first four are not domain-specific skills but rather complex, contextual, and nuanced skills that are also highlighted as some of the most important skills for the 21st century (OECD, 2018; Soffel, 2016). The last two, tacit knowledge and analogical reasoning, contain implicit domain knowledge, gained through experience, which is difficult to transfer from one person to another. Other skills will be important too, and further investigation into the knowledge work skills that can be developed through Hybrid Intelligence systems should be a topic of future study.

At the society level, AI has engendered fear due to the history of deskilling and upskilling over the past 200 years, in which technology and work have evolved together. Skill sets and job categories have changed radically in the past, with new work categories, jobs, skill sets, and education evolving to replace the old. With AI, the media has created a fear that humans may lose control of their systems, their skill sets, and their jobs through deskilling. However, we need to look at the organizational level to understand how and why AI is being incorporated into organizational strategies and business processes, and whether the deskilling risk is indeed one of replacing human labor or simply change as usual. Key concepts at the society level relate to the shape of industries and the interaction between industries of production and those of experience. Future questions include 1) whether a 4-day work week might be needed to reshape how work and leisure are configured at the society level as Hybrid Intelligence systems become more widespread across organizations and industries; and 2) how institutions such as unions, educational systems, and professional organizations will be affected by widespread use of Hybrid Intelligence.

Conclusions and recommendations 

Unlike previous technologies, AI is increasingly able to perform more complex tasks. This leads to fear at the individual, organizational, and society levels that the risks associated with deskilling may have negative effects on our quality of work and life. One fear is that AI technologies may replace knowledge workers in many professions, leading to mass unemployment of skilled labor, which in turn can have negative economic effects. As with many technologies, AI will continue to make drastic changes in the job market. The conversation about deskilling and upskilling within a Hybrid Intelligence framework, rather than a technology-driven framework, therefore matters at the individual, organizational, and society levels. If not properly managed and mitigated, these changes may lead to the deskilling of human workers. This, in turn, could result in negative long-term consequences for organizations that outweigh short-term gains in efficiency when they focus on AI as a replacement for human workers. However, the development of AI technologies, when approached from our Hybrid Intelligence framework, also presents a rich opportunity. Hybrid Intelligence foregrounds human context and needs and encourages exploring new forms of interaction and synergy between humans and algorithms for reskilling and upskilling workers. As such, Hybrid Intelligence also provides a valuable lens for analyzing AI-induced deskilling, mitigating its effects, and scaffolding and upskilling users.

The purpose of the discussions in this paper is to provide an initial exploration of the topic, and they should be seen as a call for further theoretical and empirical studies with interdisciplinary perspectives from cognitive science, management, computer science, philosophy, and ethics, as well as domain-specific case studies to extend those perspectives. To further support future research, we offer a list of questions we recommend asking when analyzing AI-induced deskilling, upskilling, and reskilling, as well as when designing and implementing human-machine systems.

  • Is the use of AI for a process HITL, HOTL, or HOOTL (full automation)?
  • If HOOTL, has all care been taken to consider unintended individual or collective effects on the organization? Is the choice between active participation (HITL) and subsequent validation (HOTL) optimized to activate, support, and maintain human competences?
  • Does learning occur for both humans and the system, so that the existing skills and competencies of the workforce are maintained and augmented?
  • If not, how can the system be designed to do so?
  • Do the workforce and the AI system learn from each other? And do AI system designers learn from the contexts they design for?
  • Note a key difference between human and machine learning: humans learn continuously (constantly updating their model of the world), whereas deep learning systems have to be retrained every time new information is added.
  • How is the learning process designed and implemented?
  • What type of learning/adaptation occurs? Distinguish between trial-and-error learning and diffusion-based learning.
  • Do humans learn about i) the context (situational knowledge), ii) themselves (meta-cognition), or iii) the model (underlying structure)?
  • Is human learning happening at the individual or the organizational level?
  • Could the HI system be evolved to adapt to individual differences and also be used to bridge communication divides in the organization?
  • What are the time scale and underlying mechanisms of the learning loop?
  • How does the evolution of organizational culture influence the adoption of AI technologies and the learning in such systems?

In conclusion, we believe that a Hybrid Intelligence framework can offer managers and decision makers a means of creating the long-term gains needed to bridge the short-term productivity gap in developing and implementing sustainable models of human-machine interaction for the 21st century.

References

Coroamă, V. C., & Pargman, D. (2020, June). Skill rebound: On an unintended effect of digitalization. In Proceedings of the 7th International Conference on ICT for Sustainability (pp. 213-219).

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.

McAfee, A., & Brynjolfsson, E. (2017). Machine, platform, crowd: Harnessing our digital future. WW Norton & Company.

Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H., & Aerts, H. J. (2018). Artificial intelligence in radiology. Nature Reviews Cancer, 18(8), 500–510.

Rinta-Kahila, T., Penttinen, E., Salovaara, A., & Soliman, W. (2018). Consequences of discontinuing knowledge work automation: Surfacing of deskilling effects and methods of recovery.

Altman, R. B. (1999). AI in medicine: The spectrum of challenges from managed care to molecular medicine. AI Magazine, 20(3), 67–77.

Attewell, P. (1987). The deskilling controversy. Work and Occupations, 14(3), 323–346. https://doi.org/10.1177/0730888487014003001

Bell, S. E., Hullinger, A., & Brislen, L. (2015). Manipulated Masculinities: Agribusiness, Deskilling, and the Rise of the Businessman‐Farmer in the United States. Rural Sociology, 80(3), 285-313.

Bhardwaj, S. (2013). Technology, and the up-skilling or deskilling conundrum. WMU Journal of Maritime Affairs, 12(2), 245–253. https://doi.org/10.1007/s13437-013-0045-6

Braverman, H. (1998). Labor and monopoly capital: The degradation of work in the twentieth century (25th anniversary ed). Monthly Review Press.

Bresnahan, T. F., Brynjolfsson, E., & Hitt, L. M. (2002). Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence. Quarterly Journal of Economics, 117, 339–376.

Brugger, F., & Gehrke, C. (2018). Skilling and deskilling: Technological change in classical economic theory and its empirical evidence. Theory and Society, 47(5), 663–689. https://doi.org/10.1007/s11186-018-9325-7

Buoy Health. (2018). Buoy Health Partners With Boston Children's Hospital To Improve The Way Parents Currently Assess Their Children's Symptoms Online. Retrieved March 30, 2021, from https://www.prnewswire.com/news-releases/buoy-health-partners-with-boston-childrens-hospital-to-improve-the-way-parents-currently-assess-their-childrens-symptoms-online-300693055.html

Dellermann, D. (2020). Accelerating Entrepreneurial Decision-Making Through Hybrid Intelligence (Doctoral dissertation).

Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019b). The future of human-AI collaboration: a taxonomy of design knowledge for hybrid intelligence systems. In Proceedings of the 52nd Hawaii International Conference on System Sciences.

Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019a). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2

Ferris, T., Sarter, N., & Wickens, C. D. (2010). Chapter 15 - Cockpit Automation: Still Struggling to Catch Up…. In E. Salas & D. Maurino (Eds.), Human Factors in Aviation (Second Edition) (pp. 479–503). Academic Press. https://doi.org/10.1016/B978-0-12-374518-7.00015-8

Fitzgerald, D. (1993). Farmers deskilled: Hybrid corn and farmers' work. Technology and Culture, 34, 324–343.

Paré, G., Sicotte, C., & Jacques, H. (2006). The effects of creating psychological ownership on physicians' acceptance of clinical information systems. Journal of the American Medical Informatics Association, 13(2), 197-205.

Hawkins, J. (2021). A thousand brains: A new theory of intelligence (First edition). Basic Books.

Heaven, D. (2019). Why deep-learning AIs are so easy to fool. Nature, 574(7777), 163–166. https://doi.org/10.1038/d41586-019-03013-5

Hoff, T. (2011). Deskilling and adaptation among primary care physicians using two work innovations. Health Care Management Review, 36(4), 338-348.

Jacobs, C., & van Ginneken, B. (2019). Google’s lung cancer AI: a promising tool that needs further validation. Nature Reviews Clinical Oncology, 16(9), 532-533.

Kamar, E. (2016). Hybrid workplaces of the future. XRDS, 23(2), 22–25.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539

Levy, J., Jotkowitz, A., & Chowers, I. (2019). Deskilling in ophthalmology is the inevitable controllable? Eye, 33(3), 347–348.

Lyytinen, K., Nickerson, J. V., & King, J. L. (2020). Metahuman systems = humans + machines that learn. Journal of Information Technology. https://doi.org/10.1177/0268396220915917

Marcus, G. (2020). The next decade in AI: four steps towards robust artificial intelligence. ArXiv Preprint ArXiv:2002.06177.

Mascha, M. F., & Smedley, G. (2007). Can computerized decision aids do “damage”? A case for tailoring feedback and task complexity based on task experience. International Journal of Accounting Information Systems, 8(2), 73–91. https://doi.org/10.1016/j.accinf.2007.03.001

McDermott, R. (1999). Why Information Technology Inspired but Cannot Deliver Knowledge Management. California Management Review, 41(4), 103–117. https://doi.org/10.2307/41166012

Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In Handbook of Human Factors and Ergonomics (2nd ed., pp. 1926–1943).

Nibert, D. (2011). Origins and consequences of the animal industrial complex. In S. Best, R. Kahn, A. J. Nocella II, & P. McLaren (Eds.), The Global Industrial Complex: Systems of Domination (p. 208). Rowman & Littlefield. ISBN 978-0739136980.

NHTSA. (2013). Preliminary Statement of Policy Concerning Automated Vehicles. http://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Vehicles_Policy.pdf

Noga, T., & Arnold, V. (2002). Do tax decision support systems affect the accuracy of tax compliance decisions?. International Journal of Accounting Information Systems, 3(3), 125-144.

Pachidi, S., Berends, H., Faraj, S., & Huysman, M. (2021). Make way for the algorithms: Symbolic actions and change in a regime of knowing. Organization Science, 32(1), 18-41.

Parente, S. L., & Prescott, E. C. (1994). Barriers to technology adoption and development. Journal of political Economy, 102(2), 298-321.

Peng, G., Wang, Y., & Han, G. (2018). Information technology and employment: The impact of job tasks and worker skills. Journal of Industrial Relations, 60(2), 201-223.

Piva, M., Santarelli, E., & Vivarelli, M. (2005). The skill bias effect of technological and organisational change: Evidence and policy implications. Research Policy, 34, 141–157.

Spenner, K. I. (1983). Deciphering Prometheus: Temporal change in the skill level of work. American Sociological Review, 48, 824–837.

Sinagra, E., Rossi, F., & Raimondo, D. (2021). Use of artificial intelligence in endoscopic training: Is deskilling a real fear?. Gastroenterology, 160(6), 2212.

Stone, G. D., Brush, S., Busch, L., Cleveland, D. A., Dove, M. R., Herring, R. J., ... & Stone, G. D. (2007). Agricultural deskilling and the spread of genetically modified cotton in Warangal. Current anthropology, 48(1), 67-103.

Sutton, S. G., Arnold, V., & Holt, M. (2018). How much automation is too much? Keeping the human relevant in knowledge work. Journal of Emerging Technologies in Accounting, 15(2), 15-25.

Trösterer, S., Gärtner, M., Mirnig, A., Meschtscherjakov, A., McCall, R., Louveton, N., ... & Engel, T. (2016, October). You never forget how to drive: driver skilling and deskilling in the advent of autonomous vehicles. In Proceedings of the 8th international conference on automotive user interfaces and interactive vehicular applications (pp. 209-216).

Weiss, D. J., & Shanteau, J. (2003). Empirical assessment of expertise. Human factors, 45(1), 104-116.

Knight, W. (2017). The Dark Secret at the Heart of AI. MIT Technology Review. https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/ 

OECD. (2018). The future of education and skills—Education 2030. https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf

Soffel, J. (2016, March 10). What are the 21st-century skills every student needs? World Economic Forum. https://www.weforum.org/agenda/2016/03/21st-century-skills-future-jobs-students/

Dellermann, D., Lipusch, N., Ebel, P., & Leimeister, J. M. (2019). Design principles for a hybrid intelligence decision support system for business model validation. Electronic Markets, 29(3), 423–441. https://doi.org/10.1007/s12525-018-0309-2