Nicky Terblanche ✉
(University of Stellenbosch Business School, Cape Town, South Africa)
There is ongoing debate about the potential impact of artificial intelligence (AI) on humanity. The application of AI in the helping professions is an active research area, but not yet in organisational coaching. Guidelines for designing organisational AI Coaches that adhere to international coaching standards, practices and ethics are needed. This conceptual paper presents the Designing AI Coach (DAIC) framework that uses expert system principles to link human coaching efficacy (strong coach-coachee relationships, ethical conduct, and focussed coaching outcomes underpinned by proven theoretical models) to established AI design approaches, creating a baseline for empirical research.
artificial intelligence coaching, e-coaching, chatbot coach, chatbot design, executive coaching, organisational coaching
Accepted for publication: 17 July 2020
03 August 2020
© the Author(s)
Published by Oxford Brookes University
Coaching, as a helping profession, has made significant inroads in organisations as a mechanism to support people’s learning, growth, wellness, self-awareness, career management and behavioural change (Passmore, 2015; Segers & Inceoglu, 2012). At the same time, the rise of AI is hailed by some as the most significant event in recent human history with the potential to disrupt virtually all aspects of human life (Acemoglu & Restrepo, 2018; Brynjolfsson & McAfee, 2012; Mongillo, Shteingart, & Loewenstein, 2014). However, claims of the abilities and potential of AI are often overstated and it seems unlikely that we will have AI that matches human intelligence in the near future (Panetta, 2018). This does not mean that AI is not already having a meaningful impact in many contexts, including helping professions, such as healthcare and psychology (Pereira & Diaz, 2019). It seems poised for further refinement, growth and possible disruption and it is therefore inevitable that all coaching stakeholders will need to pre-emptively consider how to leverage, create and adopt AI responsibly within the coaching industry (Lai, 2017). The use of AI in organisational coaching is under-researched and specifically, it is not known how to effectively design an organisational AI Coach.
For the purposes of this paper, ‘coaching’ is defined as ‘a human development process that involves structured, focused interaction and the use of appropriate strategies, tools and techniques to promote desirable and sustainable change for the benefit of the client and potentially for other stakeholders’ (Bachkirova, Cox, & Clutterbuck, 2014, p. 1). Furthermore, this paper limits its scope to organisational coaching that includes genres such as executive coaching, workplace coaching, managerial coaching, leadership coaching and business coaching (Grover & Furnham, 2016). The organisational coaching industry is growing rapidly and has become a global phenomenon used by numerous organisations worldwide to develop their employees (Theeboom, Beersma, & Vianen, 2013). As a growing industry and emerging profession, coaching evolves continuously and it seems inevitable that AI will play an increasingly significant role in organisational coaching in the future.
With sparse research available on the creation and application of AI in organisational coaching, this conceptual paper asks what needs to be considered, in principle, when designing an AI Coach. In answer, the Designing AI Coach (DAIC) framework is presented. This framework uses principles from expert systems to guide the design of AI Coaches based on widely agreed predictors of coaching success: strong coach-coachee relationships (De Haan et al., 2016; Graßmann, Schölmerich & Schermuly, 2019; McKenna & Davis, 2009), ethical conduct (Diochon & Nizet, 2015; Gray, 2011; Passmore, 2009) and focussed coaching outcomes, all underpinned by proven theoretical models (Spence & Oades, 2011).
This paper proceeds as follows: It starts by contextualising AI Coaching within the current organisational coaching literature and shows that current definitions are inadequate. Next a brief overview of AI is provided where it is argued that chatbots, a type of AI and expert system, have potential for immediate application in organisational coaching. Since chatbot AI Coaching in organisations has not been well researched, this paper explores how chatbots have been designed and applied in related fields. The perceived benefits and challenges of coaching chatbots are described. Finally, the novel DAIC framework is presented before concluding with suggestions for further research.
Although the purpose of this paper is not to provide a systematic literature review of AI in organisational coaching, a literature search was conducted to understand how AI is currently positioned within organisational coaching. Using Google Scholar, a search for ‘artificial intelligence coach’ and ‘artificial intelligence coaching’ did not produce any peer-reviewed journal articles on AI and organisational coaching within the first 10 result pages. Neither did replacing ‘coach/coaching’ with ‘organisational coach/coaching’ yield any desired results. A number of papers on AI Coaching in healthcare did, however, appear, and these papers used the term ‘e-coaching’ to describe the use of AI in that context, with Kamphorst (2017) providing a particularly useful overview.
Using ‘e-coaching’ as a search term revealed a number of relevant results in relation to coaching. Clutterbuck (2010, p. 7) described e-coaching as a developmental relationship, mediated through e-mail and potentially including other media. E-coaching is described by Hernez-Broome and Boyce (2010, p. 285) as ‘a technology-mediated relationship between coach and client’. Geissler et al. (2014, p. 166) defined it as ‘coaching mediated through modern media’, while Ribbers and Waringa (2015, p. 91) described e-coaching as ‘a non-hierarchical developmental partnership between two parties separated by a geographical distance, in which the learning and reflection process was conducted through both analogue and virtual means’.
In healthcare and psychology, the search for ‘e-coaching’ revealed a broader definition with applications like the promotion of physical activity (Klein, Manzoor, Middelweerd, Mollee & Te Velde, 2015); regulating nutritional intake (Boh et al., 2016); treatment of depression (Van der Ven et al., 2012); and insomnia (Beun et al., 2016). In these domains, ‘e-coaching’ refers to the process of not just facilitating the coaching, but includes autonomous entities doing the actual coaching (Kamphorst, 2017, p. 627). This extended definition contrasts with the organisational coaching literature where currently ‘e-coaching’ seems to refer only to varying communication modalities between a human coach and client. To demarcate what this paper argues to be a new area of practice and research in organisational coaching, a term to capture the use of autonomous coaching agents that could completely replace or at least augment human coaches is proposed: Artificial Intelligence (AI) Coaching. It is proposed that AI Coaching be defined independent of e-coaching to clearly distinguish it as a type of coaching entity and not merely another coaching modality. To fully grasp the concept of AI Coaching, a basic understanding of the nature of AI itself is required.
‘Artificial Intelligence’ may be defined as ‘the broad collection of technologies, such as computer vision, language processing, robotics, robotic process automation and virtual agents that are able to mimic cognitive human functions’ (Bughin & Hazan, 2017, p. 4). AI is also described as a computer program combined with real-life data, which can be trained to perform a task and can become smarter about its users through experience with its users (Arney, 2017, p. 6). Another view states that AI is a science dedicated to the study of systems that, from the perspective of an observer, act intelligently (Bernardini, Sônego, & Pozzebon, 2018). AI research started in the early 1950s. It is an interdisciplinary field that applies learning and perception to specific tasks, potentially including coaching.
A distinction can be made between artificial general intelligence (Strong AI) and artificial narrow intelligence (Weak AI). Strong AI is embodied by a machine that exhibits consciousness, sentience and the ability to learn beyond what was initially intended by its designers, and can apply its intelligence in more than one specific area. Weak AI focusses on specific, narrow tasks, such as virtual assistants and self-driving cars (Siau & Yang, 2017). Expert systems are considered a form of Weak AI and are described as complex software programs based on specialised knowledge, able to provide acceptable solutions to individual problems in a narrow topic area (Chen, Hsu, Liu & Yang, 2012; Telang, Kalia, Vukovic, Pandita & Singh, 2018).
To match a human coach, a Strong AI entity would be needed since it promises to do everything a human can do, and more. This field of research is, however, in its infancy, with some projections indicating that we may not see credible examples of Strong AI in the foreseeable future (Panetta, 2018). The implication is that an AI entity is highly unlikely to convincingly perform all the functions of a human coach any time soon.
While Strong AI may not yet be a possibility for coaching, Weak AI in the form of expert systems provides options worth exploring. For the purpose of this discussion, the focus therefore is on a particular embodiment of Weak AI that is currently showing potential for application in coaching, namely conversational agents or chatbots.
A ‘conversational agent or chatbot system’ is defined as a computer programme that interacts with users via natural language either through text, voice, or both (Chung & Park, 2019). Chatbots typically receive questions in natural human language, associate these questions with a knowledge base, and then offer a response (Fryer & Carpenter, 2006). Various terms are used to describe chatbots, including conversational agents, talkbots, dialogue systems, chatterbots, machine conversation systems and virtual agents (Saarem, 2016; Saenz, Burgess, Gustitis, Mena, & Sasangohar, 2017). The origins of chatbot-type systems can be traced back to the 1950s, when Alan Turing proposed a five-minute test (also known as the Imitation Game) based on a text message conversation, where a human had to predict whether the entity they were communicating with via text was a computer program or not (Turing, 1950).
Two famous chatbots of the past are Eliza, developed in 1966, and PARRY, developed in the 1970s. Eliza imitated a Rogerian psychologist, using simple pattern matching techniques to restructure users’ sentences into questions. Notwithstanding the simplistic approach, its performance was considered remarkable, partly due to people’s inexperience with this type of technology (Bradeško & Mladenic, 2012). PARRY imitated a paranoid person and when its transcripts were compared to those of real paranoia patients, psychiatrists were able to distinguish between the two sets only 48% of the time (Bradeško & Mladenic, 2012). More recently, chatbots have found new applications in the services industry where they are used to assist with customer queries, advice and fulfilment of orders (Araujo, 2018). Chatbots have proliferated, with more than 100 000 chatbots created in one year on Facebook Messenger alone (Johnson, 2017).
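Eliza's core technique, pattern matching that restructures a user's statement into a question, can be sketched in a few lines. The Python below is an illustrative reconstruction only; the rules and wording are invented and far simpler than Eliza's actual script:

```python
import re

# Illustrative Eliza-style rules: pattern -> response template.
# \1 refers to the text captured by (.*) in the pattern.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), r"Why do you say you are \1?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), r"How long have you felt \1?"),
    (re.compile(r"my (.*)", re.IGNORECASE), r"Tell me more about your \1."),
]
FALLBACK = "Please go on."

def reply(user_input: str) -> str:
    """Return the first matching restructured question, else a neutral prompt."""
    text = user_input.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return match.expand(template)
    return FALLBACK

print(reply("I am stressed about work"))  # Why do you say you are stressed about work?
print(reply("The weather is nice"))       # Please go on.
```

Because the rules map surface patterns directly to canned templates, such a system gives an impression of understanding without any model of the conversation, which is precisely the limitation noted later in this paper.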
As a form of Weak AI and expert system, chatbots are usually designed using a set of scripted rules (retrieval-based), AI (generative-based), or a combination of both to interact with humans (De Angeli & Brahnam, 2008). Driven by algorithms of varying complexity and optionally employing AI technologies such as machine learning, deep learning and Natural Language Processing, Generation and Understanding (NLP, NLG, NLU), chatbots respond to users by deciding on the appropriate response given a user input (Neff & Nagy, 2016; Saenz et al., 2017). From the expert system perspective, chatbots attempt to mimic human experts in a particular narrow field of expertise (Telang et al., 2018).
In organisational coaching, no empirical studies on the design and effectiveness of organisational coaching chatbots seem to be available. A broader assessment, including fields related to organisational coaching such as health, well-being and therapy, provides some evidence of the application of chatbots (Laranjo da Silva et al., 2018). Research has been conducted on the efficacy of chatbots that assist people with aspects such as eating habits, depression, neurological disorders and the promotion of physical activity (Bickmore, Schulman, & Sidner, 2013; Bickmore, Silliman et al., 2013; Pereira & Diaz, 2019; Watson, Bickmore, Cange, Kulshreshtha & Kvedar, 2012).
Research from the healthcare domain claims that AI Coaching provides a wide range of strategies and techniques intended to help individuals achieve their goals for self-improvement (Kamphorst, 2017; Kaptein, Markopoulos, De Ruyter, & Aarts, 2015) and can potentially play an essential role in supporting behavioural change (Kocielnik, Avrahami, Marlow, Lu, & Hsieh, 2018). Other advantages of chatbot coaches include the possibility of interacting anonymously, especially in the context of sensitive information (Pereira & Diaz, 2019). People who interact with chatbots may therefore feel less shame and be more willing to disclose information, display more positive feelings towards chatbots and feel more anonymous, as opposed to interacting with real humans (Lucas, Gratch, King, & Morency, 2014). This is especially important in organisational settings where different stakeholders are involved in the coaching process and coachees are often caught between the firm’s expectations and their own needs (Polsfuss & Ardichvili, 2008).
Although research on actual efficacy and benefits of coaching chatbots is rare, there seems to be agreement that a chatbot can do the following (Bii, 2013; Driscoll & Carliner, 2005; Klopfenstein, Delpriori, Malatini & Bogliolo, 2017):
However, there are also still numerous unsolved challenges regarding chatbots described in literature (Britz, 2016), including:
The main challenge in creating realistic chatbots is seen to be the difficulty of maintaining the ongoing context of a conversation (Bradeško & Mladenic, 2012). Current approaches use pattern-matching techniques to map input to output, but this approach rarely leads to purposeful, satisfying conversations. Understanding the benefits offered by chatbots enables us to better understand how they can realistically contribute to the organisational coaching domain.
Given the current use and perceived benefits of chatbot AI Coaches in related fields, the question arises of how to design these entities for organisational coaching. In an attempt to answer this question, an expert system design approach was followed. Expert system design dictates that the system should be modelled on how human experts execute a task (Lucas & Van Der Gaag, 1991). For AI Coaching this implies stipulating what constitutes effective human coaching, and mapping this to acknowledged chatbot design principles. This approach was followed to derive the DAIC framework. The two facets of the DAIC framework, effective human coaching and chatbot design principles, are discussed next, after which the framework itself is presented.
Following the expert system approach, a knowledge base modelled on how human coaches operate effectively guided the development of the DAIC framework. It consists of four principles: (i) widely agreed human-coach efficacy elements; (ii) the use of recognised theoretical models; (iii) ethical conduct; and (iv) a narrow coaching focus. The fourth principle stems from the inherent limitation of chatbots as Weak AI and expert systems, namely the ability to focus on only one particular task. Each principle is elaborated on next.
To determine what constitutes the first design principle (widely agreed human-coach efficacy elements), the actively researched and growing body of knowledge on ‘how’ coaching works (De Haan, Bertie, Day, & Sills, 2010; Theeboom et al., 2014) was consulted. It appears that there are different opinions on the matter. Grant (2012, 2014) found that goal-orientation is the most important determinant of coaching success while Bozer and Jones (2018) identified seven factors that contribute to successful workplace coaching: self-efficacy, coachee motivation, goal orientation, trust, interpersonal attraction, feedback intervention, and supervisory support. While there are varying opinions, a number of scholars agree that one aspect contributes more than any other to coaching success: the coach-coachee relationship (De Haan et al., 2016; Graßmann, Schölmerich & Schermuly, 2019; McKenna & Davis, 2009).
Aspects that help build a strong coach-coachee relationship include trust, empathy and transparency (Grant, 2014; Gyllensten & Palmer, 2007), with trust in particular being linked to higher levels of coachee commitment to the coaching process (Bluckert, 2005). Human coaches can build trust by being predictable, transparent (about their ability) and reliable (Boyce, Jeffrey Jackson & Neal, 2010). The perceived trustworthiness of another person is also an important contributor to strong relationships (Kenexa, 2012). As a construct, trustworthiness consists of three elements. Ability is the trust instilled by the skills and competencies of a person (Mayer, Davis & Schoorman, 1995). Benevolence refers to the perception of being acted towards in a well-meaning manner (Schoorman, Mayer & Davis, 2007). Integrity is a measure of adherence to agreed-upon principles between two parties (Mayer et al., 1995). In summary, the following aspects of a coach appear to be important contributors to strong coaching relationships and resultant successful coaching interventions: trust, empathy, transparency, predictability, reliability, ability, benevolence and integrity.
The second principle of the DAIC framework is the need for evidence-based practice (Grant, 2014). One of the criticisms often levelled at coaching is that practitioners use models and frameworks borrowed from other professions without these having been empirically verified for the coaching context (Theeboom et al., 2013). Therefore, in addition to a strong coach-coachee relationship, an AI Coach must also be based on theoretically sound coaching approaches (Spence & Oades, 2011).
The third principle that underpins the DAIC framework is ethically sound practice. Ethics in coaching is an important and active research area (Diochon & Nizet, 2015; Gray, 2011; Passmore, 2009). The introduction of intelligent autonomous AI Coaching systems raises unique ethical concerns (Kamphorst, 2017). These concerns are underscored by users’ increasing demand for assurance that the algorithms and AI used in their AI Coaches are structurally unbiased and ethical. Intended users of technologies like chatbots must be confident that the technology will meet their needs, will align with existing practices, and that the benefits will outweigh the detriments (Kamphorst, 2017).
Four pertinent types of ethical and legal issues were identified by Kamphorst (2017) and are applicable to AI Coaches in the context of organisational coaching: (i) privacy and data protection; (ii) autonomy; (iii) liability and division of responsibilities; and (iv) biases.
Firstly, both the need and the ability of AI Coaching systems to continuously collect, monitor, store and process information raise issues regarding privacy and data protection. The answers to questions such as ‘who owns and can access the data obtained from a coaching session?’ must be made clear to users.
Secondly, since AI Coaching combines the paradigm of empowering people and enhancing self-regulation while simultaneously entering their personal spheres, the personal autonomy of users may be affected in relatively new ways, both positive and negative. It also raises the question of how to deal with potential manipulation by an AI Coaching system. Autonomous AI Coaching systems may offer users suggestions for action, thereby affecting their decision-making process (Luthans, Avey & Patera, 2008). Being influenced in this way seems to conflict with the classical understanding of self-directedness as professed in coaching.
Thirdly, many stakeholders with varying levels of diversity, specialisation and complex interdependencies are involved in creating an AI Coaching system. Therefore, the division of liabilities and responsibilities among the relevant stakeholders (producers, providers, consumers and organisations) cannot be ignored. Creating responsible AI Coaches also requires alertness to the possibility that some clients need to work with a different specialist and not a coach (Braddick, 2017). The acceptance of AI Coaching applications will be constrained if the design and use of the system adhere to a different set of ethical principles than those of their intended users.
Lastly, the machine learning typically used in AI relies on large amounts of data. Data originates from many sources and is not necessarily neutral. Care must be taken to ensure that potential biases inherent in the data are not transferred to the AI Coach via the learning process or, where this is unavoidable, these biases must be made explicit (Schmid Mast, Gatica-Perez, Frauendorfer, Nguyen & Choudhury, 2015).
AI Coaching in the organisational context presents additional ethical challenges. In traditional human-to-human coaching, contracting for coaching in organisations typically involves three parties: the coach, the coachee and the sponsoring organisation paying for the coaching. Although the organisation pays for the coaching, there is usually a confidential agreement between coach and coachee to the exclusion of the organisation (Passmore & Fillery-Travis, 2011). If an AI Coach is used and paid for by the organisation, the ethical question about who owns the details of the conversation must be made clear. It would potentially be unethical for the organisation to have access to the AI Coach-coachee conversation.
The final principle that underpins the DAIC framework relates to coaching focus. In traditional human-to-human coaching, several coaching facets could be pursued simultaneously, including for example goal-attainment, well-being, creation of self-awareness and behavioural change. Weak AI, however, at best acts in a narrow, specialised field (Siau & Yang, 2017). A prerequisite imposed by Weak AI, and especially expert systems, is that the focus of the system must be limited to a narrow area of expertise (Chen et al., 2012). This implies a specific coaching focus. The focus of chatbot AI Coaches should therefore initially be limited to one aspect typically associated with coaching.
There are five chatbot design principles included in the DAIC framework: (i) level of human likeness; (ii) managing capability expectations; (iii) changing behaviour; (iv) reliability; and (v) disclosure.
The first principle raises the question of how human-like a chatbot AI Coach should be. Based on the desired human coach attributes described earlier, it seems logical that a chatbot AI Coach would need identity and presence as well as the ability to engage emotionally (Xu, Liu, Guo, Sinha, & Akkiraju, 2017). This is not an easy problem to solve, as demonstrated by the ‘uncanny valley’ phenomenon, where objects that visually appear very human-like trigger negative impressions or feelings of eeriness (Ciechanowski, Przegalinska, Magnuski & Gloor, 2019; Sasaki, Ihaya & Yamada, 2017). Creating a chatbot that closely mimics a human is therefore, counter-intuitively, not necessarily the best approach. Ciechanowski et al. (2019), for example, showed that people experience less of the ‘uncanny effect’ and less negative affect when interacting with a simple text-based chatbot as opposed to a more human-like avatar chatbot. That said, the uncanny valley is a continuum, implying that some human-like aspects may be beneficial. Araujo (2018), for example, showed that when a chatbot employs human-like cues, such as having a name, conversing in the first person and using informal human language including ‘hello’ and ‘good-bye’, users experienced a higher level of social presence than when these factors are absent. These cues automatically imply a sense of human self-awareness or self-concept by the chatbot, making it more anthropomorphic and relatable than a chatbot without a self-concept (Araujo, 2018).
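The human-like cues identified by Araujo (2018), a name, first-person language and informal greetings, are simple to operationalise in code. The sketch below is a hypothetical illustration (the bot name and all phrasing are invented, not taken from any existing system) of wrapping a chatbot's core replies in such cues:

```python
# Hypothetical sketch: wrapping a chatbot's replies in the human-like cues
# Araujo (2018) links to higher perceived social presence: a name,
# first-person language and informal greetings/farewells.
class SocialPresenceWrapper:
    def __init__(self, name: str = "Kai"):  # the bot name is invented
        self.name = name
        self.greeted = False

    def respond(self, user_input: str, core_reply: str) -> str:
        if user_input.strip().lower() in {"bye", "goodbye"}:
            return "Good-bye! I enjoyed our conversation."  # informal farewell
        parts = []
        if not self.greeted:
            parts.append(f"Hello! I'm {self.name}.")  # name + informal greeting
            self.greeted = True
        parts.append(core_reply)
        return " ".join(parts)

bot = SocialPresenceWrapper()
print(bot.respond("I want to work on my goals", "What would you like to achieve?"))
# Hello! I'm Kai. What would you like to achieve?
```

Keeping these cues in a thin wrapper, separate from the coaching logic itself, would also make it easier to vary the degree of humanness experimentally, as the adoption research questions later in this paper suggest.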
For the second principle, it is important to set and manage expectations of the AI’s capabilities by being clear on its limitations (Lovejoy, 2019). If clients interacting with a chatbot AI Coach expect the same level of intelligence as from a human coach, they are bound to be disappointed, which will in turn jeopardise the trust relationship. The chatbot AI coach must therefore clearly communicate its purpose and capabilities (Jain et al., 2018).
The third principle relates to the fact that chatbot AI Coaches could, and likely will, change their behaviour as they learn from continued usage. Users must therefore be made aware that their interactions are used to improve the AI Coach and that, because of this, the interactions may change. It must also be clear that the AI Coach could ask for feedback from the user on how it performs (Lovejoy, 2019).
The fourth principle, reliability, addresses the fact that because an AI Coach is continuously learning, it is bound to make mistakes. When it fails, the AI Coach needs to do so gracefully and remind the user that it is in an ongoing cycle of learning to improve (Lovejoy, 2019; Thies et al., 2017).
The fifth principle, disclosure, states that even though the aim of an AI Coach is to eventually replace a human coach, at this stage of technological maturity it is probably best to clearly communicate to the user that the AI Coach is in fact not a human and does not have human capabilities. This knowledge may assist users in managing their expectations and, for example, in choosing not to disclose information as they would to a human coach (Lee & Choi, 2017).
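Several of these design principles (managed expectations, behaviour change through learning, graceful failure and disclosure) amount to things the chatbot must tell the user about itself. A minimal sketch of how they might surface in an onboarding message and a failure handler follows; all wording is illustrative, not prescribed by the DAIC framework:

```python
# Illustrative messages operationalising four chatbot design principles:
# disclosure, managed capability expectations, behaviour change through
# learning, and graceful failure.
ONBOARDING = (
    "Hi, I'm an automated coaching assistant, not a human coach. "      # disclosure
    "I can help you with one thing only: setting and tracking goals. "  # scope/expectations
    "I learn from our conversations, so my replies may change over "    # behaviour change
    "time, and I may occasionally ask how I'm doing."                   # feedback
)

FAILURE = (
    "Sorry, I didn't understand that. I'm still learning; "  # graceful failure
    "could you rephrase, or pick one of the options below?"
)

def handle(user_input: str, understood: bool) -> str:
    """Return the graceful failure message when input could not be interpreted."""
    if not understood:
        return FAILURE
    return f"Let's continue: you said '{user_input}'."

print(ONBOARDING)
print(handle("asdfgh", understood=False))
```

The point of the sketch is that these principles cost little to implement: they are mostly a matter of honest, consistent messaging rather than additional AI capability.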
Having discussed the two facets that inform the DAIC framework (human-coach effectiveness and chatbot design principles), this paper now presents the organisational DAIC Framework in Figure 1. The framework postulates that an effective chatbot AI Coach must focus on a specific coaching outcome, such as goal-attainment, well-being, self-awareness or behavioural change. Furthermore, the internal operating model of the chatbot must be based on validated theoretical models that support the specific coaching outcome. In addition, the important predictors of coaching success (a strong coach-coachee relationship) must be embedded in the chatbot interaction model. Finally, the chatbot’s behaviour must be guided by an acceptable organisational coaching ethical code, all the while being cognisant of the requirements, restrictions and conventions of a typical organisational context.
To implement the DAIC framework, chatbot design best practices must be used. Table 1 provides a mapping between aspects of strong coach-coachee relationships and chatbot design considerations.
The use of AI in the helping professions is a relatively new research area, and even more so in organisational coaching. Numerous opportunities exist for scientific investigation with a focus on the application of Weak AI in the form of chatbots. Two broad areas of research need immediate focus: (i) efficacy studies looking at how well a chatbot coach is able to fulfil certain coaching tasks; and (ii) technology adoption studies considering the factors that encourage or dissuade users from using chatbot coaches.
In terms of coaching efficacy studies, typical research focus areas from human coach studies could be replicated with AI Coaches, including how effective an AI Coach is in helping a client with, for example, goal attainment, self-awareness, emotional intelligence, well-being and behavioural change. How do these results compare to clients using human coaches? As with human coach research, it is important to design robust studies that employ large-scale randomised control trials and longitudinal research strategies. A positive aspect of using AI Coaches, once they have been built, is that large-scale studies are much cheaper to perform since there is no need to recruit and compensate human coaches. Logistically there are also fewer barriers, since chatbot AI Coaches are typically available on smartphones, tablets and personal computers.
In terms of technology adoption research, questions about which factors influence the adoption of a chatbot AI Coach by users need answering. What influences trust in AI Coaching? What role does the personality type of the client play in trusting and engaging with a chatbot coach? What level of humanness of a chatbot is optimal, for example, should a chatbot have a gender? When is it appropriate for the AI Coach to ask for user specified input (a much more difficult AI problem to solve) versus presenting users with predefined options? What should be included in the initial conversation to set realistic expectations and build trust, for example, what is the optimal balance between a chatbot trying to be as human as possible or admit that it has limitations? Which factors play a role in technology adoption? The well-known Technology Adoption Model (TAM) (Davis, Bagozzi, & Warshaw, 1989) and its numerous variants could be used to explore answers to these questions.
Perhaps researchers could use existing coaching competency frameworks, such as those of the International Coach Federation (ICF), as a guide to evaluate AI Coaches. One approach could be to ask credential adjudicators of various coaching bodies to officially evaluate the AI Coach. A cursory glance at the ICF competency model (ICF, 2017) suggests that currently, existing coaching chatbots could very well already pass some of the entry-level coach certification criteria.
Finally, it must be acknowledged that modelling AI Coaches on human coaches, the approach taken by this paper, may not be optimal. It could be that AI Coaches need skills and characteristics distinctly different from those of human coaches. However, since no empirical evidence currently exists to prove or refute this assumption, this paper argues that, in order to gather empirical evidence, it is acceptable to start with the human-based expert system approach as a baseline. In time, and with more empirical evidence, the true nature of AI Coaches will hopefully emerge.
AI is not currently sufficiently advanced to replace a human coach and, given the trajectory of development in Strong AI, it is unlikely that we will see an AI Coach match a human coach any time soon. Human coaches will continue to outperform AI Coaches in terms of understanding the contexts surrounding the client, connecting with the client as a person, and providing socio-emotional support. However, AI technology will inevitably improve as machine learning and the processing and understanding of natural language continue to improve, leading to AI Coaches that may excel at specific coaching tasks.
In order to guide and monitor the rise of AI Coaches in organisational coaching, the various stakeholders, such as practicing coaches, coaching bodies (such as ICF, COMENSA and EMCC), coach training providers and purchasers of coaching services (such as Human Resource professionals), are encouraged to educate themselves on the nature and potential of AI Coaching. They could actively participate in securing an AI Coaching future that ethically and effectively contributes to the coaching industry. It is hoped that the DAIC framework presented in this paper will provide some direction for this important emerging area of coaching practice and research.
Dr Nicky Terblanche is a senior lecturer/researcher in the MPhil in Management Coaching and MBA Information Systems at the University of Stellenbosch Business School.