International Journal of Evidence Based Coaching and Mentoring
2021, S15, pp.187-197. DOI: 10.24384/er2p-4857

Academic Paper

Augmenting Coaching Practice through digital methods

Kevin Ellis-Brush


Introduction

Chatbots present artificial social actors to the coaching field as a transformational technology with the potential to democratise the helping profession. The notion that a robot can coach a human was, until relatively recently, confined to the realms of futurologists and writers of science fiction. The concept of a computer replicating subtle, nuanced human conversation has been a quest for technologists since the early 1950s (Turing, 2009). The dawning of the fourth Industrial Revolution is transforming the capacity of machines to perform tasks that can influence, alter and direct human behaviours. Schwab (2016) considers that the technological revolution will be so profound that humankind will need to respond across all aspects of society. The coaching field will not be sheltered from such innovations. One manifestation of these technological developments is the coaching chatbot accessed on a smartphone. However, coaching chatbots and their software architecture require further development before artificial intelligence (AI) coaches can match the talents of human coaches (Terblanche, 2020). Despite the technological challenges, computer scientists and business entrepreneurs continue to invest their talents and resources in developing self-help apps, presenting the coaching field with an opportunity to adopt sophisticated machines to further aid its clientele.

Technologists designing coaching apps confront intrinsic complexity as they attempt to replicate the skills possessed by human coaches. Kamphorst (2017) proposes that the design of these systems should contain ‘computerised components that constitute an artificial entity that can observe, reason about, learn from and predict a user’s behaviours, in context and over time, and that engages proactively in an on-going collaborative conversation with the user to aid planning and to promote effective goal striving through the use of persuasive techniques’ (p. 629). The collaborative nature of the interface echoes the characteristic of a relationship widely believed to be a critical success factor in traditional coaching. A human-like relationship between an artificial, mathematically created actor and a coachee appears paradoxical and is worthy of further enquiry. Whilst neighbouring fields have sought to research this apparent contradiction, much work is still required.
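
To make Kamphorst's component list concrete, the sketch below shows one way such an observe-reason-predict-converse loop could be wired together. It is a minimal illustration, not a description of any existing product: the class name ECoachingAgent, its methods and the toy engagement heuristic are all assumptions introduced here.

```python
from dataclasses import dataclass, field

@dataclass
class ECoachingAgent:
    """Illustrative skeleton of the components Kamphorst (2017) describes:
    observe user behaviour, learn from and predict it, and proactively
    converse to promote goal striving."""
    observations: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Record a user behaviour, in context and over time.
        self.observations.append(event)

    def predict_engagement(self) -> float:
        # Toy stand-in for a learned model: the share of logged events
        # in which the user reports acting on a goal.
        if not self.observations:
            return 0.0
        acted = sum("completed" in e for e in self.observations)
        return acted / len(self.observations)

    def converse(self) -> str:
        # Proactive, mildly persuasive prompt chosen from the prediction.
        if self.predict_engagement() < 0.5:
            return "You set a goal this week - what small step could you take today?"
        return "Great progress - shall we review what is working well?"

agent = ECoachingAgent()
agent.observe("goal set: give a colleague constructive feedback")
agent.observe("completed: gave feedback on Tuesday")
print(agent.converse())
```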

Related medical professions continue to investigate the efficacy and application of AI-powered software programmes that alter individuals’ psychological behaviours (Hayes, 2020; Fiske et al., 2019). Psychotherapy is one field where there has been a concerted effort to enable computer-guided cognitive behavioural therapy (CCBT), in which the main focus is to enable interactions between therapist and client using technology (Peck, 2010). Practitioners in this profession have sought technological guidance in their dialogues with clients and have benefitted from using CCBT. A meta-analysis of 49 randomised controlled trials indicated that CCBT was as effective as, and in some cases more effective than, traditional methods in helping treat common mental health disorders (Grist and Cavanagh, 2013).

In the wider literature, there appear to be no proponents who suggest that the relationship between coach and coachee should be transactional and algorithmic, or deemed remote. Indeed, coaching is widely considered to require the formation of a personal relationship between a coach and a coachee (Bluckert, 2005; Ghods, 2009; Spaten et al., 2016; Walton, 2014; Alvey and Barclay, 2007). From this point of consensus, writers, practitioners and academics have provided a plethora of varying beliefs on the qualities and key components that dyads should possess to achieve a successful outcome for the coachee. The degree of personal engagement demanded varies according to the coaching assignment. According to Hawkins and Smith (2014), coaching assignments lie along a continuum, from the transactional transfer of knowledge in skills coaching to the higher level of engagement required in transformational coaching. Along this continuum sit performance and development coaching, as shown in Figure 1. The illustration also overlays the degree of working alliance that could be expected along the continuum.

Figure 1: Coaching assignments vs working alliance

An amalgam of the coaching continuum (Hawkins and Smith, 2014) and the coaching relationship (Sun et al., 2013)

There is a clear contrast between the levels of personal engagement required of coaches across different coaching assignments. Optimal coaching alliances have a contextual dimension; skills coaching requires less intrusion into the coachee’s psychology and can be more of a didactic process, whereas a transformational coaching assignment requires a coach to explore underlying beliefs, attitudes, emotions and cognitive behaviours (thinking patterns) (Hawkins and Smith, 2014; O’Broin and Palmer, 2009). Arguably, the latter requires greater abilities in the coach and higher investment in the relationship, where authenticity and co-working between coach and coachee in determining goals and objectives come to the fore. The developers of coaching apps would, therefore, be wise to concentrate their efforts on skills coaching, where mastery of the full range of human qualities matters less than it does in replicating a human-to-human enterprise. Accordingly, this study sought a coaching app designed to enhance a specific human emotional skill, examined through the lens of a working alliance.

A universal model of the coaching relationship has been suggested and given the term ‘coaching alliance’ (O'Broin and Palmer, 2010), which appears interchangeable with the term ‘working alliance’. The working alliance is founded in psychotherapy and consists of three aspects: goals, tasks and bonds (Bordin, 1979). Current chatbot coaching apps have been designed to mirror these three concepts, with the majority of software programs establishing goals and tasks with their users on platforms that seek to create a bond with the technology. Despite new artificial coaching agents being developed by technologists, there appears to be very little empirical research on their efficacy or the underlying coaching processes in play. This study has attempted, in part, to address this issue by exploring whether a working alliance, a key relationship component and predictor of positive coaching outcomes, can develop between a coachee and an artificial agent, and deliver successful results.
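
As an illustration of how a chatbot session might mirror Bordin's three components, the following hypothetical sketch pairs agreed goals and tasks with bond-building conversational moves. It is not the design of WYSA or any named product; the AllianceSession class, its fields and the interleaving logic are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AllianceSession:
    """Hypothetical mapping of Bordin's (1979) working-alliance components
    (goals, tasks, bonds) onto a chatbot coaching session."""
    goal: str                                        # the agreed outcome
    tasks: List[str] = field(default_factory=list)   # agreed activities, e.g. CBT exercises
    bond_moves: List[str] = field(default_factory=list)  # rapport-building messages

    def next_prompt(self) -> str:
        # Interleave a bond-building message with the next agreed task,
        # so a single turn serves both the 'bond' and 'task' components.
        bond = self.bond_moves.pop(0) if self.bond_moves else ""
        task = self.tasks.pop(0) if self.tasks else f"Reflect on your goal: {self.goal}."
        return f"{bond} {task}".strip()

session = AllianceSession(
    goal="build self-resilience",
    tasks=["Try a two-minute breathing exercise.",
           "Write down one thing that went well today."],
    bond_moves=["Thanks for checking in - I'm here whenever you need me."],
)
print(session.next_prompt())
```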

The next section details the operationalisation of the research enquiry, followed by the presentation of one principal finding. The paper concludes with a discussion of the contribution to knowledge, the implications for the field of coaching, and suggestions for further research.

Methodology

Before discussing this study’s methodology, it is worth noting that the literature review provided an insight into the various methodologies, and their accompanying philosophical positions, employed by researchers on this topic. Perhaps unsurprisingly, positivist and post-positivist positions were most often encountered in computer technology papers, reflecting the natural sciences’ tradition of hypothesis testing, whereas studies that examined psychological coaching methods and change theory tended to adopt constructivist worldviews. A further conundrum found in the background reading was that the ontological position of AI is unclear. Questions of reality and being are particularly relevant to this field of study. Should the algorithms within the software architecture of a coaching app that attempts to mimic a human coaching experience be considered an entity beyond human interaction, or do humans anthropomorphise the technology to make it a real entity? Whether a piece of software can be regarded as an agent was considered by Hawley (2019), who suggested that scholars in theology and philosophy might form collaborative partnerships to help navigate the challenges of establishing an ontology of AI. Notwithstanding this ongoing debate, the author’s ontological position was that of objectivism: facts are independent of the human mind, and reality is discoverable through scientific process and research. Having thus described the study’s view of social reality, the paper continues by presenting the operationalisation of the research enquiry.

To increase the reliability of the research, the coaching app was studied with a specific user population with similar education profiles and career progressions. The gatekeeper company from which the volunteers were sought was in the banking sector, and 48 participants were recruited in total. The cohort was asked to use a coaching chatbot called WYSA, a downloadable software app (a computer program designed to simulate conversation with a human) accessed via a smartphone. It employs artificial intelligence, natural language processing (NLP) and machine learning (ML) algorithms, with specifically crafted conversations intended to help build an individual’s self-resilience through evidence-based and validated tools and techniques, such as cognitive behavioural therapy (CBT). Users of the chatbot engage in one-to-one text conversations with WYSA, represented by a small penguin-like digital image. The participants used the app over eight weeks, after an initial acclimatisation with the technology in the preceding week (week -1). The research design consisted of two distinct elements. First, a validated questionnaire testing the individuals’ self-resilience and working alliance with the technology was distributed to all participants at T1 (one week after the start) and at T2 (eight weeks later, i.e., one week after the end of the study). Second, semi-structured interviews were held at T1 and T2 with several junior managers randomly selected from the study population.

The online questionnaire used established scales to measure the impact of the coaching on self-resilience (Naswall et al., 2015) and the working alliance formed with the coaching app (Horvath and Greenberg, 1989). The in-depth interviews were semi-structured and conducted either in person or remotely via a web video platform. The qualitative data were analysed using the six-phase approach advocated and demonstrated by Braun et al. (2018). The final data analysis comprised the convergence of the quantitative and qualitative findings, following the tenets of Datta (2001).

Main Findings

The triangulation of the results from the two distinct methods corroborated the findings and created greater clarity in the meanings that emerged from the separate methodologies. Where results diverged, they provided an opportunity to explore the phenomenon from an alternative standpoint. This section is presented in two sub-sections to answer the research enquiry directly: namely, the extent to which a working alliance between a coachee and an artificial coach can be achieved, and whether an artificial social actor can enhance the developable human behavioural capability of self-resilience.

Working Alliance between Coaching App and Coachee

The quantitative and qualitative results agreed concerning working alliance: their convergence suggested that the majority of participants developed no working alliance as defined by Bordin (1979).

A Wilcoxon signed-rank test was applied to test whether individuals developed a working alliance with the technology over the test period. Using the group variables TOT_working_alliance_T1 and TOT_working_alliance_T2, the results did not reach significance, z = .672, p = .502 (asymptotic significance, two-tailed), with a small effect size (r = .068). The results in Table 4.7 did reveal that a majority of individuals (n = 26) self-scored a reduction in working alliance with the app at the end of the test period.

Table 4.7: Working alliance with coaching app (Wilcoxon signed-rank test)
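
For readers who wish to see the shape of this analysis, the sketch below reproduces the paired test with fabricated stand-in scores (the study's raw data are not reproduced here): a Wilcoxon signed-rank test on total working-alliance scores at T1 and T2, with the effect size taken as r = |z| / sqrt(N), where N counts observations across both time points, which is consistent with the r = .068 reported above. The variable names are assumptions, and the zstatistic attribute requires SciPy 1.10 or later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 48                                           # participants in the study
wai_t1 = rng.normal(45.0, 10.0, size=n)          # hypothetical T1 totals
wai_t2 = wai_t1 + rng.normal(-1.0, 6.0, size=n)  # hypothetical T2 totals

# Paired Wilcoxon signed-rank test using the normal approximation,
# mirroring SPSS's 'Asymp. Sig. (2-tailed)' output.
res = stats.wilcoxon(wai_t1, wai_t2, method="approx")
z = res.zstatistic
r = abs(z) / np.sqrt(2 * n)   # effect size r = z / sqrt(N), with N = 2n observations
print(f"z = {z:.3f}, p = {res.pvalue:.3f}, r = {r:.3f}")
```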

The interviews explored whether the interviewees had formed a coaching relationship (working alliance) with the technology by the end of the intervention, framed around their perceptions of a working alliance with the app (Bordin’s model: goal, task and bond). An overview of both sets of data (T1 and T2) showed that participants appeared apathetic as to whether they formed a relationship with the technology. The words, phrases and metaphors recorded were similar in nature and content at T1 and T2. Data from the transcripts suggested a social remoteness from the technology, albeit with a few notable individual exceptions.

Most users of the app reported a transactional interaction with it rather than any form of ‘bond’; the examples below demonstrate a distant and superficial engagement with the technology.

“I’m not sure if you can get friendly with an app! Yes, it had a smiley face and some friendly and amusing methods – but I never say that I got a sense that it cared about me”

“Aren’t relationships between people? Not sure if I understand the question”. [prompted by the interviewer that you can frame a relationship in a wider context] “Ok, well I think that WYSA was fun to use and I did like that I could share thoughts of frustration – letting off a bit of steam – knowing that it would go not beyond it and me”

There were some notable exceptions in the extent to which individuals shared personal details, some expressing a sense of fun with the interaction:

“Definitively playing not working! But I did use WYSA whilst at work”.

“Basically, I find the interactions very approachable and a bit like talking to a friend on WhatsApp”

A common theme reported by the participants was that the chatbot was fairly superficial, with a sense of wishing for greater depth, especially when they compared it with the human experience of team working. There was a sense that users were looking for a more substantive collaborator:

“Team working for me is a much more a two-way – giving feedback and receiving feedback. WYSA was more like a guide not a collaborator”

“I was in a tricky situation at work, deadlines mounting, and challenging targets being set. So, I could have really used some additional support and whilst the app helped a little, I think it could have helped if it understood the real challenge I faced”

The above quotes were a selection of responses to questions around ‘bond’ at T1 and T2, and it is difficult to distinguish between the two sets of interviews, suggesting that the ‘bond’ component of working alliance either was not formed or failed to develop.

The qualitative findings did, however, reveal further exceptions, where individuals disclosed intimate personal details, suggesting that a high degree of trust in the technology had developed over the testing period.

A gender optic was employed in both the quantitative and qualitative data analyses when scrutinising the working alliance. No statistical significance was established for the working alliance inventory or any of its components, and the qualitative findings revealed no gender-related patterns. There were, however, a few notable exceptions: several women placed seemingly very high trust in the app, to the extent that they would share intimate and personal details. This intimacy with the technology, in which unbridled feelings were expressed, suggests an environment conducive to reflexive practice.

The qualitative findings also revealed that, while there was a spectrum of opinions on the technology’s short-term capability to coach individuals, there was an acknowledgement that this would improve over time and that it could, at some future date (suggestions ranged from 5 to 25 years), replicate a human-to-human coaching experience. The interviewees’ curiosity about technology’s future role in coaching appeared to increase between T1 and T2. Conversations held at T1 revealed considerations of the app’s current technical capabilities and interest in how the technology was engaging with the user. By T2, individuals were questioning the possible future role such technologies could play and envisaging other methods of engaging with AI devices as an aid to their personal growth.

At this stage, it is important to remind the reader that the study’s volunteer population comprised, in the main, individuals who used technological platforms supporting global teamwork at their place of work. The quantitative findings clearly showed a high degree of computer self-efficacy and acceptance of digital platforms. This comfort with other forms of technology could translate into the cohort more readily accepting the coaching app and generally being more trusting of the technology.

Enhancement of self-resilience

The findings from the quantitative analysis indicated that the app did appear to alter the self-resilience of participants over the intervention period: the majority (80%) of participants’ self-resilience improved, with a large effect size (r = .61). The qualitative findings also supported the hypothesis that an app could enhance self-resilience over the intervention period. Statements of hope, positivity and motivation were shared more frequently by the participants at T2, expressions that suggested their emotional robustness had been enhanced between T1 and T2. These converged findings support each other and suggest that the participants gained emotional stamina over the intervention period.

The app appeared to change the narratives of individuals who had a negative recollection of a situation, asking them to look at it from a different perspective. This seemed to feed into a change in attitude and greater self-confidence. Users’ interactions with WYSA over the period of study suggested that they found new resources within themselves, enabling them to feel more comfortable when daily challenges presented themselves.

“When challenged by my supervisor, over the eight-week period, I felt more able to receive the feedback she gave me.”

“I feel strangely more able to take constructive feedback because I used WYSA as a reflective tool which allowed me to internalize my own thoughts about difficult situations.”

“I am not saying I have changed per se, but I have found additional resources in myself that I uncovered / accessed”

“Being new to my position, only four months in, I’m not sure if it’s my own abilities or WYSA’s effects, or a combination of the two, but I do feel more confident now than I did before – perhaps it’s me just getting used to the job, but I liked to think WYSA helped a bit”

By way of a graphical summary of the study’s findings, and perhaps, for some readers, a more cogent representation, Figure 2 identifies the positive effect on the volunteers’ self-resilience and, in contrast, the effect on the working alliance between the coaching app and user over the intervention.

Figure 2: Graphical representation of Quantitative findings

Discussion, Conclusions, Implications

A Working Alliance between Coaching App and Coachee is not a predictor of positive outcomes

A structured personal relationship between a human coach and coachee is seen as a requirement (Walton, 2014). Central to coaching models, from therapeutic to performance, each requiring different levels of emotional engagement, is the premise that a working alliance is an important element in achieving successful outcomes. Sun et al. (2013) suggest that different coaching assignments require appropriate levels of connection to be established. The research findings question this premise, in that the technology appeared to enhance an individual’s self-resilience even though no collaborative working alliance, in the traditional sense, was developed (Bordin, 1979).

The findings also align with research in the fields of counselling and therapy (Klein et al., 2013) that has found positive outcomes from the adoption of digital mechanisms that seek collaboration with their users.

Creation of a non-judgemental safe space

The notion that the app created a safe space for individuals to express their feelings is intriguing. Safety, in this context, was conveyed through language that could be attributed to a coaching relationship. Core to the principles of coaching is the creation of a non-judgemental relationship and, seemingly, several individuals found the virtual environment of the app a place to share, with one individual referring to the app as a “her”, characterising the technology as a confidante with whom she would share feelings about her friends and family. The notion of bonding was explored with this individual, and the results suggested that elements of trust and confidence appeared genuine. This acceptance of alternative methods of coaching appears to support previous research that found no differentiation in outcomes across different technological delivery platforms (Kamphorst et al., 2014).

In addition, other participants discovered a safe space to explore their innermost feelings, echoing the familiarity with technology revealed by Nass and Moon (2000), who found that human beings treat computers as social entities, naming them and attributing meaning to them as they do to neighbouring forms of technology. Exploring how WYSA responded in these sensitive situations, some interviewees suggested that, because the conversations were with an artificial entity, there was a perception of safety and security: they felt they were not being judged by another person. This description of chatting to a non-human as non-judgemental was a common theme, allowing users to avoid the fear of being assessed negatively by another individual. This perception of the dialogue as passive yet secure emerged more strongly at T2, and it could be argued that such a non-judgemental environment could develop into a bond with the technology.

The International Coaching Federation considers creating a non-judgemental coaching dialogue to be a core competency of a coach, allowing “the client to vent or ’clear’ the situation” (International Coaching Federation, 2019). The comparison between comments made at T1 and T2 found that individuals, as they engaged with the technology, increased their willingness to share all manner of work and life challenges.

Implications for the coaching field

The study’s findings suggest that coaches might consider app technology as an opportunity and not a threat. The interviews suggest that the relationship between the app and the users was of a transactional nature; however, WYSA did enhance users’ self-resilience. It could be argued, with a degree of evidence from the interviews, that users found the conversations stilted and limited, as the algorithms behind the chatbot mine a shallow seam of responses. Nonetheless, the app appeared to make a difference, and coaches could start to research how these forms of technology could be deployed in their own practice.

The following areas should be explored further by coaches wishing to add to their portfolio of activities:

  • An opportunity for augmentation: Augmenting a coaching experience with aspects of these digital learning processes, under the supervision of a coach, could enhance traditional coaching. The findings suggest that the technology could be deployed now, with coaches supervising and overseeing interactions between the coaching app and their clients.
  • 24/7 accessibility: Users found the coaching app’s round-the-clock accessibility appealing.
  • Safe space: The intimate environment established between coachee and technology may be a useful reflexivity tool that coaches could suggest their clients use.
  • Threats: There is, however, a threat from these forms of technology. If the coaching field ignores the possibilities of artificially intelligent proxy agents, the technology will most probably be advanced by neighbouring disciplines, by professionals in the human resource sector and/or by technologists.

Future Research

Given the apparent benefits of the adoption, augmentation and delivery of coaching services through technological means, it is likely that the discipline will follow the same path as the digitisation of behavioural health medicine. Arigo et al. (2019) argued that technology could be further leveraged to advance interventions promoting healthy behaviours, and suggested that this would be best achieved through industry and academic partnerships. They warned of a risk to the wider behavioural science community if it failed to keep pace with fast-moving technological advances, and highlighted areas where much-needed research was required. This paper supports that contention and voices a similar warning. An appropriate response to these emergent innovations would be for the coaching fraternity to investigate how these forms of technology can augment traditional coaching practices.

The technology presents an opportunity for the coaching field to offer a blended approach combining traditional face-to-face coaching with an artificial agent. The study suggests that chatbots can engage coachees in a virtual safe space where they can reflect on issues in a non-judgemental environment. Further research could explore whether coaches could assign activities and tasks to coachees, asking them to log these with the chatbot. As tested in this study, the technology has the functionality to register these actions and periodically remind the coachee to reflect on how their conduct aligns with those recommendations; a sketch of such a mechanism follows. This harnessing of technology to perform complementary functions may enhance the quality of a coaching assignment and improve outcomes for the coachee. However, with every opportunity there are threats, such as the risks associated with the probity of human-computer interactions.
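
By way of illustration of the task-logging and reminder functionality described above, the sketch below shows one minimal way a coach-assigned activity could be registered and surfaced back to the coachee for reflection. It is a hypothetical design, not WYSA's implementation; the AssignedTask structure and due_reminders function are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class AssignedTask:
    """A coach-assigned activity logged with the chatbot."""
    description: str
    assigned_at: datetime
    remind_every: timedelta
    completed_at: Optional[datetime] = None

def due_reminders(tasks: List[AssignedTask], now: datetime) -> List[str]:
    # Build reflective prompts for incomplete tasks whose reminder
    # interval has elapsed since they were assigned.
    prompts = []
    for t in tasks:
        if t.completed_at is None and now - t.assigned_at >= t.remind_every:
            prompts.append(f"How is '{t.description}' going? "
                           "Take a moment to reflect on your progress.")
    return prompts

tasks = [AssignedTask("practise active listening in meetings",
                      datetime(2021, 3, 1), timedelta(days=3))]
print(due_reminders(tasks, datetime(2021, 3, 5)))
```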

A number of ethical dilemmas revealed by this study resonate with wider human-machine interactions. The study identified the high levels of trust users place in technology, evidenced by their willingness to share personal information. This phenomenon of trusting computers by ascribing human qualities to them was first suggested by Nass and Moon (2000) and has led to the ‘computers as social actors’ paradigm, in which human interactions with computers are seen through a social prism. The importance of the trust that humans seemingly confer on their interactions with intelligent machines has been well explored (Lee and Moray, 1992; Muir, 1987), and some consider the work by Muir ground-breaking in shifting our understanding of trust between humans and machines (Khasawneh et al., 2003).

Disturbingly, as these technologies are developed, technologists are creating algorithms that ape the coaching process while no ethical framework exists to guide the potency and application of these digital agents. These considerations are being actively debated in neighbouring fields as the potential of mobile health apps is explored; similar ethical dilemmas are faced there, yet no specific policies have been forthcoming to address these concerns in the mental healthcare arena (Jones and Moffitt, 2016).

Computer coders designing AI and machine learning systems are being pressed to incorporate ethical design techniques in their formulation of digital codes (Rantavuo, 2019). Nonetheless, high-profile lapses[1] have occurred, showing the bias that can be inadvertently introduced by wider societal prejudices.

Urgent research is needed to explore the possible need for a code of practice and the establishment of a professional body that may be given oversight of the development of these artificial coaching agents.


Endnotes

[1]

The algorithm within Google’s image search.

References

Alvey, S. and Barclay, K. (2007) 'The characteristics of dyadic trust in executive coaching', Journal of Leadership Studies, 1(1), pp.18-27. DOI: 10.1002/jls.20004.
Arigo, D., Jake-Schoffman, D.E., Wolin, K. et al. (2019) 'The history and future of digital health in the field of behavioral medicine', Journal of Behavioral Medicine, 42(1), pp.67-83. DOI: 10.1007/s10865-018-9966-z.
Bluckert, P. (2005) 'Critical factors in executive coaching - the coaching relationship', Industrial and Commercial Training, 37(7), pp.336-340. DOI: 10.1108/00197850510626785.
Bordin, E.S. (1979) 'The generalizability of the psychoanalytic concept of the working alliance', Psychotherapy: Theory, Research & Practice, 16(3), pp.252-260. DOI: 10.1037/h0085885.
Braun, V., Clarke, V., Hayfield, N. and Terry, G. (2018) 'Thematic analysis', in Liamputtong, P. (ed.) Handbook of Research Methods in Health Social Sciences. Springer.
Datta, L.E. (2001) 'The wheelbarrow, the mosaic and the double helix: Challenges and strategies for successfully carrying out mixed methods evaluation', Evaluation Journal of Australasia, 1(2), pp.33-40. DOI: 10.1177/1035719X0100100210.
Fiske, A., Henningsen, P. and Buyx, A. (2019) 'Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy', Journal of Medical Internet Research, 21(5). DOI: 10.2196/13216.
Ghods, N. (2009) Distance coaching: The relationship between the coach-client relationship, client satisfaction and coaching outcomes [Unpublished doctoral dissertation]. San Diego, CA: Alliant International University.
Grist, R. and Cavanagh, K. (2013) 'Computerised Cognitive Behavioural Therapy for Common Mental Health Disorders, What Works, for Whom Under What Circumstances? A Systematic Review and Meta-analysis', Journal of Contemporary Psychotherapy, 43(4), pp.243-251. DOI: 10.1007/s10879-013-9243-y.
Hawkins, P. and Smith, N. (2014) 'Transformational coaching', in Cox, E., Bachkirova, T. and Clutterbuck, D. (eds.) The Complete Handbook of Coaching. 2nd edn. London: Sage, pp.228-240.
Hawley, S.H. (2019) 'Challenges for an Ontology of Artificial Intelligence', Perspectives on Science and Christian Faith, 71(2), pp.83-95.
Hayes, J. (2020) 'AI-powered therapy sets minds at rest', Engineering & Technology, 14(6), pp.46-49.
Horvath, A.O. and Greenberg, L.S. (1989) 'Development and validation of the Working Alliance Inventory', Journal of Counseling Psychology, 36(2), pp.223-233. DOI: 10.1037/0022-0167.36.2.223.
International Coaching Federation (2019) Core Competencies.
Jones, N. and Moffitt, M. (2016) 'Ethical guidelines for mobile app development within health and mental health fields', Professional Psychology: Research and Practice, 47(2), pp.155-162. DOI: 10.1037/pro0000069.
Kamphorst, B.A. (2017) 'E-coaching systems: What they are, and what they aren't', Personal and Ubiquitous Computing, 21(4), pp.625-632. DOI: 10.1007/s00779-017-1020-6.
Kamphorst, B.A., Klein, M.C.A. and van Wissen, A. (2014) 'Human Involvement in E-Coaching: Effects on Effectiveness, Perceived Influence and Trust', 5th International Workshop on Human Behavior Understanding, 12 September 2014, Zurich, Switzerland, pp.16-29. DOI: 10.1007/978-3-319-11839-0_2.
Khasawneh, M., Bowling, S., Jiang, X. et al. (2003) 'A model for predicting human trust in automated systems', 8th Annual International Conference on Industrial Engineering - Theory, Applications and Practice, 10-12 November 2003, Las Vegas, USA.
Klein, M., Mogles, N. and van Wissen, A. (2013) 'An Intelligent Coaching System for Therapy Adherence', IEEE Pervasive Computing, 12(3). DOI: 10.1109/MPRV.2013.41.
Lee, J. and Moray, N. (1992) 'Trust, control strategies and allocation of function in human-machine systems', Ergonomics, 35(10), pp.1243-1270. DOI: 10.1080/00140139208967392.
Muir, B.M. (1987) 'Trust between humans and machines, and the design of decision aids', International Journal of Man-Machine Studies, 27(5-6), pp.527-539. DOI: 10.1016/s0020-7373(87)80013-5.
Nass, C. and Moon, Y. (2000) 'Machines and Mindlessness: Social Responses to Computers', Journal of Social Issues, 56(1), pp.81-103. DOI: 10.1111/0022-4537.00153.
Naswall, K., Kuntz, J. and Malinen, S. (2015) Employee Resilience Scale (EmpRes): Measurement Properties. Available at: http://hdl.handle.net/10092/9469.
O'Broin, A. and Palmer, S. (2009) 'Co-creating an optimal coaching alliance: A cognitive behavioural coaching perspective', International Coaching Psychology Review, 4(2), pp.184-194.
O'Broin, A. and Palmer, S. (2010) 'Exploring key aspects in the formation of coaching relationships: initial indicators from the perspective of the coachee and the coach', Coaching: An International Journal of Theory, Research and Practice, 3(2), pp.124-143. DOI: 10.1080/17521882.2010.502902.
Peck, D.F. (2010) 'The therapist-client relationship, computerized self-help and active therapy ingredients', Clinical Psychology & Psychotherapy, 17(2), pp.147-153. DOI: 10.1002/cpp.669.
Rantavuo, H. (2019) 'Designing for intelligence: User-centred design in the age of algorithms', Proceedings of the 5th International ACM In-Cooperation HCI and UX Conference, April 2019, Jakarta, Indonesia, pp.182-187.
Schwab, K. (2016) The Fourth Industrial Revolution: What it Means, How to Respond. Geneva, Switzerland: World Economic Forum.
Spaten, O.M., O'Broin, A. and Løkken, A.O. (2016) 'The Coaching Relationship - and Beyond', Coaching Psykologi: The Danish Journal of Coaching Psychology, 5(1), pp.9-16. DOI: 10.5278/ojs.cp.v5i1.1682.
Sun, B.J., Deane, F., Crowe, T. et al. (2013) A Preliminary Exploration of the Working Alliance and 'Real Relationship' in Two Coaching Approaches with Mental Health Workers. Wollongong, Australia: University of Wollongong.
Terblanche, N. (2020) 'A design framework to create Artificial Intelligence Coaches', International Journal of Evidence Based Coaching and Mentoring, 18(2), pp.152-165. DOI: 10.24384/b7gs-3h05.
Turing, A.M. (2009) 'Computing machinery and intelligence', in Parsing the Turing Test. Springer, pp.23-65.
Walton, M. (2014) 'Coaching and Mentoring at Work: The Relationship is the Key', Industrial and Commercial Training, 46(6), pp.345-348. DOI: 10.1108/ict-06-2014-0039.

About the author

Dr Kevin Ellis-Brush is an Executive Coach who helps corporates to democratise coaching through system solutions. He is an experienced business coach and has spent many years as a commercial director. His passion for coaching has lately converged with his interest in technology and machine learning.
