International Journal of Evidence Based Coaching and Mentoring
2020, S14, pp.3-18. DOI: 10.24384/prb0-s320

Academic Paper

Why Evaluation of Leadership Coaching Counts

Mark Jamieson (University of Chester)
Tony Wall (International Centre for Thriving)
Neil Moore (University of Chester)


Introduction

A recent EMCC Research Policy and Practice Provocations Report (Wall, Jamieson, Csigás & Kiss, 2017) focused on coaching perspectives of evaluation and revealed a lack of commitment to evaluation among practitioners. These findings were echoed, from an organisational perspective, by a recent stimulus paper (Harding, Sofianos & Box, 2018) presenting a series of provocations challenging current thinking around coaching evaluation, including low levels of engagement and systems that potentially distort the value of coaching. This research explores the evaluation environment, characterised by disinterest or lack of ambition, and presents ideas for a model that is simultaneously operable and strategic. It describes the current practice environment; examines evaluation through a strategic research lens of ambidexterity (felt to be representative of day-to-day leadership complexity); and, from an analysis of the resultant data, produces new evaluation moderators with positive implications for practice.

The extent to which organisations dismiss evaluation as either problematic or strategically irrelevant is highlighted in a survey by the Chartered Institute of Personnel and Development (CIPD, 2015). The report highlights the increasing strategic reliance placed on leadership coaching to deliver emergent organisational goals, and the accompanying financial investment, and concludes that:

  • One in seven organisations do not evaluate their coaching initiatives
  • Over a third of organisations limit their evaluation to the satisfaction levels of those taking part
  • Only one in five organisations assess the transfer of learning into the workplace
  • Only a small minority of organisations evaluate the wider impact on the business or society.

These findings by the CIPD provide practical evidence of the contradictory environment for evaluation, characterised by two paradoxes. The first is that evaluation of leadership coaching is treated as being of low strategic significance, despite the prominence given to leadership coaching as a strategic intervention. The second is that the leadership outcomes organisations espouse as being most valuable, including agility, adaptability, and the soft skills needed for a culture of trust and behavioural integration, are not what they set out to evaluate. In other words, as a new set of organisational and leadership outcomes has evolved from an increasingly complex operating environment (Hatum, 2010), evaluation has stood still.

This is reflected in current evaluation methodologies in practice. In 2013, a CIPD survey reported that more than half of respondents used a variation of Kirkpatrick’s four levels model for evaluation (assessment at reaction, learning, behaviour and outcome levels), and that the majority measured no further than the initial, reaction level. Although Kirkpatrick’s model was formulated in 1959, it evidently continues to resonate with practitioners, despite claims that it is a model for a different time, unadapted to a new set of leadership outcomes focused on long term intangible behaviours (Wall et al, 2017; Kaufman, 2015; Beer, 2015).

The implication for evaluation, therefore, is that the gap between theory and practice is widening, and current strategic thinking around emergent leadership outcomes is exacerbating the historical problematics of intangibles, strategic alignment and contingencies (Wall et al, 2017). To address the evident organisational lack of engagement with evaluation, this research concurs with the criticism that stakeholders are not well served by scholarly or practitioner research (Beer, 2015), and that the resultant complex methodologies (Wall, Iordanou, Hawley & Csigás, 2016) produce irrelevant or tendentious data. In response, this research adopted a lens of organisational ambidexterity, argued to be representative of emerging leadership outcomes and coaching dimensions, and characterised by balanced decision-making between short term imperatives and long term vision (O’Reilly & Tushman, 2004; 2013).

From this strategic context, qualitative data was gathered through in-depth interviews with 12 senior organisational stakeholders, revealing three key anomalies: evaluation contradicts organisational strategy; metrics conflict with value outcomes; and evaluation is regarded as having low strategic value. More generally, ambidexterity opened up opportunities to explore existing strategies and their implications for practice. In doing so, this research demonstrates that extant frameworks provide structure and definition to emergent leadership outcomes, with the potential to isolate their contribution to a primary goal and to negate historic problematics. The research highlights how further exploration of existing strategies, such as leadership competency models and leadership behavioural frameworks, refined understanding of evaluation influencers and produced four evaluation moderators, which contribute new insights into practice, reimagine design and suggest a wider strategic role for evaluation.

Methodology

March’s (1991) definition of ambidexterity, as the fundamental adaptive challenge for leaders to simultaneously exploit current capabilities and existing assets while intentionally making time for exploration for competitive relevance in dynamic markets, provides the context for this research. An inductive investigation was undertaken, adopting an interpretivist approach and collecting and analysing qualitative data to develop knowledge and theory about leadership coaching evaluation practice.

Table 1: Research conceptual framework for ambidextrous balance derived from literature 

Rows are organised by the emergent leadership coaching ambidextrous dimension from the literature; each row pairs exploitative ambidextrous characteristics (short term, known outcomes) and explorative ambidextrous characteristics (long term, unknown outcomes) with the resulting ambidextrous balance proposition and its implication for dimensions of evaluation.

Managing risk (Beer, 2015)
  • Exploitative: Financial risk - pressing business imperatives with strong financial legitimacy (Beer, 2015)
  • Explorative: Cultural risk - long term vision through innovation, creativity and capacity building with weak financial legitimacy (Thunnissen et al, 2013)
  • Balance proposition: Performance versus future competitive relevance
  • Implication for evaluation: Balancing performance with competitive relevance in the context of managing risk

Managing change (Ely et al, 2010; Hall et al, 1999)
  • Exploitative: Respond - operate in a complex but known environment (Hatum, 2010)
  • Explorative: Anticipate - operate in a complex but unknown environment (Hatum, 2010)
  • Balance proposition: Complex environment versus opportunity generation
  • Implication for evaluation: Balancing a complex environment with opportunity generation in the context of managing change

Managing multiple conflicting structures and systems (Fernandez-Araoz, Groysberg & Nohria, 2011; Wright & McMahan, 2011)
  • Exploitative: Scalability (Ericksen & Dyer, 2005)
  • Explorative: Capacity building (Cappelli, 2008; Lepak & Shaw, 2008)
  • Balance proposition: Efficiency savings versus talent management for pivotal high impact roles
  • Implication for evaluation: Balancing efficiency savings with talent management for pivotal roles in the context of managing multiple structures

Strategic alignment (CIPD, 2014)
  • Exploitative: Allocation of resources to areas of high impact (O’Reilly & Tushman, 2013; Yapp, 2009)
  • Explorative: Allocation of resources to areas of future development (O’Reilly & Tushman, 2013; Yapp, 2009)
  • Balance proposition: Budgetary justification versus culture of collective understanding
  • Implication for evaluation: Balancing internal budgetary justification with a culture of collective understanding in the context of managing strategic alignment

Social integration and relationship management (Lewis & Heckman, 2006; Skillings, 2008)
  • Exploitative: Recruitment and retention (Cappelli, 2008; Michaels et al, 2001)
  • Explorative: Succession (Ely et al, 2010)
  • Balance proposition: Management versus leadership
  • Implication for evaluation: Balancing management with leadership perspectives in the context of managing social integration and relationships

Professional judgement (Wall et al, 2017)
  • Exploitative: Cost – training (Angrave et al, 2016; Kaufman, 2015)
  • Explorative: Investment – coaching (Becker, Huselid & Beatty, 2009; Wright & McMahan, 2011)
  • Balance propositions: Return on Investment versus wellbeing and engagement; coaching efficacy versus awareness of fuller range of coaching outcomes
  • Implications for evaluation: Balancing Return on Investment with wellbeing and engagement in the context of managing professional judgement; balancing coaching efficacy with a fuller range of coaching outcomes

Data collection comprised a series of semi-structured interviews conducted with a small purposive sample of 12 senior organisational evaluation stakeholders across the private, public and third sectors. Under investigation were their perspectives on evaluation (operational or strategic), leading to in-depth discussions around individual experiences.

There were three broad research questions:

  1. What are the experiences of those making judgements on leadership coaching evaluation in terms of exploitative and explorative outcomes?
  2. What are the problematics of evaluation, and what are the implications of the emerging context, characterised by organisational ambidexterity, for future research and development?
  3. How might a framework that places ambidextrous balance propositions in the context of leadership coaching dimensions support practitioners in undertaking evaluation?

Semi-structured interviews provided the flexibility to develop a set of ‘grand tour’ (deliberately generic) and ‘mini tour’ (open-style prompt) questions (Spradley & McCurdy, 1972) as data emerged. Once data had been collected, the research followed the main stages of a qualitative approach (Bryman & Bell, 2015), including familiarisation, interpretation and conceptualisation, with data connecting themes across an ambidextrous framework derived from a review of the literature (Table 1). From this process emerged insights for translational research (Woolf, 2008), producing actionable knowledge focused specifically on the third broad research question and discussions around the potential usefulness of an ambidextrous evaluation framework for practitioners.

Emerging data was analysed in key stages including reflection, conceptualisation, coding and re-coding, linking, and re-evaluation (Easterby-Smith, Thorpe, Jackson & Lowe, 2012). Interviews were audio recorded, allowing the documentation of not only what participants said, but how they said it; and enabling active listening through minimising note taking and developing emerging prompts to facilitate in-depth discussions. It was also considered valuable to record and transcribe interviews as part of the process for reliability, validity and generalisability, specifically in terms of the professional judgement of the researcher/interviewer in the context of ‘anecdotalism’ (Silverman, 2006) and interpretation of data.

For interpretation, data was processed through three cycles of coding to produce analytical categories and themes. Due to the size of the sample, experienced-informed subjectivity (Stokes & Wall, 2014) was used to analyse the cultural context and the meaning participants placed on evaluation, simultaneously enabling comparisons across sectors. The first cycle involved open coding as the preliminary procedure to break down interview transcripts and capture the narrative flow of what had been said. Resultant data was assigned to preliminary codes, determined with reference to the three broad research questions, linked to the generic semi-structured interview inquiries, and to the conceptual framework to ensure adherence to the research focus (Figure 1). Preliminary codes produced broad labels across cultural and operational contexts, ranging from sector and size to leadership hierarchies and cultural dimensions. These were then categorised into secondary codes and basic themes as part of the second cycle of coding: the axial phase. These secondary codes began to seek out relationships and interconnections (Stokes & Wall, 2014), linking codes to contexts, outcomes, patterns of implementation and causes. Experienced-informed subjectivity was again used, with reference to the research conceptual framework, to make decisions about classification into basic themes. From these, three organising themes emerged as part of the third cycle:

  1. Evaluation influencers
  2. Implications for evaluation practice, present and future
  3. Implications for evaluation data

Figure 1: Process of retrieval of organising themes from data

At the conceptualisation stage, emerging themes were connected to extant knowledge by overlaying the data analysis findings onto the conceptual framework (Table 1). Throughout this process, data transcripts were revisited as the direction of the research developed to seek out additional contributions to emergent key concepts, constantly linking back to extant knowledge, to compare and contrast with emergent data. The final, conclusions stage (Quinlan, 2011) was found to be an opportunity to continue to purposefully relate the data to extant knowledge, as well as a period of further reflection, to re-evaluate data, identifying and explaining anomalies and contradictions.

Findings

Ambidexterity was primarily selected as a lens for the research; however, the data from the semi-structured interviews unexpectedly revealed that participants recognised ambidexterity positively, as a strategic opportunity to deliver an adaptive environment for high performance in uncertain times. The research conceptual framework (Table 1), derived from literature in this field, was found to reflect current leadership decision-making accurately, and participants were familiar with the process of making connections between outcomes across the six suggested coaching dimensions for leadership development (managing risk; change; multiple structures; strategic alignment; social integration; professional judgement). In addition to identifying new leadership goals, ambidexterity produced themes and sub-themes, which are set out below (Figure 2). When analysed, the organising themes from the interpretation of data produced three corresponding headline themes: evaluation anomalies, ambidextrous strategies and ambidextrous moderators.

Figure 2: Summary of themes and sub-themes emerging from data

Theme 1. Evaluation anomalies

The data revealed unique experiences of ambidexterity reflecting organisational context and primary goals. Primary goals, defined as “the main business” of the organisation, were found to be highly influential on evaluation. These goals fell into three main stakeholder categories: shareholder (financial performance); public (operational performance); and societal (purposive performance). The different interpretations and resultant data from these sources produced a number of unique contradictions that were insightful when compared and contrasted with responses across the research sample. When questioned in terms of organisational and emerging contexts, contradictory responses were prevalent, uncovering three key anomalous sub-themes: what participants said was not always what they did in practice; what was stated as being of value was not necessarily being evaluated; and, although ambidextrous balance was found to be a high strategic priority, evaluation was considered to be of low strategic value.

Sub-theme: Evaluation contradicts organisational strategy

Despite stated intentions to promote emerging explorative leadership outcomes and coaching dimensions, evaluation systems had not evolved in step, and in practice participants continued to emphasise short term performance targets that contradicted the longer term goals set within organisational strategy. Participants made strong verbal statements of intent with regard to emerging leadership expectations, asserting the strategic contribution of ambidextrous outcomes.

Our leaders need to own that tension – between long term, building capacity, and delivering today. Despite intense pressure, you need to develop the agility and judgement to switch focus in the moment. (Director of People and Change, high accountability public sector organisation)

Verbal commitments were underpinned by deliberate efforts to explain explorative and exploitative contributions to an overarching strategy in practical terms, through various internal frameworks. These frameworks were designed to identify new leadership competencies; however, they were implicitly used by participants to make sense of coaching dimensions focused on explorative behaviours, by connecting them to exploitative targets, or vice versa, in the context of primary goals. The strategic frameworks that made ambidextrous outcomes relatable to leaders were reportedly undermined by their subsequent alignment to the organisational context, characterised by performance management systems emphasising a dominant primary goal or stakeholder, which continued to focus on short term performance targets. The fundamental contradiction between leadership strategy and evaluation focus therefore reflected the inability of participants to adapt the organisational context to keep pace with the strategic context.

Sub-theme: Evaluation metrics conflict with value outcomes

The second key anomaly was that participants were not pursuing evaluation of those outcomes they stated as being valuable. Leadership coaching was perceived by participants as a key resource to deliver critically strategic explorative behaviours such as innovation, creativity and capacity building:

We want our leaders to develop the ability to think laterally, to inspire and motivate their people to be great – differently, otherwise we will struggle to convince our clients we’ll do a job better than our competition. (CEO, private sector financial services organisation)

Simultaneously, it was found that participants’ thinking on evaluation was not aligned to the leadership behaviours and intended outcomes they stated as being highly prized, as evidenced by the continuing measurement of leadership coaching against exploitative targets inextricably aligned to reward and recognition systems.

Evaluation is linked closely to the bottom line, whatever we say, financial compensation is what our leaders are focused on and the system recognises that. (CEO, private sector financial services organisation)

The data revealed that close alignment to reward and recognition systems left evaluation criteria vulnerable to misalignment with strategy. This was specifically observed in organisations where leaders were promoted on financial performance and, as the leadership coaching stakeholder, exercised discretionary judgement over evaluation outcomes and their relationship to organisational goals. Therefore, although the emerging context provided participants with a strategic platform to recognise valuable intangible outcomes, respondents failed to connect new leadership behaviours to evaluation, which was often limited to operational, tangible metrics of low strategic value.

Sub-theme: Low strategic value placed on evaluation

Having established the high strategic priority placed upon ambidexterity, the data revealed evaluation, in contrast, to be strategically unambitious. The findings show that the ways in which evaluation data was used fell into four categories: reward and recognition; programme maintenance; operational; and strategic, producing clear, and arguably predictable, distinctions between the private sector and the public and third sectors. Specifically, whereas reward and recognition was the significant focus for the private sector, the public and third sectors were predominantly focused on operational targets. Figure 3 summarises all participants’ responses with regard to stated priorities for evaluation data, providing a general overview (across all sectors) and highlighting the lack of strategic focus placed on evaluation by organisations.

Figure 3: Summary of participant responses for stated priorities for evaluation data focus and key

Theme 2. Ambidextrous strategies

Findings showed that ambidexterity was used by organisations to define expected leadership outcomes; however, participants did not consider using these emergent definitions as part of their evaluation strategy. The research examined existing ambidextrous frameworks and their potential impact on the anomalous and problematic environment (specifically, strategic alignment and intangible outcomes), and found that this was a missed opportunity for stakeholders to raise the profile of evaluation as a potential informant of the wider strategy. These frameworks were designed to assist leaders to make decisions, chain-building value across ambidextrous outcomes to achieve specific targets. Despite serving to clarify emerging leadership outcomes and intentionally aligning them to performance management systems, all models stopped short of including evaluation.

Sub-theme: Strategically aligned evaluation

Strategic alignment has historically perplexed evaluation scholars and practitioners (Kaufman, 2015; Beer, 2015); however, this research asserts that organisations have gone a long way towards deliberately aligning emerging leadership outcomes with overarching strategies, and all that remains is to connect these models to evaluation. Strategies producing frameworks that reference leadership outcomes were found to be in place in all participating organisations.

All versions reflected participants’ strategies to keep pace with the emerging context and shifting attitudes to accommodate the next generation of leaders. The reported significance of these frameworks was to encourage change by legitimising and explaining seemingly incompatible exploitative and explorative strategies as ways of achieving primary goals. However, the research revealed that, despite continuing to regard strategic alignment (specifically, the isolation of outcome impact and the measurement of intangibles) as problematic, participants did not consider connecting these frameworks to evaluation.

Sub-theme: Defining intangibles

Clearly defining intangibles has been reported as another persistent problem area for evaluation. In terms of ambidexterity, intangibles are conveniently classified in this research as explorative outcomes and tangibles as exploitative. The data showed extant leadership competency frameworks effectively defined ambidextrous outcomes, demystifying explorative behaviours by placing them in the context of their contribution towards a primary goal. The findings also show that frameworks were used to link tangible and intangible leadership outcomes, through a taxonomy of ambidextrous behaviours, providing clarity about explorative leadership outcomes by connecting them to commerciality or purpose, and as formal components in systems for performance management.

Participants reported categories such as Good citizen, Custodian, Trusted partner, Value added, and Servant-hearted as designations for explorative outcomes, as part of the process of legitimisation and to provide focal points for assessment. These categories were also found to provide a strategic explanation for leaders by placing outcomes in an operational context and, in doing so, going some way to demystifying the idiosyncratic nature of leadership coaching, commonly described by scholars as a barrier to evaluation (Ely et al, 2010). However, it was also found that practical definitions of intangible outcomes were not extended to evaluation metrics.

Theme 3. Ambidextrous moderators

The final theme was developed from participant responses focused on evaluation data, to elicit new knowledge about stakeholder motivation and limited expectations. It uncovered moderators of evaluation emerging from experiences of ambidexterity, facilitating a deeper understanding of barriers and their causes and, in doing so, beginning to address the third broad research question about the potential strategic usefulness and motivational power of ambidextrous frameworks in the context of evaluation.

When questioned, organisations reported a lack of interest in evaluation due to one or a combination of factors, including size, internal context and cultural dimensions. When barriers were specifically discussed, evaluation moderators began to emerge, revealing a more nuanced definition of problematics, including: confused ownership of, and responsibility for, evaluation; evaluation processes limited to Kirkpatrick’s levels 1 to 3 (reaction, learning and behaviours); evaluation complexity (timelines, isolation and monetisation of intangible outcomes); and lack of confidence in the competency and judgement of those evaluating.

Sub-theme: The four moderators of evaluation

As part of the investigation into anomalies, evaluation procedures were examined in the context of emerging moderators. It was found organisations adopted three distinct approaches to evaluation: operational (Human Resources), strategic (Executive) and multiple stakeholders (collaborative). Each was found to have a different interpretation of ambidexterity with varying implications for evaluation.

Within these approaches the data produced four key influencing moderators (Table 2): primary goals (as an evaluation influencer); evaluation status (referring to formal or informal approaches); a multi-generational leadership pool (recognising the challenges of a natural generational shift in the workplace); and organisational context (the evaluation “culture” of the organisation, including attitudes, accountability and implications of those responsible for evaluation judgements). These moderators were distinct from the evaluation influencers in the organising theme, which set out to provide a basis for the state of practice, as they sought to develop data at a higher level to explore the opportunities presented through ambidexterity, and whether these would encourage stakeholders to raise evaluation to an organisational priority.

Table 2: The four moderators of ambidextrous leadership coaching evaluation: dimensions and implications

Each moderator of evaluation (emerging from data and categorised by this research) is listed with its moderator dimensions (emerging from data) and example data: reported implications and barriers to evaluation.

Primary goals
  • Financial performance (profit)
    - Focus on financial self-interest
    - Elevation of exploitative outcome leaders
    - Resistance to evaluation of explorative outcomes
    - Biased judgments towards exploitative outcomes
  • Financial performance (account)
    - Focus on short term imperatives
    - External context
    - Leadership development timelines
  • Operational performance
    - High levels of accountability
    - Events and incidents
  • Purposive performance
    - Employee motivation and engagement
    - Positive recognition of measurable subordinate strategic outcomes
    - Intangible outcomes
    - Collective understanding

Evaluation status
  • Evaluation stakeholder: Executive or Human Resources
    - Misaligned outcomes
  • Formal (as part of a reward and recognition system; as part of a performance management system)
    - Inconsistent judgments across internal departments
    - Strategic disconnect between Human Resources and the Executive
  • Informal
    - Lack of accountability
    - Data collection limited to Kirkpatrick levels 1 & 2
    - Coaching efficacy
    - Limited anecdotal data
    - Limitations of discretionary evaluation

Multigenerational leadership pools
  • Flawed succession strategy
  • Emerging leadership behaviours and values
  • Conflicting expectations
  • Unreliable data
  • Leadership transition
  • Timelines and opportunities

Organisational context
  • ‘Disinterested’ evaluation stakeholders
  • Leadership coaching stakeholder
  • ‘Cultural’ fit for evaluation
  • Inconsistent data from discretionary sources
  • Competency and dissemination
  • Non-progressive approach to evaluation

Primary goals

As a moderator, primary goals were revealed by the data to have a significant influence. Despite this study’s finding that ambidexterity was perceived as an advantageous, layered and complex collaboration of exploitative and explorative outcomes towards an overarching target, primary goals were found to elicit a one-dimensional approach to evaluation. Primary goals were the dominant influence on evaluation metrics, often overriding and contradicting the stated outcome emphasis of respondents. All private sector participants reported primary goals of organisational profit, inextricably linked to individual financial interest. This was found to create a difficult environment for evaluation, with data vulnerable to bias, inconsistency and contradiction.

Leadership coaching evaluation is entirely in the hands of the departmental partner, who has complete discretion, generally aligned to their own perceptions of ‘what counts’, which tends to mean, what is readily understood in financial terms. (Managing Partner, private sector general surveying practice)

In some participating public sector organisations with high accountability operational primary goals, expansive evaluation beyond short term imperatives was reported as ‘a luxury’ afforded by high performance (assessed against short term exploitative targets, shaped by a primary goal).

We have to deliberately carve out time for the long term and are only really comfortable doing that when we have dealt with the short term. (Senior Director, public sector organisation)

In cases where purposive performance was the primary goal (all third sector), there was recognition of the problematic nature of evaluating intangible long-term outcomes and focus was found to be turned inwards on more measurable exploitative targets such as financial accountability and management of resources.

Evaluation status

The data revealed evaluation status, shaped by the evaluation stakeholder and individual interpretations of ambidexterity, was a significant source of misaligned outcomes. The tension between the executive and the HR (Human Resources) function, bringing strategic or operational evaluation perspectives, was made more complex by the adoption of either a formal or informal approach.

Formal status was characterised by being closely aligned to the primary goal, primary goal stakeholder, or internal processes. At the same time, strategic alignment was compromised by being inextricably linked to reward and recognition. In all cases where evaluation status was formal, the approach was operational, as either part of the HR function or through various leadership assessment systems. This resulted in strategic outcomes being either interpreted as operational or emphasising the exploitative targets in the ambidextrous chain.

Additionally, the focus on financial self-interest through performance management systems, integrated as part of the evaluation process, facilitated a further layer of tension between contrasting evaluation stakeholders. Where evaluation was described as informal, characterised by inconsistent discretionary judgements, participants provided justifications, referencing organisational context (in the sense of influencer) in terms of size, culture or governance. The data suggested informal evaluation status also reflected participants’ attitudes to evaluation as a process of low strategic value, specifically where evaluation was implicit.

Multigenerational leadership pools

Generational moderators were reported as multifaceted. Participants universally identified four ambidextrous generational challenges impacting evaluation: succession; retention; values and behaviours; and expectations. When questioned, a third of participants reported succession as a categorical priority. At the same time, the data revealed that one of the most significant leadership challenges participants faced was the perceived vacuum beneath the incumbent executive, resulting from flawed succession strategy and retention. Participants made various references to the generational ‘character’ of a leadership group that found it difficult to hand over to the next generation, despite stated intentions.

As a generation we are not good at reaching down to the next level for succession. Therefore, evaluation data might be assumed to be vulnerable to distortion and bias. A poor response might be more about the flawed character of the leader rather than the programme itself. (CEO, third sector charity organisation)

At the other end of the succession continuum, half of the participants emphasised the need to develop leadership talent to build capacity and retain high potentials. Here, participants reported ‘cultural’ differences between generations over loyalty, expectations for advancement, organisational values and investment in career development. The data showed that conflicting and contradictory attitudes characterised the challenges a multigenerational workforce poses for evaluation.

Organisational context

Organisational context emerged from the data as the internal characteristics of the organisation. As a moderator, organisational context, often referred to by participants across the conceptual framework as ‘culture’, was found to be the ‘unique DNA’ of the organisation, shaping and impacting evaluation in terms of consistency of data, competency of data collection and dissemination, and the environment for engaging with evaluation on a wider strategic platform.

When directly questioned about data collection, usage and barriers, participants regularly cited organisational context as justification for evaluation limitations, informal approaches, and stances disassociated from the organisational strategy. Cultural dimensions encompassed several unique perspectives, including the influence of the Chief Executive Officer (CEO), cynicism towards a coaching environment, and internal governance. The data revealed that evaluation stakeholders across all sectors ranged along an operational/strategic continuum. At the operational extreme, evaluation was limited to an HR stakeholder measuring exploitative targets contradicting leadership strategy (expanded in the next section). At the other extreme, judgements for evaluation were made with questionable competency by ‘disinterested’ leaders, or by those with financial self-interest espousing individual perspectives on strategic alignment, hindering transformational strategies.

Sub-theme: Exploring evaluation moderators

The research investigated a number of evaluation approaches to explore how moderators might relate to each other, impact evaluation and provide a deeper understanding of barriers. The following example (Figure 4) of a collaborative evaluation approach illustrates the relationship dynamic between evaluation status and organisational context moderators, and their impact on evaluation, revealing a potential source for inconsistency and distortion through the distinct strategic and operational interpretations for strategy implementation and evaluation.

Figure 4: An example of moderators impacting collaborative evaluation by distorting strategic and operational perspectives

Executive Board (including HR): executive strategy for performance, as interpreted into the Human Resources Director implementation strategy
  • Mergers & acquisitions → Managing risk (legal)
  • Anticipate (opportunity generation) → Respond to complexity to manage change
  • Scalability (strategic, as part of the mergers and acquisitions business) → Scalability (operational) to manage multiple conflicting structures and systems
  • Allocation of resources to areas of high impact → Delivering a culture of collective understanding to manage strategic alignment
  • Capacity building → Recruitment and retention
  • Return on Investment → Coaching efficacy

In this example, where a private sector organisation is focused on mergers and acquisitions as a subordinate strategy, evaluation relies on the strategic interpretation of leadership outcomes by a non-strategic HRD (Human Resources Director). The schematic illustrates the ambidextrous outcomes of executive strategy, interpreted across the conceptual framework, in terms of an HR perspective for implementation. This shows how evaluation status and organisational context moderators impact evaluation, distorting the strategic targets of the executive through the operational focus of the HR function. In this case the strategic disconnect is the cause of the gap between the stated expectations of the organisation and the focus for evaluation. Moderators are seen to combine, creating problems for evaluation by corrupting the process, contradicting rather than serving the stated leadership coaching ambidextrous dimensions.

Emergent moderators produced explanations for the current disengaged state of evaluation practice, as well as a deeper understanding of what organisations need in the practical design of evaluation frameworks. This implied a potentially wider strategic contribution that could counter the evident disinterest in evaluation and its low priority status. Ambidexterity elicited data that enriches knowledge and, at the same time, raises the question of whether the current complex environment exacerbates practice challenges or is in fact an enabler for evaluation data collection and usage.

Implications for practice and three ideas for discussion

Findings from this research have implications for practice (organisational and coaching), specifically the practical impact of connecting ambidextrous strategy to evaluation for an integrated framework of strategic interest to stakeholders. These are presented as three ideas for future development. 

Expand existing ambidextrous frameworks to include evaluation of leadership coaching

This idea results from the finding that participants perceived ambidexterity as a strategic choice, making connections across the ambidextrous conceptual framework, building chains between exploitative and explorative targets as part of a subordinate strategy to achieve primary goals. The existing leadership frameworks participants had in place were representative of a new set of expectations, providing definition and structure for coaching dimensions around intangible behavioural goals. Furthermore, ambidexterity was found to successfully isolate the contributions of either exploitative or explorative outcomes by placing them in a strategic context, neutralising two historic barriers: strategic alignment and intangibles. 

At the same time, no participants were found to have extended these various frameworks to include evaluation. Therefore, it is suggested practitioners explore extant frameworks and systems, working with ambidexterity as an enabler for evaluation, rationalising new intangible coaching dimensions by explaining their contribution in the context of a primary goal. 

Place evaluation frameworks in a strategic chain

This second idea links to the claim that organisations are not motivated to pursue evaluation as it is limited both operationally and strategically. Findings around data collection addressed barriers by uncovering the evaluation system needs of participants, contributing insights and unique tools for a workable design, while data usage revealed the possible direction of evaluation to inform wider organisational strategies and raise its strategic value.

To expand the scope of evaluation data, it is suggested participants use ambidextrous frameworks to connect leadership coaching dimensions directly to subordinate strategies and a primary goal. To realise the potential of the evaluation function as an informant to the wider strategy, it is suggested these are positioned in a strategic chain as part of an integrated system. Designed around the unique needs of the organisation, an evaluation model produces data to then inform the delivery of subordinate strategies via ambidextrous frameworks, finally delivering the primary goals at the end of the chain. The suggestion that evaluation is a link in a strategic chain not only implies a wider influence for evaluation data but, as part of a chain, an evaluation framework might act more independently of dominant stakeholders and primary goals. 

Design evaluation systems around a unique inventory of needs to create practical dimensions

Participants welcomed the concept of an evaluation framework not limited to exploitative performance targets, with an expansive remit to inform wider organisational strategy. This third idea concerns the practical dimensions of a system connecting evaluation to strategy, built around ambidexterity. 

This study claims strategic ambidexterity, as the basis of a model for evaluation, is enabling on three levels: to address existing barriers, to alleviate emerging barriers, and to impact positively on limitations. In investigating the failure of participants to expand existing ambidextrous structures to include evaluation, the research examined the moderators for evaluation, systematically investigating problem areas to uncover six new dimensions for barriers: evaluation environment; bias and contradiction; responsibility for evaluation; quality of evaluation data; generational reach; and time management. 

Developed from the participant inquiry, these barriers were reoriented as corresponding practical evaluation needs: legitimisation of contradictory outcomes as part of a strategy for a primary goal; evaluation not distorted by internal reward and recognition systems; a coherent approach combining operational and strategic perspectives; consistency from evaluation practitioners; evaluation that supports succession; and evaluation recognising the different timelines reflecting the nature of the organisation and internal leadership hierarchies. Informed by such an inventory of needs, practitioners have the opportunity to customise the design of an evaluation system, emphasising key problematics or targets, so that it is relevant, workable, serves the organisational purpose, and is applicable to the organisation’s unique Return on Investment metric.

Conclusions

This research acknowledges a number of limitations. Firstly, the sample size was potentially a limiting factor, although it was deliberately confined to senior stakeholders to open up the decision-making process for insights into real-life organisational dilemmas (Wright, Zammuto, Liesch, Middleton, Hibbert, Burke & Brazil, 2016). Furthermore, it was felt valuable to the research to investigate different contexts in order to compare and contrast across sectors. Secondly, the research relied to a large extent on experienced-informed subjectivity to interpret interview responses in an exploratory study of an anomalous and contradictory environment. Finally, despite emphasising the distinction of examining the subject in a current and relevant context, the research acknowledges the speed of change in the organisational environment and was unable to capture participant responses reflecting events such as Brexit and Covid-19.

All participants reflected that an effective evaluation model was desirable and there was some perplexity that this was an under-resourced area of development and potentially a missed strategic opportunity. As part of the reimagined relationship of the organisation with evaluation, there are also significant implications for coaching practice, where it is envisioned that the coach has the potential to disseminate insights from evaluation data as an external neutral arbiter. This would effectively be an opportunity to limit organisational bias and contradiction, departmental silos and inconsistent judgements arising from succession strategies and HR perspectives that are not strategically aligned.

However, this takes the research focus outside the organisation and raises new questions for future investigation. While the moderators provide insights suggesting the coach is well placed to make a wider contribution, with evaluation as the facilitator of rich strategic data, there is also evidence that organisations are often reluctant to countenance external input. This raises a wider conceptual question with far reaching implications for the coaching industry: if a collaborative relationship is unachievable, does this compromise the coaching offering? Without access to the unique organisational context, the coaching intervention is liable to be restricted or limited, and at risk of disappointing all members of the tripartite relationship. This study claims evaluation provides advocacy and the tools for a strong collaborative relationship; in doing so, it asks questions of both organisational stakeholders and coaches who choose to ignore its potential wider strategic contribution.

References

Angrave, D., Charlwood, A., Kirkpatrick, I. et al. (2016) 'HR and Analytics: Why HR is set to fail the big data challenge', Human Resource Management Journal, 26(1), pp.1-11. DOI: 10.1111/1748-8583.12090.
Becker, B., Huselid, M. and Beatty, R. (2009) The differentiated workforce: Transforming talent into strategic impact. Boston, MA: Harvard Business School Press.
Beer, M. (2015) 'HRM at a Crossroads: Comments on “Evolution of Strategic HRM Through Two Founding Books: A 30th Anniversary Perspective on Development of the Field”', Human Resource Management, 54, pp.417-421. DOI: 10.1002/hrm.21734.
Bryman, A. and Bell, E. (2015) Business Research Methods (4th edn.). Oxford: Oxford University Press.
Cappelli, P. (2008) 'Talent management for the twenty-first century', Harvard Business Review, 86(3). Available at: https://hbr.org/2008/03/talent-management-for-the-twenty-first-century.
CIPD (2013) 'Chartered Institute of Personnel and Development (Reference 6174 CIPD 2013)', in Annual Survey Report Learning and Development, April.
CIPD (2014) 'Chartered Institute of Personnel and Development (Reference 6477 CIPD 2014)', in Annual Survey Report Learning and Development, April.
CIPD (2015) 'Chartered Institute of Personnel and Development (Reference 6942 CIPD 2015)', in Annual Survey Report Learning and Development, May.
Easterby-Smith, M., Thorpe, R., Jackson, P. and Lowe, A. (2012) Management Research. London: Sage.
Ely, K., Boyce, L., Nelson, J. et al. (2010) 'Evaluating leadership coaching: A review and integrated framework', The Leadership Quarterly, 21(4), pp.585-599.
Ericksen, J. and Dyer, L. (2005) 'Toward a strategic human resource management model of high reliability organisation performance', International Journal of Human Resource Management, 16(6), pp.907-928. DOI: 10.1080/09585190500120731.
Fernandez-Araoz, C., Groysberg, B. and Nohria, N. (2011) 'How to Hang on to Your High Potentials', Harvard Business Review, pp.76-83. Available at: https://hbr.org/2011/10/how-to-hang-on-to-your-high-potentials.
Hall, D., Otazo, K. and Hollenbeck, G. (1999) 'Behind closed doors: What really happens in executive coaching', Organisational Dynamics, 27(3), pp.39-53. DOI: 10.1016/S0090-2616(99)90020-7.
Harding, C., Sofianos, L. and Box, M. (2018) Exploring the Impact of Coaching (Stimulus Paper). London: Leadership Foundation for Higher Education.
Hatum, A. (2010) Next Generation Talent Management: Talent Management to Survive Turmoil. New York: Palgrave Macmillan.
Kaufman, B. (2015) 'Evolution of Strategic HRM Through Two Founding Books: A 30th Anniversary Perspective on Development of the Field', Human Resource Management, 54(3), pp.389-407. DOI: 10.1002/hrm.21720.
Kirkpatrick, D. (1959a) 'Techniques for evaluation programs', Journal of the American Society of Training Directors, 13(11), pp.3-9.
Kirkpatrick, D. (1959b) 'Techniques for evaluation programs - Part 2', Journal of the American Society of Training Directors, 13(12), pp.21-26.
Lepak, D. and Shaw, J. (2008) 'Strategic HRM in North America: Looking to the future', International Journal of Human Resource Management, 19, pp.1486-1499. DOI: 10.1080/09585190802200272.
March, J. (1991) 'Exploration and exploitation in organisational learning', Organisation Science, 2, pp.71-87.
McDonnell, A. (2011) 'Still Fighting the ‘War for Talent’? Bridging the Science Versus Practice Gap', Journal of Business and Psychology, 26(2), pp.169-173. DOI: 10.1007/s10869-011-9220-y.
Michaels, E., Handfield-Jones, H. and Axelrod, B. (2001) The war for talent. Boston, MA: Harvard Business School Press.
O'Reilly, C. and Tushman, M. (2004) 'The ambidextrous organisation', Harvard Business Review, pp.74-83. Available at: https://hbr.org/2004/04/the-ambidextrous-organization.
O'Reilly, C. and Tushman, M. (2013) 'Organisational ambidexterity: Past, present and future', Stanford Research Paper Series. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2285704.
Quinlan, C. (2011) Business Research Methods. London: Cengage Learning.
Silverman, D. (2006) Interpreting qualitative data: Methods for analysing talk, text and interaction (3rd edn.). Thousand Oaks, CA: Sage.
Skillings, P. (2008) Escape from Corporate America. New York: Ballantine Books.
Spradley, J. and McCurdy, D. (1972) The Cultural Experience. Chicago: Science Research Associates.
Stokes, P. and Wall, T. (2014) Research Methods. London: Palgrave.
Thunnissen, M., Boselie, P. and Fruytier, B. (2013) 'Talent management and the relevance of context: Towards a pluralistic approach', Human Resource Management Review, 23(4), pp.326-336.
Wall, T., Jamieson, M., Csigás, Z. and Kiss, O. (2017) Research Policy and Practice Provocations: Coaching evaluation in diverse landscapes of practice – towards enriching toolkits and professional judgement. Brussels: The European Mentoring and Coaching Council.
Wall, T., Iordanou, I., Hawley, R. and Csigás, Z. (2016) Research Policy and Practice Provocations: Bridging the Gap: Towards Research that Sparks and Connects. Brussels: The European Mentoring and Coaching Council.
Woolf, S. (2008) 'The meaning of translational research and why it matters', Journal of the American Medical Association, 299(2), pp.211-213. DOI: 10.1001/jama.2007.26.
Wright, P. and McMahan, G. (2011) 'Exploring human capital: putting ‘human’ back into strategic human resource management', Human Resource Management Journal, 21, pp.93-104. DOI: 10.1111/j.1748-8583.2010.00165.x.
Wright, A., Zammuto, R., Liesch, P. et al. (2016) 'Evidence-Based Management in Practice: Opening Up the Decision Process, Decision-maker and Context', British Journal of Management, 27(1), pp.161-178. DOI: 10.1111/1467-8551.12123.
Yapp, M. (2009) 'Measuring the ROI of talent management', Strategic HR Review, 8(4), pp.5-10. DOI: 10.1108/14754390910963856.

About the authors

Dr Mark Jamieson is a leadership coach specialising in women in business and youth leadership. He is the founder of The Jamieson Partnership and the GreenWing Project working in underserved areas promoting young people. He is currently working on the book version of this paper, Evaluating Leadership Coaching for the Future: why evaluation counts and how to do it, and researching a new book on youth homelessness.

Professor Tony Wall is Founder and Head of the International Centre for Thriving, a global scale collaboration between business, arts, health, and education to deliver sustainable transformation. He has published 200+ works, including articles in quartile 1 journals such as The International Journal of Human Resource Management, Journal of Cleaner Production, and Vocations & Learning, as well as global policy reports for the European Mentoring & Coaching Council in Brussels and Lapidus International which have been translated into 20 languages. His academic leadership and impact has attracted prestigious recognition through The Advance-HE National Teaching Fellowship (awarded to less than 0.2% of the sector) and multiple Santander International Research Excellence Awards. He actively collaborates and consults with large organisations and is developing licenses to enable wider global impact of this work.

Dr Neil Moore is the MBA (WBIS) Programme Director in the Centre for Work Related Studies at the University of Chester. He lectures, tutors and consults in a range of business and management areas, including international business, management development, contemporary management issues in small and medium sized enterprises and sport management. His interest in business, management and sport led to his doctoral research into business management practices in the English professional football industry. He has also researched and published in a range of other areas including coaching, talent management, organizational behaviour, event management and research methodology.
