How to Measure Results of Leadership Development

Ultimate Guide to Leadership Development

What Does Success Look Like in Leadership Development?

We’ve all heard the adage “You can’t manage what you don’t measure.” Unfortunately, research shows that only 18% of organizations measure the business impact of leadership development initiatives. But in most companies, senior leaders are increasingly demanding clearer measures of results.

The biggest challenge to measurement is that it’s often an afterthought. Without key measures and metrics built in along the way, it can be hard to go back and collect the data you need. That’s why it’s critical to think about measurement first.

To begin planning how you’ll measure results, ask the following questions:

  • How will this leadership program impact the business, and is it in line with my stakeholders’ needs?
  • What would success look like one year from now? Three years from now?
  • What data will be valuable to my stakeholders?
  • What data collection methods do I have available?
  • Who is involved and accountable for tracking progress and measuring results?

Unsurprisingly, when you can show proof of results, you not only benefit the business but build your own credibility. As a result, you may have an easier time gaining the support you need for future initiatives.

In this section, we’ll walk you through several different ways to demonstrate results.


Leadership Can Be Learned, but How Is It Measured?

How do you measure “good” leadership? With our roots in behavioral psychology, DDI’s approach focuses on understanding leadership as observable and measurable behaviors that can change over time. We believe that leadership programs can and must deliver behavior change to demonstrate success.

Of course, it’s not the easiest thing to measure behavior change. It’s much simpler to measure success based on “checking the box” that people attended or completed a certain training. But that doesn’t guarantee that leaders are really developing and using the skills they need to be better leaders.

Whether you’re looking to measure simple results like attendance or connect development more deeply to business results, the key is to set up your program from the beginning to collect the right types of data.

In this section, we’ll cover the Kirkpatrick Model, the most widely used approach among L&D practitioners for measuring impact. The model evaluates learning and training across four levels.


Level 1 Evaluation: Measure Reaction

How do your leaders respond to your program? In the Kirkpatrick Model, this level is the degree to which participants find the training favorable, engaging, and relevant to their jobs. While many organizations measure Level 1 with “smile sheets,” this model goes beyond participant satisfaction and also includes:

  • Engagement: The degree to which participants are actively involved in and contributing to the learning experience.
  • Relevance: The degree to which training participants will have the opportunity to use or apply what they learned in training on the job.

Alone, these measures don’t tell you whether leaders have actually gained skills. But they can help you predict two key things:

  • Personal Motivation: Are leaders personally invested in the program? When leaders have high personal motivation, research has shown that they are more likely to apply skills back on the job.
  • Job Relevance: Did leaders see the content as applicable to their jobs, and did the program give them opportunities to practice new skills before applying them back on the job?

From our own research, personal motivation and job relevance are top factors that predict application of skills or behavior change. So while they don’t prove that leaders have changed their behavior, they are positive indicators that your program is headed in the right direction.

However, many companies stop here. While this feedback is important to understanding how well you’re engaging participants, it doesn’t show whether participants change their behavior on the job.


Level 2 Evaluation: Measure Learning

What should your leaders retain from your program? Kirkpatrick defines this level as the degree to which participants acquire the intended knowledge, skills, attitude, confidence, and commitment based on their participation in the training.

You can determine learning through post-program checks or by testing before and after the training to measure progress. First, you’ll need to identify specific learning outcomes that you want your program to deliver. Then you’ll need to test for those outcomes both before and after the training.
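To make the pre/post idea concrete, here's a minimal sketch in Python, assuming simple percentage scores on a knowledge check for each learning outcome (the scores and the number of outcomes below are hypothetical):

```python
def learning_gain(pre_scores, post_scores):
    """Average pre-to-post improvement across learning outcomes, in percentage points."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical knowledge-check scores (percent correct) for three learning outcomes
pre = [55, 60, 40]
post = [80, 85, 70]
print(f"Average gain: {learning_gain(pre, post):.1f} points")  # Average gain: 26.7 points
```

A positive average gain on your targeted outcomes suggests the program delivered the intended learning, and comparing gains outcome by outcome can flag topics that need reinforcement.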

For example, DDI’s online courses include informal and formal knowledge checks. We also provide knowledge checks for many of our classroom courses.

These checks confirm that leaders understand the theory behind what to do, but they don't reveal whether leaders can actually demonstrate the skill.


Level 3 Evaluation: Measure Behavior Change

Are your leaders applying what they learned in the program? In the Kirkpatrick Model, this level is the degree to which participants apply what they learned during training when they are back on the job.

Keep in mind that seeing behavior change can take time. Leaders may need to build their confidence and find the right opportunities to apply their knowledge and learning. You can measure behavior change through surveys or interviews, particularly with managers and direct reports of your program participants. It is imperative that survey participants have a clear understanding of the behaviors or competencies required for leaders to be successful.

At DDI, we measure behavior change by comparing how often leaders engage in effective leadership behaviors before versus after development. It’s important that managers, peers, and direct reports provide observations of behavior change. Program participants can also self-report their own change, which helps them to reflect on their own growth.
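As a rough sketch of this before/after comparison (the behaviors, ratings, and 1–5 frequency scale below are illustrative, not DDI's actual instrument):

```python
# Hypothetical multi-rater frequency ratings (1 = rarely, 5 = consistently),
# averaged across manager, peer, and direct-report observations
before = {"coaching": 1.9, "delegating": 3.1, "giving feedback": 2.4}
after = {"coaching": 3.2, "delegating": 3.6, "giving feedback": 3.8}

# Report the shift in how often each target behavior is observed
for behavior in before:
    change = after[behavior] - before[behavior]
    print(f"{behavior}: {before[behavior]:.1f} -> {after[behavior]:.1f} ({change:+.1f})")
```

Behaviors with little or no movement point to skills that may need follow-up practice or manager reinforcement.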


Level 4 Evaluation: Measure Results

How do better leaders (i.e., behavior change in your program’s participants) impact the business? According to Kirkpatrick, this level measures the degree to which targeted outcomes occur as a result of the training. This is the most comprehensive form of measurement, and it’s also where you’re most likely to capture the attention of your stakeholders in the business.

Here are some examples from our research on how organizations evaluated bottom-line results after a DDI leadership program:

  • Reduced Turnover: More than 700 leaders from a global IT solutions organization participated in a DDI leadership development program aimed at improving employee productivity and performance. For teams involved in the program, employee turnover dropped from 20.4% to 4.8%.
  • Increased Sales: After implementing a leadership program for sales managers, a pharmaceutical company experienced an overall 105% increase in sales volume. Sales productivity increased by an average of 68% per representative whose managers completed the DDI program.
  • Improved Safety: To promote a culture that supports employee development, motivation, and retention, nearly 400 employees from a manufacturing company participated in a DDI leadership program. Accidents decreased by 70%, and employee turnover fell by 90%.

Examples of Calculating ROI

When you quantify the financial impact on business, you can easily calculate the Return on Investment (ROI) based on the cost of your leadership program. Here are two more examples where organizations calculated the ROI of their leadership development program:

  • Higher Productivity: An automotive manufacturing company introduced a DDI program into several manufacturing plants with a history of low productivity and performance problems. Similar “control” plants were selected as a comparison group, and metrics for quality, on-time delivery of parts, productivity, health and safety, and absenteeism were tracked to determine impact. Compared to the control plants, the program plants showed a 21% improvement in productivity, translating into an estimated $4.4 million in return.
  • Increased Cross-Sales: After their supervisors completed a DDI program, bank tellers demonstrated substantial improvements in productivity: they generated approximately four times more business referrals and new loans per month and increased cross-selling (the number of loans sold with additional life insurance) by 233%. Teller overtime was also reduced by 92% overall.
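The ROI arithmetic itself is straightforward: net benefit divided by program cost. As a sketch, using the $4.4 million estimated return from the productivity example above and a hypothetical program cost of $500,000:

```python
def roi_percent(benefit, cost):
    """Return on investment as a percentage: net gain relative to cost."""
    return (benefit - cost) / cost * 100

# $4.4M estimated return; the $500K program cost is a hypothetical figure
print(f"ROI: {roi_percent(4_400_000, 500_000):.0f}%")  # ROI: 780%
```

The hard part isn't the formula; it's quantifying the benefit in dollars, which is why the control-group comparisons and tracked metrics in the examples above matter so much.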

Measuring Implementation Support

The success of your leadership development initiative also depends on your implementation plan and what support is available to learners. As part of our Impact Evaluation service, DDI measures the following:

  • Environmental Support: Does senior management strongly support the program? Are there opportunities to apply newly learned skills on the job? Are there any barriers to leaders participating in additional development opportunities? These external factors can inhibit or accelerate the effectiveness of your leadership program.
  • Manager Reinforcement: Are managers champions of development and do they show support? Do participants discuss opportunities to apply new skills with their managers? Manager support is one of the top three predictors for behavior change.
An impact evaluation helps you zero in on what’s working well and what’s not by identifying issues and gaps, so you can improve your leadership development program.

Don’t Forget the Lead Indicators

While your ultimate goal is to drive business results and create behavior change in leaders, monitoring early lead indicators can help you determine whether you’re on track.

Lead indicators describe the effectiveness of your current strategy and indicate future outcomes. They can include:

  • Percentage of leaders reached across your organization
  • Participation or attendance rate of learners
  • Participation of managers of learners in manager support sessions
  • Content accessed (started or launched) from your LMS, LXP, or online learning platform
  • Completion rate for content
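Most of these lead indicators are simple ratios over your enrollment data. A minimal sketch, assuming a hypothetical roster export (the field names are illustrative, not from any particular LMS):

```python
# Hypothetical roster: one record per enrolled leader
roster = [
    {"name": "A", "attended": True, "content_started": True, "completed": True},
    {"name": "B", "attended": True, "content_started": True, "completed": False},
    {"name": "C", "attended": False, "content_started": False, "completed": False},
    {"name": "D", "attended": True, "content_started": False, "completed": False},
]

def rate(records, key):
    """Share of learners (as a percentage) for whom the given flag is true."""
    return sum(r[key] for r in records) / len(records) * 100

print(f"Attendance rate: {rate(roster, 'attended'):.0f}%")         # 75%
print(f"Content accessed: {rate(roster, 'content_started'):.0f}%")  # 50%
print(f"Completion rate: {rate(roster, 'completed'):.0f}%")         # 25%
```

A widening gap between these ratios (e.g., high attendance but low completion) is itself a useful early-warning signal before behavior-change data is available.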

Create a Measurement Plan

As we mentioned, one of the biggest challenges with measuring the effectiveness of leadership development is that it's often treated as an afterthought. Without planning ahead of time, it's much harder to collect data on program success.

That’s why it’s so important to create a plan to measure results as you design your program. A simple measurement planning grid can help you stay organized and on track. For each success metric you identify, you will need to determine:

  • Data sources
  • Data collection timeline or due date
  • Data output
  • Accountability
  • Any issues or support needed
Example measurement plan: a grid with columns for Metric, Data Sources, Collection, Output, Accountability, and Reporting, and rows for metrics such as content accessed from the LMS, Level 1: Reaction, and Level 2: Learning, noting for each when data collection happens, what the outputs are, and who is accountable.

Measure Results to Plan Next Steps

Don’t forget to celebrate success! If you fall short on some goals, use it as an opportunity to learn and make improvements.

With a strong plan to measure results, you’ll have data that demonstrates the impact of your programs. You'll also have data to show the value you and your team bring to your organization. Plus, measuring results can create a data-driven approach to plan and design next steps.