The Validity of DDI Assessment Centers

In 1956 DDI cofounder Douglas Bray introduced the first industrial application of the assessment center at AT&T. Since then, researchers have studied assessment center validity and documented thousands of successful applications (e.g., Bray, Campbell, & Grant, 1977; Byham, 1970; Hunter & Hunter, 1984; Schippmann, Prien, & Katz, 1990; Schmitt, Gooding, Noe, & Kirsch, 1984; Thornton & Byham, 1982). Organizations now use assessment centers for a wide variety of purposes, including selection, placement, early identification of management potential, promotion, development, career management, and training. There is no question that assessment centers predict both current on-the-job performance and future performance.

DDI has been significantly involved in advancing assessment center technology. Our work with videotaped assessment, behavioral rating methods, and automated assessment has made us leaders in this field. We have repeatedly demonstrated our ability to create assessment centers with strong construct, content, and predictive validity.

Examples of Valid Assessment Centers

A meta-analysis of 50 assessment center studies, containing 107 validity coefficients, demonstrates that assessment centers show strong predictive validity (Gaugler, Rosenthal, Thornton, & Bentson, 1987).

Validity coefficients show the strength of the relationship between assessment center scores and other methods for assessing performance (0 indicates no relationship; 1 indicates a perfect relationship). These relationships are consistent across assessment centers used for a variety of different purposes (e.g., promotion, selection, etc.). Although results from individual studies vary, this comprehensive study definitively supports the value of assessment centers, as shown in Table 1.
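As a simple illustration of how a validity coefficient is computed, the sketch below correlates assessment center scores with later performance ratings. The data are invented for illustration and do not come from any study cited here.

```python
# Sketch: a validity coefficient as the Pearson correlation between
# assessment center ratings and later job performance ratings.
# All values below are invented for illustration only.
import numpy as np

assessment_scores = np.array([3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.6, 4.2])
performance_ratings = np.array([3.0, 4.3, 2.5, 3.7, 4.6, 3.2, 3.4, 4.0])

# np.corrcoef returns the correlation matrix; the off-diagonal entry is the
# validity coefficient (0 = no relationship, 1 = perfect relationship).
validity = np.corrcoef(assessment_scores, performance_ratings)[0, 1]
print(f"Validity coefficient: {validity:.2f}")
```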

Table 1: Weighted Validities Corrected for Artifacts

Coefficient
Total .37
Performance Criteria
  Ratings of General Performance .36
  Ratings of General Potential .53
  Ratings on Dimensions .33
  Performance in Training .35
  Career Advancement .36
Purpose of Assessment Center
  Promotion .30
  Early Identification .46
  Selection .29
  Research .48

DDI designed an assessment center for an appliance-manufacturing organization to develop existing leaders and prepare them for advancement to higher organizational levels. The center provided participants with feedback and linked their strengths and weaknesses to a long-term mentoring process. During the 6 to 12 months after the assessment, the participants' managers provided ratings of their current job performance along the same dimensions (or competencies) measured in the assessment.

Table 2 shows that assessment center ratings correlate significantly with job performance ratings.

Table 2: Appliance-Manufacturing Organization

Assessment Ratings Coefficient
Customer Focus .37**
Visionary Leadership .33**
Empowerment .30*
Managing the Job .38*

N = 49, *p < .05, **p < .01
Correlations adjusted for reliability of the criteria.
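The note that the correlations were adjusted for the reliability of the criteria refers to the standard correction for attenuation. A minimal sketch is shown below; the criterion reliability used is hypothetical, not a value reported for this study.

```python
# Sketch: adjusting an observed validity coefficient for unreliability in the
# criterion (correction for attenuation). The reliability value below is a
# hypothetical example, not a figure taken from Table 2.
import math

r_observed = 0.30             # observed correlation between assessment and criterion
criterion_reliability = 0.80  # hypothetical reliability of the performance ratings

r_corrected = r_observed / math.sqrt(criterion_reliability)
print(f"Corrected validity: {r_corrected:.2f}")  # approximately .34
```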

The Management Assessment Program at Northern Telecom was designed to determine candidates’ readiness for promotion into management and to diagnose their developmental needs. To achieve this, DDI worked with Northern Telecom to identify relevant job dimensions and create an assessment center. The assessment center eventually included a variety of assessment techniques and provided participants with detailed reports about their performance on the dimensions. To validate the assessment center scores, performance criteria data were collected from participants’ peers. Table 3 shows that the assessment center ratings had several sizeable correlations with the job performance data.

Table 3: Northern Telecom

Assessment Center Dimensions Coefficient
Customer Service Orientation .24*
Influence .22*
Innovation .30*
Job Fit .34*
Multiple R .38

N = 61, *p < .05

A telecommunications company worked with DDI to design a two-day assessment center that included 11 exercises constructed to simulate a “day in the life” of a vice president in a progressive, high-technology organization. Approximately 140 directors were rated on each of the 11 dimensions included in the executive profile. The ratings addressed participants’ potential performance at the VP level and the areas in which their performance needed improvement (the rating categories were Development Needed, Acceptable, and Superior).

Ratings from the executive assessment process were in high agreement with supervisor ratings of potential performance at the vice president level and of development need (Table 4). In other words, participants who were rated highly by their supervisors were more likely to be rated highly in the assessment center.

Table 4: Telecommunications Company (percent agreement with supervisor ratings)

Dimension  Performance Potential  Development Need
Customer Service/Marketplace Focus 94% 95%
Oral Communication/Presentation 93% 92%
Problem Solving/Decision Making 89% 91%
Planning & Execution 87% 87%
Commitment to Team Approach/Empowerment 87% 86%
Developing Organizational Talent 86% 85%
Initiative 86% 83%
Executive Disposition 85% 83%
Managing Performance 83% 83%
Professional Knowledge/Global Awareness 81% 81%
Strategic Vision 60% 76%

N = 140
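An agreement percentage like those in Table 4 can be obtained by comparing the two sets of categorical ratings participant by participant. The sketch below is a hypothetical illustration of a simple percent-agreement calculation; the exact agreement statistic used in the study is not described here.

```python
# Sketch: percent agreement between assessment center and supervisor ratings
# on the three categories (Development Needed, Acceptable, Superior).
# The ratings below are invented for illustration.
assessment = ["Acceptable", "Superior", "Acceptable", "Development Needed", "Superior"]
supervisor = ["Acceptable", "Superior", "Development Needed", "Development Needed", "Superior"]

matches = sum(a == s for a, s in zip(assessment, supervisor))
agreement = matches / len(assessment)
print(f"Percent agreement: {agreement:.0%}")  # 80%
```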

DDI used a supervisory assessment center at a utility company to evaluate candidates for promotion into the position of Supervisor–Distribution Lines. The assessment center used four simulations to evaluate candidates on 10 dimensions of job performance. Assessment center ratings were correlated with a composite measure of job performance ratings by 33 supervisors who had been in their positions for at least a year. The validity study found a multiple correlation of R = .45, which indicates a meaningful relationship between assessment center ratings and ratings of job performance.
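A multiple correlation (R), such as the .45 reported above or the .38 in Table 3, summarizes how well a set of dimension ratings jointly predicts a performance criterion. The sketch below illustrates the calculation with simulated data; it does not reproduce either study.

```python
# Sketch: estimating a multiple correlation (R) between several assessment
# dimension ratings and a composite job performance measure.
# All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 33                                # illustrative sample size
dimensions = rng.normal(size=(n, 4))  # ratings on four assessment dimensions
performance = dimensions @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(size=n)

# Fit an ordinary least-squares model; R is the correlation between the
# predicted and observed performance scores.
X = np.column_stack([np.ones(n), dimensions])
coefs, *_ = np.linalg.lstsq(X, performance, rcond=None)
predicted = X @ coefs
multiple_r = np.corrcoef(predicted, performance)[0, 1]
print(f"Multiple R: {multiple_r:.2f}")
```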

Linking Assessment and Development

Executives themselves also report a high level of perceived value following their participation in an executive development program. Dusenbury (1993) reported reactions from 64 executives who participated in a program and then prepared their development plans following feedback. All but one of the participants (98 percent) described the assessment portion of the program as a “vital or very important” developmental tool. Also, nearly half of the participants (49 percent) rated assessment as the “single most important” type of developmental tool. This rating was much higher than the ratings for training (27 percent), mentoring (7 percent), and skill projects (5 percent).

Rifkin and Heine (1993) also reported highly positive perceptions from participants of an executive assessment and development program. Examples of positive outcomes, which were reported up to 18 months following the completion of the development center, included:

  • Becoming more effective managers.
  • Identifying specific ways to enhance skills.
  • Gaining a clearer understanding of strengths and developmental needs.
  • Earning an increase in compensation due to the development.

A coatings, paint, and glass manufacturer worked with DDI to introduce an Executive Development Process (EDP) for more than 100 of its high-potential executives. After participants engaged in a full day of assessment, created development plans, and took part in developmental activities, the organization observed many positive results, including:

  • 90 percent of participants were rated as highly motivated to develop.
  • 68 percent of participants were rated as showing a high level of improvement in knowledge or awareness.
  • 53 percent of participants were rated as showing a high level of skill improvement.
  • 73 percent said the development process was highly valuable.
  • Validity coefficients for the EDP were significant and within commonly accepted standards (e.g., the correlation between supervisor ratings and assessment center performance was .28).
  • 95 percent of participants readily accepted assessment feedback as accurate and as reflecting their true skills; 96 percent expressed that they would use the feedback to develop and create an action plan for improvement.

At a two-year follow-up of EDP participants, assessment center results showed a statistically significant improvement for 7 out of 11 dimensions targeted for development.
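One common way to test for such pre/post improvement is a paired comparison of each dimension’s ratings at the original assessment and at follow-up. The sketch below applies a paired t-test to invented scores for a single dimension; the study’s actual analysis may have differed.

```python
# Sketch: testing whether ratings on one dimension improved between an
# initial assessment and a two-year follow-up, using a paired t-test.
# The scores are invented; scipy is assumed to be available.
import numpy as np
from scipy import stats

baseline = np.array([2.8, 3.1, 2.5, 3.0, 2.9, 3.2, 2.7, 3.0])
follow_up = np.array([3.2, 3.4, 2.9, 3.1, 3.3, 3.5, 2.8, 3.4])

t_stat, p_value = stats.ttest_rel(follow_up, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```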

How DDI Creates Valid Assessment Centers

Assessment requires rigorous standards for ensuring accuracy. DDI carefully follows established guidelines to create and manage our assessment centers. In fact, we have published procedural and ethical guidelines for assessment centers that have been endorsed by the International Congress for the Assessment Center Method. The following components describe DDI’s actions to promote valid and useful assessment centers.

Job Analysis

  • People knowledgeable about the target job (i.e., incumbents and supervisors) provide behavioral examples of job activities required for success on the job. Ideally, a representative sample of job content experts then rates the frequency and importance of the job activities.
  • Job analysts identify an initial set of competencies based on multiple examples of relevant behaviors described in the job activities.
  • Job analysts select the final competencies based on job content experts’ ratings of importance. The rating questionnaire includes representative job activities as well as the critical behaviors (Key Actions) that define the competency. (A simple sketch of this selection step follows this list.)
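A minimal sketch of the competency-selection step is shown below. The competency names, rating scale, and cutoff are hypothetical; in practice the decision also weighs frequency ratings and professional judgment.

```python
# Sketch: selecting final competencies from job content experts' ratings.
# Competency names, the 1-5 scale, and the cutoff are hypothetical.
ratings = {
    # competency: list of (importance, frequency) ratings from experts
    "Customer Focus":  [(5, 4), (4, 4), (5, 5)],
    "Delegation":      [(3, 2), (2, 3), (3, 2)],
    "Decision Making": [(5, 5), (4, 4), (4, 5)],
}

IMPORTANCE_CUTOFF = 4.0  # hypothetical threshold for retaining a competency

selected = []
for competency, expert_ratings in ratings.items():
    mean_importance = sum(imp for imp, _ in expert_ratings) / len(expert_ratings)
    if mean_importance >= IMPORTANCE_CUTOFF:
        selected.append((competency, round(mean_importance, 2)))

print(selected)  # [('Customer Focus', 4.67), ('Decision Making', 4.33)]
```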

Simulation Design

  • Exercises or simulations are designed to elicit behaviors similar to those expected on the job and to reflect a significant component of the job activities. One method for judging the relevance of the simulation is to compare activities in the Job Activity Questionnaire with those in the simulation.
  • We target each exercise to only a few dimensions. We do not create a simulation and then see whether it measures a competency; rather, we specifically design it to elicit behavior related to particular competencies.
  • The simulation designer includes prompts to elicit behavior related to the Key Actions that define each competency. This ensures that a participant can demonstrate all the important aspects of a competency in the exercise. (A simple coverage check of this kind is sketched after this list.)
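The sketch below illustrates the kind of coverage check implied above: confirming that a draft simulation contains prompts for every Key Action of the competencies it targets. The competency and Key Action names are hypothetical.

```python
# Sketch: checking that a simulation includes prompts for every Key Action
# of its targeted competencies. Names are hypothetical placeholders.
targeted_key_actions = {
    "Customer Focus": {"identify needs", "respond to concerns", "follow up"},
    "Decision Making": {"gather information", "weigh alternatives", "commit to action"},
}

# Key Actions that the draft simulation's prompts are designed to elicit.
prompted_key_actions = {"identify needs", "respond to concerns",
                        "gather information", "weigh alternatives", "commit to action"}

for competency, key_actions in targeted_key_actions.items():
    missing = key_actions - prompted_key_actions
    if missing:
        print(f"{competency}: add prompts for {sorted(missing)}")
    else:
        print(f"{competency}: all Key Actions covered")
```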

Assessor Standards

  • Assessors are given extensive training to differentiate participants’ behaviors related to the competencies being measured. This includes general training on evaluating competencies as well as specific training on particular exercises.
  • Rating a competency involves human judgment. DDI provides assessors with behavioral examples to guide their judgment and holds calibration sessions to establish and confirm standards.
  • After documenting behavioral examples from simulations, assessors evaluate the relevant Key Actions. They use holistic judgment in rating the competency, based on their Key Action ratings, the relative importance of the different Key Actions, and the overall result of the participant’s behaviors in achieving the purpose of the competency. (A simplified illustration of combining Key Action ratings follows this list.)
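As a simplified illustration only, the sketch below combines hypothetical Key Action ratings into a dimension rating using importance weights. Actual assessor ratings rely on holistic judgment rather than a fixed formula.

```python
# Sketch: combining Key Action ratings into a dimension rating. In practice,
# assessors apply holistic judgment; this weighted average is a simplified
# illustration, and the Key Actions, weights, and scale are hypothetical.
key_action_ratings = {
    "Identifies customer needs": 4,
    "Responds to customer concerns": 3,
    "Follows up to ensure satisfaction": 5,
}

# Hypothetical relative importance of each Key Action within the competency.
weights = {
    "Identifies customer needs": 0.40,
    "Responds to customer concerns": 0.35,
    "Follows up to ensure satisfaction": 0.25,
}

dimension_rating = sum(key_action_ratings[k] * weights[k] for k in key_action_ratings)
print(f"Dimension rating (guide only): {dimension_rating:.1f}")  # 3.9
```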

Technology Integrity

  • DDI continually checks agreement among our internal assessors. This takes many forms, including double scoring, quality checking, and formal reliability analyses. If assessors agree that observed behaviors illustrate the same level of performance on the same competency, then they have evidence that the simulation is eliciting behavior related to that competency.
  • DDI continually checks the distribution of Key Action and dimension ratings from the assessment exercises we score. If competency ratings are normally distributed over a large sample of participants, then there is evidence that the exercise is eliciting a range of behaviors related to the competency.
  • If assessors observe no behavior for a Key Action, they rate it as “0.” If a large number of “0” ratings appears in the frequency distributions for a design, it is a sign that the simulation is not eliciting that Key Action. DDI then adjusts the simulation design to ensure that it fully measures the desired competency. (A brief sketch of these quality checks follows this list.)
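The sketch below illustrates simple versions of the three checks described above: double-scoring agreement, the distribution of ratings, and the frequency of “0” ratings. The rating data and scale are hypothetical.

```python
# Sketch: simple quality checks on assessor ratings of the kind described
# above. Rating data and the 0-5 scale are hypothetical.
import numpy as np

# Two assessors' ratings of the same participants on one Key Action
# (0 = no behavior observed, 1-5 = level of performance demonstrated).
assessor_a = np.array([3, 4, 0, 2, 5, 3, 4, 1, 3, 2])
assessor_b = np.array([3, 5, 0, 2, 4, 3, 4, 2, 3, 2])

# 1. Double-scoring check: agreement between assessors.
agreement = np.corrcoef(assessor_a, assessor_b)[0, 1]
print(f"Inter-assessor correlation: {agreement:.2f}")

# 2. Distribution check: a wide spread of ratings suggests the exercise
#    elicits a range of behavior related to the competency.
values, counts = np.unique(assessor_a, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))

# 3. Zero-rating check: many "0" ratings suggest the simulation is not
#    eliciting the Key Action and the design should be revisited.
zero_rate = np.mean(assessor_a == 0)
print(f"Proportion of '0' ratings: {zero_rate:.0%}")
```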
     
