Apr 13, 2011
 

Training – The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training 

Training: By Robert O. Brinkerhoff 

The training problem and the training solution. Despite the fact that effective human resource development (HRD) operations are vital to overall organizational success, most organizations do not evaluate the impact of, and return on, their training investments as thoroughly as they could and should. Traditional evaluation models and methods, with their focus on simply assessing the scope of training’s effect, do little to help reap greater performance and organizational impact from HRD and, in fact, can even undermine this purpose.

This article argues that it is performance on the job, not HRD activity alone, that achieves (or does not achieve) results; thus, impact evaluation must inquire more broadly into the performance management context. Consequently, the Success Case Method (SCM) is presented and discussed. The final portion of the article presents a case study, drawn from a recent SCM evaluation project for a major business client, that demonstrates and illustrates how the method works.

Keywords: training evaluation; success case method; SCM; performance improvement

In today’s globally competitive, rapidly changing markets, marked by constant technological advancement, training is a given. Doing training well—getting results from learning investments—is a must, not a choice. The pace of change persistently shortens the “shelf life” of employee capability. Organizations must continuously help employees master new skills and capabilities. The central training challenge for organizations today is how to leverage learning—consistently, quickly, and effectively—into improved performance.

Responsibility for creating and maintaining performance improvement does not typically lie with any one individual or organization unit; rather, it is a diffuse responsibility shared among senior executives, line management, human resource development (HRD) professionals, and perhaps others such as quality engineers or internal consultants. This diffusion of responsibility poses a severe challenge for HRD professionals and especially for the task of evaluating HRD initiatives. 

A Whole Organization Strategy for Evaluation of Training

Achieving performance results from training is a whole organization challenge. It cannot be accomplished by the training function alone. Despite this reality, virtually all training evaluation models are construed conceptually as if training were the object of the evaluation. Improving the quality and enjoyability of a wedding ceremony may reap some entertaining outcomes for wedding celebrants, but this may do little to create a sustained and constructive marriage. We in the HRD profession continue to rely on evaluation methods and models that evaluate the wedding (a training program) when what we need is a more productive marriage—sustained performance improvement that adds value. What is needed is evaluation of how well the organization uses training. This would focus inquiry on the larger process of training as it is integrated with performance management and would include those factors and actions that determine whether training is likely to work (get performance results) or not.

Training alone operates only to increase capability. But whether employees perform to the best of their capability or at some level less than their best is driven by a complex host of factors, typically and popularly lumped together under the rubric of the “performance management system” (see, e.g., Rummler & Brache, 1995). Although these factors may not be organized or even viewed as a systemic entity, they nonetheless operate as a system, either suppressing or enhancing employee performance. Although there has not been enough research on precisely how these and other factors enhance or impinge on training effect, we do know enough to be certain that there is more to achieving training effect than simply putting on good training programs. Tessmer and Richey (1997), in their summary of training effect research, demonstrate convincingly that training effect—defined as improved performance—is a function of learner factors, factors in the learner’s workplace, general organizational factors, and of course, factors inherent in the training program and intervention itself. This interdependence of training with the larger performance system has been amply supported as well by the earlier, thorough research of Tannenbaum and Yukl (1992).

It would be very convenient if training were the sort of mythical “magic bullet” that trainers and managers might wish for. How nice it would be if the typical practice followed in organizations really worked. Then all that would be needed would be to build or buy good training programs, schedule and encourage employees to participate in them, and all would be well. Although this is indeed the way many organizations go about training operations, it just does not work this way. When training is simply “delivered” as a separate intervention, such as a stand-alone program or seminar, it does little to change job performance. Most research on HRD shows that the typical organizational training program achieves, on average, about a 10% success rate when success is defined as training having contributed to improved individual and organizational performance (e.g., Baldwin & Ford, 1988; Tannenbaum & Yukl, 1992). That is, for every 100 employees who participate in training programs, about 10 of them will change their job performance in any sustained and worthwhile way. Unless one is satisfied with letting nearly 90% of one’s training resources go to waste, some sort of new approach is needed.

Organizations that are serious about achieving better results from training must not only work to improve the quality and convenience of training (or reduce the costs of ineffective training by moving it all into online formats) but must also work on the entire training-to-performance process, specifically on those nontraining factors of the performance management system that bear on whether training-produced skills get used in improved individual and business performance. Working on the entire process means involving all the players: employees, training leaders, the line managers of learners, and senior leadership. What is needed is an evaluation approach grounded, first and principally, in the reality that training effect is a whole-organization responsibility. Providing feedback to the HRD department or function is only partially useful and cannot be the sole focus of evaluation; the other players in the organization must also be engaged in the process of making training work.

An evaluation framework that responds to this whole-organization challenge should focus on three primary questions:

1. How well is our organization using learning to drive needed performance improvement?

2. What is our organization doing that facilitates performance improvement from learning? What needs to be maintained and strengthened?

3. What is our organization doing, or not doing, that impedes performance improvement from learning? What needs to change?

   These key questions should be embedded in an evaluation strategy with the overall purpose of building organizational capability to increase the performance and business value of training investments. This strategy is essentially an organizational learning approach aligned with the overall training mission, which is likewise to build organizational capability through learning. Implementing this evaluation strategy requires that evaluation focus on factors in the larger learning-performance process to engage and provide feedback to several audiences.

Figure 1 shows that evaluation inquiry is focused on the entire learning performance process, from identification of needs, to selection of learners, to engagement in learning, to the transfer of learning into workplace performance. Evaluation findings and conclusions are provided to the “owners” of the effect factors unearthed by the evaluation. The owners are encouraged to take action to nurture and sustain things that are working and to change things that are not. The ultimate goal of this evaluation inquiry is the development of the organization’s learning capability—its capacity to manage learning resources and leverage them into continuously improved performance.

The final portion of the figure reminds us that evaluation has a clear and constructive purpose. It is not self-serving and not defensive. Neither is it solely for the benefit of the HRD function. Like learning and performance services themselves, evaluation should be another tool to improve performance and business results. This also reminds us that the line management side of the organization and the HRD function jointly share responsibility and leverage for this capability. Neither party alone can assure success, nor can either alone take credit. The performance improvement process has learning at its heart, but learning and performance are inseparable. Learning enables performance, and performance enables learning. Evaluation of training, when embedded in a coherent and constructive strategic framework like the one presented, is an effective tool for organizational learning and capability building. It not only is consistent with the concept of shared ownership but is also a method for achieving and strengthening partnership.

The Success Case Method 

The Success Case Method (SCM) is a process for evaluating the business effect of training that is aligned with and fulfills the strategy discussed (Brinkerhoff, 2003). The SCM was developed to address the frustrations with other more traditional evaluation approaches. Specifically, the venerable Kirkpatrick (1976) model was not suitable because it does not include inquiry beyond a “training alone” view and has no focus on the larger performance environment. There have been other more elaborate and sophisticated evaluation approaches based on experimental design frameworks. Those approaches require incorporating control groups and other techniques for analyzing variance and partialling out causal factors in evaluating training. However, they are far too unwieldy and require time, resources, and expertise beyond the scope of the typical professional setting.

The SCM, on the other hand, is relatively simple and can be implemented entirely within a short timeframe. It is intended to produce concrete evidence of the effect of training (or the lack of it) in ways that senior managers and others find highly believable and compelling: it relates verifiable incidents of actual trainees who used their learning in specific behaviors that can be convincingly shown to have led to worthwhile results for the organization. Last, it is based on the assumption that when we find evidence of training effect, it is always a function of the interaction of training with other performance system factors. It does not attempt to isolate the effect of training, as doing so flies in the face of everything we know about performance-systems thinking and the inseparability of learning and performance.

If training has worked, it is because everyone on the performance management team has made contributions. The SCM seeks out and identifies these factors, so that credit can be placed with, and feedback provided to, the credit-worthy parties. If it has not worked, then the SCM pinpoints the weaknesses in the system and directs feedback to those who can address the problems. Above all, the SCM is intended to help all stakeholders learn what worked, what did not, what worthwhile results have been achieved, and most important, what can be done to get better results from future efforts. 

The Structure of a Success Case Method Study

An SCM study has a two-part structure. The first part entails locating potential and likely success cases—individuals (or teams) who have apparently been most successful in using some new capability or method provided through a training initiative. This first step is often accomplished with a survey, although it may be possible to identify potential success cases by reviewing usage records and reports, by accessing performance data, or simply by asking people. A survey is often used because it provides the additional advantage of extrapolating results to get quantitative estimates of the proportions of people who report using, or not using, some new method or innovation. Also, when careful sampling methods are employed, the probability estimates of the nature and scope of success can also be determined.
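To make the quantitative side of this first step concrete, the short sketch below shows, under an assumed survey layout and a hypothetical usage scale, how the proportion of trainees reporting use might be estimated from such a survey, with a rough confidence interval attached. It illustrates the idea only; the field names and ratings are not Brinkerhoff's actual instrument.

```python
import math

# Hypothetical survey records: one entry per respondent, with a self-reported
# usage rating (0 = no use, 1 = some use, 2 = used with concrete business results).
# Field names and values are illustrative only.
responses = [
    {"trainee_id": "T001", "usage": 2},
    {"trainee_id": "T002", "usage": 0},
    {"trainee_id": "T003", "usage": 1},
    {"trainee_id": "T004", "usage": 2},
    {"trainee_id": "T005", "usage": 0},
]

def proportion_with_ci(hits, n, z=1.96):
    """Point estimate and a rough normal-approximation 95% confidence interval."""
    p = hits / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

n = len(responses)
users = sum(1 for r in responses if r["usage"] >= 1)   # reported at least some use
p, low, high = proportion_with_ci(users, n)
print(f"Reported use: {p:.0%} of respondents (rough 95% CI {low:.0%}-{high:.0%})")
```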

The second part of an SCM study involves interviews to identify success cases and document the actual nature of success. The first portion of the interview is aimed at screening. In this part, we need to determine whether the person being interviewed represents a true and verifiable success. Upon verification, the interview proceeds to probe, understand, and document the success. This interview portion of the study provides us with stories of use and results. It is important that these interviews focus on gathering verifiable and documentable evidence so that success can be proven—supported by evidence that would “stand up in court.”
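The sketch below suggests one way the documentation gathered in these interviews might be structured. The record fields and example values mirror the probes described above (what was applied, what it was worth, what enabled it), but they are assumptions for illustration, not part of the SCM itself.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record for one documented success case.
@dataclass
class SuccessCase:
    trainee_id: str
    application: str                    # what the trainee actually did with the learning
    business_value: str                 # the claimed, verifiable result
    corroborating_evidence: List[str]   # records or witnesses that would "stand up in court"
    enabling_factors: List[str] = field(default_factory=list)  # e.g., supervisory support
    verified: bool = False              # True only after the screening interview confirms it

case = SuccessCase(
    trainee_id="T042",
    application="Applied the new diagnostic procedure during a customer installation",
    business_value="Avoided an estimated outage cost (figure would come from the interview)",
    corroborating_evidence=["service ticket (hypothetical)", "customer confirmation e-mail"],
    enabling_factors=["district manager coaching", "time set aside to practice"],
    verified=True,
)
print(f"{case.trainee_id}: verified={case.verified}, factors={case.enabling_factors}")
```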

Typically, an SCM study results in only a small number of documented success cases—just enough to poignantly illustrate the nature and scope of the success the program helped to produce. In our study of success with emotional intelligence training at American Express Financial Advisors, for example, we surveyed several hundred financial advisors. In the end, however, it was necessary to report only five success case stories to amply and fairly illustrate the significant bottom-line effect of the program.

Almost always, an SCM study also looks at instances of nonsuccess. That is, just as some small group has been very successful in applying a new approach or tool, there is likewise another small group at the other extreme that experienced no use or value. Investigating the reasons for lack of success can be equally enlightening, and comparisons between the groups are especially useful. In the case of successful use of laptops by sales reps, for example, we found that almost every successful user reported that his or her district manager provided training and support in the use of the new laptop. Conversely, a lack of such support was noted in almost every instance of nonsuccess. Identifying and reporting the apparent factors that made the difference between success and lack of it (notice I avoid the use of the word failure) can be a great help in making decisions to improve a program’s effect.

Foundations of the Success Case Method

SCM combines the ancient craft of storytelling with more current evaluation approaches of naturalistic inquiry and case study. It employs simple survey methods allowing extrapolation of estimates of breadth of training transfer and effect. It also employs the social inquiry process of key informants and borrows approaches from journalism and judicial inquiry. In a way, we are journalists looking for a few dramatic and “newsworthy” stories of success and nonsuccess. Once found, we seek corroborating evidence and documentation to be sure that the success story is defensible and thus reportable—that is, corroborated by evidence that makes claims of effect and value believable beyond any reasonable doubt.

Likewise, the SCM leverages the measurement approach of analyzing extreme groups, because these extremes are masked when the mean and other central tendency measures are employed. Training programs are almost never completely successful such that 100% of the participants use learning on the job in a way that drives a business result. Similarly, almost no program is ever a 100% failure such that no trainee ever uses anything for any worthwhile outcome. A typical quantitative method that uses mathematical reduction to derive effect estimates (a mean effect rating, for example) will always underrepresent the program’s best results and overrepresent its worst. Such is the tyranny of a mean or average: If you were to stand with one foot in a bucket of ice cubes and the other foot in a bucket of scalding water, on average you should be comfortable; in reality, you are suffering doubly!
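A tiny numeric illustration of this point, using invented ratings, shows how a mean effect rating can hide both of the extreme groups the SCM cares about:

```python
# Invented effect ratings (1 = no use or value, 5 = clear business result), chosen
# purely to illustrate how a mean masks the extremes.
ratings = [5, 5, 5, 1, 1, 1, 1, 3, 3, 3, 3, 3]

mean_rating = sum(ratings) / len(ratings)
successes = [r for r in ratings if r >= 5]      # candidates for success-case interviews
nonsuccesses = [r for r in ratings if r <= 1]   # candidates for nonsuccess interviews

print(f"Mean effect rating: {mean_rating:.1f}")            # a middling, unremarkable number
print(f"Exceptional successes: {len(successes)} of {len(ratings)}")
print(f"Clear nonsuccesses:    {len(nonsuccesses)} of {len(ratings)}")
```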

The SCM achieves efficiencies by purposive versus random sampling, focusing the bulk of inquiry on only a relative few trainees. The underlying notion is that we can learn best from those trainees who either have been exceptionally successful in applying their learning in their work or have been the least successful.

Because the SCM searches out and inquires into the best learning applications and results of training, some have suggested that it shares a commonality with the emerging evaluation approaches known as appreciative inquiry, or AI (see, e.g., Whitney, Trosten-Bloom, & Cooperrider, 2003). To some extent, this is true, for both the SCM and AI seek the best applications and share a fundamental basis in human and organizational learning. Both approaches also aim at the longer-range goal of improving organizational effectiveness. But the SCM was developed independently of the AI approach and diverges from AI in that the SCM also inquires into least successful incidents. More important, AI is more than an evaluation approach, moving toward a theory of organization and management. The SCM has more modest aims: to build organizational learning about training and performance so that investments in learning can be more effective, accomplishing this goal through evaluation of training successes and failures.

The Success Case Method Process

The SCM approach has two major steps. First, a brief survey is sent to a large representative sample of all trainees who participated in training. In essence, this survey asks one key question (although we may use several items to ask the question): “To what extent have you used your recent training in a way that you believe has made a significant difference to the business?” From the survey, we identify a small group of exceptionally successful, and unsuccessful, trainees. Then, in the second step, each of these small core samples is probed in depth, typically through telephone interviews. In probing the successes, we want to (a) document the nature and business value of their application of learning, and (b) identify and explain the performance context factors (e.g., supervisory support, feedback) that enabled these few trainees to achieve the greatest possible results. With the unsuccessful trainees, we aim to identify and understand the performance system and other obstacles that kept them from using their learning. The Success Case study produces two immediate results:

1. In-depth stories of documented business effect that can be disseminated to a variety of audiences within the organization. These stories are credible and verifiable and dramatically illustrate the actual business effect results that the training is capable of producing.

2. Knowledge of factors that enhance or impede the effect of training on business results. We identify the factors that seem to be associated with successful applications of the training and compare and contrast these with the factors that seemed to impede training application.

An SCM Case Study

The remaining part of the article presents and explains an SCM study conducted for SimPak Computers (a fictitious name for a global manufacturer and full-service vendor of information technologies). This case shows how the SCM was used to identify and document substantial business effect of training, how a significant proportion of training participants did not achieve valuable results, and how the causes for this partial lack of effect were isolated and addressed with the SCM. 

The Organization Context

SimPak is one of several major computer equipment and service providers that operate worldwide. The organizational unit in which the evaluation was conducted was a part of the services division that both sold large computer server systems to customers and provided service on a contracted basis to maintain the servers and assure their effective and efficient performance to customers. This area of the company was highly important, as it generated large revenues and profits. Customers in this division were highly visible and important; they required and expected flawless service and could not afford an outage due to faulty equipment or poor service.

Service technicians in the technical services area were the trainees involved in the program under evaluation. The technical services area was a part of the larger field services division. Whereas the field sales portion of the services area reported to a different vice president, field sales and technical services needed to work closely together to retain customers, sell continuous upgrades, and install and maintain customers’ equipment. Technicians were assigned to a certain set of customers and it was their responsibility to keep the customers’ systems working well and keep the customers fully satisfied. Meanwhile, SimPak was constantly engaged in research and development. New products and technologies and new upgrades were constantly introduced to stay ahead of the competition. This meant that technicians were in constant need of training and retraining, as new equipment and new modalities for existing equipment were introduced. 

The Training Context

The training program was a 2-week residential course. The purpose was to teach technicians how to install and initialize the Maxidata server (a fictitious brand name for the company’s largest and newest server) and its associated hardware, including some highly complex peripheral equipment that enabled the server at the customer’s site to communicate with other internal server elements and external databases. Initialization of the hardware boxes was crucial, as without proper initializing codes and settings, they would not operate effectively or at all. Initialization is a complex process taking a minimum of several hours, during which certain parts of the client’s server must be shut down temporarily.

The course itself included technical presentation of complex material and an intensive set of application exercises on a simulator that enabled participants to practice initialization of various hardware and configurations. The simulation portion of the course was vital to success, as this was where the technicians practiced the various complex procedures. A key portion of this instruction was practice in using a large technical manual, particularly in using the manual as a reference guide to input troubleshooting codes and in interpreting the many different code variations the server could report back in response to each test procedure. Importantly, attendance in the course was limited because of its reliance on the practice simulator; usually no more than 16 technicians could take the course at one time. The cost of the server was prohibitive, and sales had informed the training department that they were lucky to have the server for training at all. In fact, there had been suggestions from training skeptics to tear it out and sell it to a customer; these suggestions were barely fended off by the training department. In short, training capacity was limited and would stay limited.

Several other issues and factors related to the training context are important for understanding the case.

Class availability. As noted, the course duration was 2 full weeks. However, because the simulator needed to be recalibrated and reset for each new class, it was not possible to conduct a class every 2 weeks. Further, the simulator system was also in demand by research and development that needed to use it to test new versions of Maxidata equipment. These factors, and other field service demand and operational cycles, conspired to allow only 16 classes to be conducted each year. Thus, there was always a waiting list for the class, and it was not unusual for a technician who needed the training to wait several months. 

Participant enrollment. Participants for the course were nominated by their territory manager. Admission was granted on a first-come, first-served basis, except that course administrators would defer admission to any second applicant from the same service territory because of the space limitation. The rationale was that no single territory could use up two seats in the course if there were others waiting to participate from other territories. Territory service managers were allocated only a limited training budget and limited access to this and other courses.

Political tensions. The client for the evaluation was the training department in the field services area, the business division responsible for providing support services to customers who had purchased the SimPak servers. This department paid for the training, and if there were problems with the training, it was the first to receive complaints. If a customer was not satisfied with service, field sales were at risk, as unhappy customers were unlikely to remain customers. Typically, expressions of customer dissatisfaction were blamed on the other division: if service received a complaint, they invariably said it was the fault of sales for making unrealistic promises about installation and service or overselling system capabilities; if sales received a complaint, they always said it was service’s fault. Whenever a problem really was due to a failure of service, the training department was quick to get the blame, accused of providing inadequate training, inadequate access to training, wrong information, and so on.

Divided ownership. The training program was “owned” by a separately administered technical support division that had been part of a previously acquired organization and reported to a different part of the complex SimPak organization. This division designed the training, hired and managed the training staff, and maintained the simulator. It felt overmanaged and micromanaged by the training department that owned the budget. The training department, on the other hand, being in the same division as the field sales and field support functions, felt all the pressure to make the course successful and deliver results, yet it had no real control over the program and could not influence it other than by manipulating the budget.

In response to complaints from field sales about the effectiveness of the training, the training department decided it was time to commission an evaluation. The evaluation was intended to settle the question of whether or not the course was good enough: Did it work well and serve the business needs it was intended to serve?

Preparation and the Impact Model 

The first task was to understand the expected goals for the course and determine how it was intended to address business needs. Based on discussions with a number of key stakeholders, we decided to include in the evaluation all of the service technicians who had completed the course in the past 10 months. This went back in time far enough to allow a reasonable number of course completers to be covered by the evaluation. Twelve classes were identified, totaling 172 participants.

Based on the discussions with several stakeholders, particularly the course owners and the manager of technical support, we created and confirmed the Impact Model. It is presented in Table 1.

The Impact Model is simple, despite the complexity and technical nature of the training. For purposes of an SCM evaluation, there was no need for us to have a detailed understanding of the course content. We did need, however, to understand how the learned skills would be applied, and because this involved technical applications, we did make sure that we knew the names of the several products involved (the Maxidata server and the several pieces of peripheral equipment). We further learned what general function each piece served. 

Planning the Survey

Names and e-mail addresses of all participants who had completed the course in the past 10 months were readily available. With few training application opportunities, there was no need for a long list of training application items. Further, there was a good deal of demographic information available that was keyed to the participants. Thus, given a participant name, we could identify the customers served by that service representative, the geographic region, and related information. All of this combined to allow a very brief survey instrument that could be administered by e-mail. 

Evaluation Results

Of the 172 participants surveyed, completed surveys were received from 127 participants (74%). To test for possible response bias, a random sample (12 participants) of the 45 nonrespondents was surveyed by telephone. Of the 127 respondents, 77 (61%) indicated that they had made at least one application of the Maxidata course learning in a customer service situation. Surprisingly, 50 respondents (39%) indicated that they had made no use of the Maxidata course learning since completing the course.

Possible survey response rate bias. In SCM studies, the survey tends to draw responses from those who have used the learning, whereas nonrespondents tend to be those with little or no learning application. Calls to a random sample of 12 nonrespondents found only one additional person who reported usage of the training, and the outcome of that one instance was rated by the service representative as less than fully successful. This led us to believe that the nonrespondents largely represented those who had not made much successful use of their training. Although we did not add all of these people into the reported nonusers, we felt very comfortable reporting that the 39% who did not use the training at all was probably an underrepresentation and that the actual proportion of trainees who made no use of the training was greater than 40%. For purposes of simple and clear communication, we reported a nonuse estimate of 40% in the remainder of our reporting documents and activities.
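The back-of-the-envelope arithmetic behind this reasoning can be sketched as follows, using the figures reported above. The adjustment assumes (as the follow-up calls suggested but did not prove) that nonrespondents used the training at roughly the rate found in the 12-person sample, which is precisely why the study reported only the conservative 40% figure.

```python
# Back-of-the-envelope response-bias check using the figures reported above.
surveyed, responded = 172, 127
users_among_respondents = 77                                       # at least one application
nonusers_among_respondents = responded - users_among_respondents   # 50, about 39%

nonrespondents = surveyed - responded          # 45
followup_sample, followup_users = 12, 1        # random phone sample of nonrespondents

# Assumption (not a reported figure): nonrespondents used the training at roughly
# the rate seen in the small follow-up sample.
nonuse_rate_nonresp = 1 - followup_users / followup_sample
adjusted_nonuse = (nonusers_among_respondents
                   + nonrespondents * nonuse_rate_nonresp) / surveyed

print(f"Nonuse among respondents only:         {nonusers_among_respondents / responded:.0%}")
print(f"Adjusted estimate across all trainees: {adjusted_nonuse:.0%}")
# The study itself reported only the more conservative round figure of 40%.
```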

Outcomes of service applications. The survey asked respondents to rate the outcomes of the service they provided using their Maxidata training. Of those who reported using their Maxidata learning with a customer, all claimed a positive result, and follow-up phone interviews confirmed the success cases. We could not find any substantial differences in the nature and value of the service outcomes achieved among those respondents who reported adequate outcomes, those who reported very successful outcomes, and those who reported something in between. There was no doubt from the findings that the training, when applied, succeeded in satisfying the customer. But such results were claimed by only about 60% of respondents, and given the nonresponses, it was likely that even this 60% estimate was exaggerated.

Evidently, the training was working well when it was being used. However, there was a large proportion of the trainees who were making no use of their training at all. Had this been a mandatory, soft-skill training, this finding would not be surprising. Yet, this was technical training for a very specific purpose: initializing and servicing newly installed customer computer equipment. And, it was very expensive training that took a service technician out of the field for a full 2 weeks. The cost of training all of these nonusers was in the hundreds of thousands of dollars. Given the pressure to serve customers and save money in a market of declining profits, the question would be, Why would people go through training and not use it—not even once?

The Rest of the Story

Our interviews of the nonusers (nonsuccess cases) quickly cleared up the mystery. The explanation was simple, although its causes were quite unreasonable. Technicians who did not use their skills reported that they had no customers who had purchased the DUCK box peripherals; thus, there was no need or opportunity for them to apply the course skills. The next question was, “Why did you attend the training?” The interviews revealed the following answers.

Sales projections were viewed as unreliable. Even though no customer was scheduled for a DUCK box sale and installation, district managers wanted someone on their staff to be ready with the skills, just in case. Further, district managers knew there was a waiting list for the course, so they placed a technician on the list to assure a placement. Again, this was just in case they had a need because of a customer purchase. The waiting list was long, partly because so many technicians who did not need the course were taking it anyway. This was made worse by some technicians re-enrolling for the course, because when they first took it, they had no customer with the product, but then when they did have a customer, so much time had passed that they needed the training again.

In sum, the course was highly effective. It taught the skills well, and when the skills were needed, virtually every technician was able to fulfill customer expectations. We uncovered and documented success cases in which a trained technician “saved the day” by quickly solving a problem that would otherwise have led to a disastrous customer outage. In one success case, a technician was confronted by 30 inoperable DUCK boxes. The customer was the NASDAQ stock exchange. Had the outage gone even minutes longer than scheduled (service was temporarily transferred to an airline’s server), the entire stock exchange would have crashed, causing millions of dollars in losses. But the technician recalled exactly how to look up the trouble codes, discovered a coding error in the delivered equipment, made quick diagnoses and repairs, and got the server working before a crash could occur.

It is clear that the training was contributing to a very positive ROI when the worth of outcomes achieved was compared to the costs. But the 40% nonuse rate was seriously undermining the cost-effectiveness of the course. Worse, the strange practice of enrolling people who could not use the training caused a severe backlog, risking that a person who needed it would not get it, thus leading to a potential customer outage and/or complaints.
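A rough, purely illustrative calculation of what such a nonuse rate can imply is sketched below; the per-trainee cost is an assumption invented for the sketch, not a figure reported in the study, while the class capacity and annual class count come from the case description above.

```python
# Purely illustrative cost arithmetic for the nonuse problem.
classes_per_year = 16
seats_per_class = 16
assumed_cost_per_trainee = 5_000   # hypothetical: delivery cost plus two weeks out of the field
nonuse_rate = 0.40                 # the rate reported by the evaluation

trainees_per_year = classes_per_year * seats_per_class
wasted_spend = trainees_per_year * assumed_cost_per_trainee * nonuse_rate
print(f"Estimated annual spend on training that is never applied: ${wasted_spend:,.0f}")
```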

Although it took several months of discussions between divisions and functions, we were finally able to convene a phone conference among the senior players in the affected divisions to discuss the findings and conclusions. They reached a decision to redesign and tighten enrollment procedures so that only those technicians who had a true need could get access to the training. There was certainly some business value in some degree of overtraining, as this would assure that the “just in case” scenario would not leave a customer unserved. To address this issue, they agreed that, in the event a service area had a customer with the DUCK boxes but no trained technician, a trained technician could be loaned from another district with no loss of incentive or bonus pay or any other such financial penalty to either district involved.

The result of this SCM project was that the client gained a large increase in the ROI of a training investment, which in turn led to greater assurance of effective customer service, all without changing any aspect of the training program itself. When viewed from the perspective of the larger learning-performance system, the training-to-performance process had some serious flaws, but it took a more systemic inquiry to uncover them and get the right information to the right people.

Concluding Remarks

The SCM assesses the effect of training by looking intentionally for the very best that training is producing. When these instances are found, they are carefully and objectively analyzed, seeking hard and corroborated evidence to irrefutably document the application and result of the training. Further, there must be adequate evidence that it was the application of the training that led to a valued outcome. If this cannot be verified, it does not qualify as a success case.

Almost always, an SCM study turns up at least some very worthwhile applications of training that lead to results worth well more than the cost of the training. Equally often, however, there are large numbers of participants who did not experience such positive results. These stories become rich ground for digging into underlying reasons. When the impediments to effect are compared with the factors that facilitated effect, a coherent pattern typically emerges, pointing directly to changes in the training and performance environment that could lead to greater effect. That is, because we know from the success cases what the training effect is worth when it happens, we can make an economic argument for what it would be worth to get more effect and compare this to the costs of what it would take—in terms of changes to related systems and factors—to get that enhanced effect. In this way, an SCM study opens the door to performance consulting, giving the HRD practitioner greater strategic access and leverage to make a difference while at the same time helping clients build their capability to get more effect and return on their training investments.
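The kind of economic argument described here can be sketched in a few lines; every figure below is a hypothetical assumption inserted only to show the shape of the comparison, not a result from any SCM study.

```python
# Hypothetical version of the economic argument described above.
trainees = 200
value_per_success = 20_000         # assumed worth of one documented successful application
current_success_rate = 0.15
target_success_rate = 0.30         # assumed gain from fixing performance-system factors
cost_of_system_changes = 150_000   # assumed cost of manager coaching, enrollment fixes, etc.

incremental_value = trainees * (target_success_rate - current_success_rate) * value_per_success
print(f"Incremental value of the higher success rate: ${incremental_value:,.0f}")
print(f"Cost of the supporting changes:               ${cost_of_system_changes:,.0f}")
print(f"Net case for acting on the findings:          ${incremental_value - cost_of_system_changes:,.0f}")
```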

References 

Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63-105.

Brinkerhoff, R. O. (2003). The success case method. San Francisco: Berrett-Koehler.

Kirkpatrick, D. L. (1976). Evaluating training programs. New York: McGraw-Hill.

Rummler, G. A., & Brache, A. P. (1995). Improving performance: How to manage the white space on the organization chart (2nd ed.). San Francisco: Berrett-Koehler.

Tannenbaum, S., & Yukl, G. (1992). Training and development in work organizations. Annual Review of Psychology, 43, 399-441.

Tessmer, M., & Richey, R. (1997). The role of context in learning and instructional design. Educational Technology Research and Development, 45(3), 85-115.

Whitney, D., Trosten-Bloom, A., & Cooperrider, D. (2003). The power of appreciative inquiry: A practical guide to positive change. San Francisco: Berrett-Koehler. 

Robert O. Brinkerhoff is a professor of education at Western Michigan University. Actively involved in HRD evaluation research and practice for several decades, he is the author of 10 books in HRD. He is a well-known HRD international speaker and consultant.

Brinkerhoff, R. O. (2005). The Success Case Method: A strategic evaluation approach to increasing the value and effect of training. Advances in Developing Human Resources, 7 (1), 86-101.


DOI: 10.1177/1523422304272172

Copyright 2005 Sage Publications

http://aetcnec.ucsf.edu/evaluation/Brinkerhoff.impactassess1.pdf

http://ipttoolkit.wordpress.com/2011/02/08/brinkerhoffs-success-case-method-an-ipters-dream/

http://blogs.ubc.ca/evaluation/files/2009/02/success20case20method.pdf

==================================================================

To discuss how these solutions will add value for you, your organization, and/or your clients, affinity/resale opportunities, and/or collaborative efforts, please contact:

Tom McDonald, tsm@centurytel.net; 608-788-5144; Skype: tsmw5752

training, McDonald Sales and Marketing, LLC