|When someone finishes a training class they are usually prepared to begin using the skills they have acquired. They have learned what they needed to learn. But will they begin using the skills back on the job, especially if the new skill requires unlearning former behavior patterns in favor of new ones?
Edgar Schein, an author and consultant on change management, points out that when a person is confronted with the prospect of learning something new, a “learning anxiety” is created. This anxiety interferes with applying the newly learned skill. The list below contains some of the more common sources of learning anxiety, each with a suggestion or two for learning professionals to reduce or eliminate it (which Schein calls creating psychological safety).
It is up to those of us in the learning and development arena to help learners and their managers capitalize on survival anxiety (the felt need to change in order to succeed) and reduce learning anxiety!
Until next time…
Many end-of-training evaluations ask participants to respond to a statement like this: “I plan to (or will be able to) apply this training to my job.” Have you ever wondered whether this is an accurate prediction that the training will stick? Technically, this is referred to as intention to transfer.
Several research studies have compared intention to transfer with actual use of skills on the job. In each case there was a fairly strong connection between intention to transfer and actual transfer. But —
- Don’t people often tell us what we want to hear (or what they think we want to hear)?
- How many people have kept their new year’s resolutions? (How many of us remember what they were?)
- Aren’t people’s perceptions of their own behavior often different from what other people see?
The answer is yes, people often tell us what they think we want to hear, but this “socially desirable response” (SDR) bias has really only been studied in relation to personal habits such as healthy food choices and substance abuse. There is no evidence that SDR plays a part in assessing workplace learning or the intention to use it. And yes, people’s perceptions of their own behavior are sometimes different from what others see. But just because learning doesn’t show up in observable behavior doesn’t mean it hasn’t stuck, particularly with leadership and soft skills training, where behavior changes may be subtle and observed by only one or two individuals.
So – is it useful at the end of a class to ask your trainees how they intend to use what they have just learned? Definitely. While a few people may not accurately indicate what they intend to do to apply what they have learned, multiple research studies have found that for the majority of trainees, particularly in soft skills training, those who report their intention to transfer specific skills actually do it.
In addition to asking about intent to use in end-of-class evaluations, here are some more ideas:
- Incorporate it into an action planning activity. (See this prior Sticky Note for a closer look at action planning and Making Training Stick®.) Have participants develop their action plan, then follow it with a reflection activity on their intention to carry it out.
- Email participants 2-3 weeks after training and ask: “What have you done as a consequence of the training?” and “If you have not started yet, what do you intend to do?” Note: In one study, the trainer sent the follow-up email to each participant’s managing director, who then sent it out; the response rate was very high.
- Repeat the above email 6 weeks–3 months post-training. Ask the same questions and compare the responses.
- Large number of trainees? Develop a short questionnaire with multiple-choice responses – no more than 5-8 questions. Each question would cover a key learning point from the training, with response choices from 1 to 5, and would have two parts: To what extent are you using this skill/learning point? If you haven’t used it yet, to what extent do you intend to use it?
- If it’s not possible to “boil down” to 5-8 specific questions, send more than one questionnaire. Just because the learning content is grouped into one learning event doesn’t mean the feedback and evaluation on it has to be.
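As a rough sketch of the two-part questionnaire item described above (the learning point, wording, and responses here are all hypothetical), the tallying could look like this:

```python
# Hypothetical two-part item for one key learning point (both parts rated 1-5)
learning_point = "Opens coaching conversations with open-ended questions"
part_1 = "To what extent are you using this skill?"
part_2 = "If you haven't used it yet, to what extent do you intend to use it?"

# Illustrative responses: (using_now, intend_if_not); None means that part was skipped
answers = [(4, None), (1, 5), (None, 4), (5, None)]

# Count respondents rating 3 or higher on each part
using = sum(1 for u, _ in answers if u is not None and u >= 3)
intending = sum(1 for _, i in answers if i is not None and i >= 3)
print(f"Using the skill: {using}; not yet using but intend to: {intending}")
```

Comparing the two counts separates trainees who have already transferred the skill from those who still intend to.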
Remember, when trainees tell you they intend to transfer what they have learned, they usually do it. That’s pretty cool!
Until next time…
As I suggest in these publications, “sticky objectives” should replace traditional instructional objectives for trainees and for their managers. Instructional objectives can be helpful for trainers to use in instructional design; however, before-training and beginning-of-training objectives should specify what the participant should know how to do and be able to do after training, on the job. These “sticky objectives” signal to participants what they should do with what they are learning.
Transforming instructional objectives into sticky objectives usually involves just a few subtle yet specific changes in wording. The action verbs below, which are linked with the application level of Bloom’s Taxonomy,** can help transform your instructional objectives into sticky objectives that your participants, their managers, and senior leaders see before, during, and after the training. Naturally, the specific wording will depend on the skills/information being learned.
Start by taking each instructional objective for your training. Delete “at the completion of this class” if this phrase or a similar one is there. Replace it with “In your job” or “on the job”. Then add an action verb from the list below, and complete the sentence with a description of the skill the participant should use on the job:
For example, replace “Upon completion of this training, you should be able to…” with “In your job, you should be able to identify the most appropriate leadership style for a particular employee and use it to obtain desired results.”
Note: In some cases you may want to customize some objectives for particular groups of participants. For example, in a management training program a group of supervisors on the shop floor may have slightly different objective(s) than a group of sales supervisors.
If you start with action verbs like these, the rest of the objective will usually fall into place.
Action Verbs based on Bloom’s Taxonomy application level
Special thanks to Julie at the Association for Iowa Continuing Nursing Education Fall Conference for suggesting this topic.
Until Next Time…
**P.S. Where did Bloom’s Taxonomy come from? In 1956 a team of faculty members at the University of Chicago under the leadership of Benjamin Bloom was seeking to help educators move beyond rote learning of facts. They developed a taxonomy, or levels, of learning. These levels of learning are frequently used in educational settings including workplace learning. The levels are: knowledge (recall), comprehension (understand), application (use), analysis (analyze), synthesis (combine or create), and evaluation (judge or assess). Bloom and his team also identified three domains of learning: cognitive (thinking and evaluating), psychomotor (physical and perceptual), and affective (feelings, preferences, and values). For more information from a variety of sources, Google Bloom’s Taxonomy.
Donald Trump’s Trump University has been in the news lately. Allegations are that while the organization purported to be an educational institution providing “real estate secrets” from Donald Trump, in reality it was a sales organization focused on selling products and services. Several lawsuits are pending, and Trump’s political opponents point out that statements and claims by Trump himself about the “university” have proven to be false. Human resource development professionals take note: this seems to be simply a politically charged, high-profile example of level 1 vs. level 3 evaluation. They liked it, but could they use it, and did they?
Trump claims, according to a November 2015 article in Time magazine, that 98% of the students who took the $1500 real estate course gave it “rave ratings.” He has copies of the end-of-class surveys students completed to back up claims that students believed they got something of value in the classes. Attorneys for the plaintiffs, that is, students who took the class and are now suing, state that the survey results were not credible because they were not anonymous and were completed at the end of the class — level 1, in Kirkpatrick terminology. Trump’s lawyers have pointed out that many of the students filled out favorable end-of-class questionnaires and later “changed their minds”. They changed their minds about the value of the class when they tried to apply what was taught – level 3.
While most learning evaluations are not as high-profile as this one, it provides an opportunity to pause and take note:
- Do you conduct level 1 end-of-class evaluations of all classes? If you do, as 97% of workplace learning professionals do, do you use them to gauge the success of the class? How is learning success defined in your organization? Sandra Merwin, noted workplace learning consultant and now artist, did some research years ago that found a negative correlation between level 1 “likes” and level 3 “applies”. In other words, individuals who did not give high “like” marks at the end of a class were more likely to apply what they learned, and those who did give high “like” scores were less likely to apply it. The explanation might be that when prior knowledge is challenged, it sets up an uncomfortable creative tension. Just one study, but something to think about.
- Consider surveying participants and perhaps their managers three months post-training to see how they’re applying what they learned. Doing this will likely provide more value than end-of-class, level 1 evaluations. My Sticky Note Easy Level 3 Evaluation provides useful tips for doing this.
- For completed classes with level 1 evaluation data available, conduct a level 3 evaluation and compare the results. Some LMSs can provide graphic comparisons. You may be surprised, pleasantly or otherwise.
- Add post-class reinforcement to increase application of what has been learned. Recent transfer technologies make doing this less time-consuming and easier for learners. For an overview of available technologies, see my Training Transfer Technologies white paper. My books Making Learning Stick and Making Elearning Stick provide evidence-based, easy-to-use techniques for both before and after learning events to increase application of learning.
While most learning evaluations don’t capture media attention the way Trump University has, the recent publicity provides a helpful reminder that our end goal in workplace learning is not for participants to like the training but for them to use it, to make the training stick.
Until Next Time…
Multitasking is rampant today – almost everyone does it at one time or another. I attended a workshop recently and soon after it began I noticed that at least one-third of the 100ish participants were tapping away on laptops, tablets and/or phones (yes, a few were using more than one device). I got up and peeked around. No one seemed to be taking notes on the lecture. They were reading and responding to emails or texts, posting on social media, and/or surfing the internet.
Is multitasking bad? Does it interfere with learning and retention? In study after study, test scores for people who multitasked in learning environments were significantly lower than for those who did not, or if scores were comparable, the multitaskers required considerably more time to achieve the same learning outcomes. One study found that people who sat close to multitaskers in the classroom had lower levels of learning even though they themselves weren’t doing it. Even worse, in one study multitaskers who performed poorly on a learning assessment believed they performed as well as their non-multitasking peers.
What’s a trainer to do? Neuroscientists tell us that most people’s maximum attention span is about 20 minutes. As attention begins to drift the temptation to multitask increases, and the digital device(s) beckon.
A trainer can and should vary the activities every 20 minutes: small group discussions, Q&A, application exercises, a video, and inward reflection activities are popular choices.
Most trainers ask participants to turn off phones at the beginning of training and to close out all unnecessary websites. E-learning designers should consider adding a slide with this message at the beginning of each online module.
Consider also these strategies based on current neuroscience research to reduce participants’ tendency to multitask:
- Tests and quizzes. Whether printed or on a slide, response required or rhetorical, studies show that when people are tested on material, especially when they know they will be tested, they have higher levels of learning and retention. Flash a few quiz questions on the screen or pass out a written quiz. Quiz features in most e-learning software make this a snap. Be sure to make the test reasonably challenging and base it on the learning objectives.
Let participants know in advance they will be tested, or give a “pop quiz” without advance notice. People will soon figure out that they are going to be tested periodically. Motivation to learn increases if testing results are recorded, which makes the participant personally accountable.
Tests and quizzes need not be confined to the learning module. Require completion of an after-class test as a condition for class credit. Or use a training transfer technology tool to deliver one question at a time. The newest revision of my Training Transfer Technologies white paper has an overview of the latest tools.
- Flip the classroom. Do you really need to deliver the content via lecture? All of it? The temptation to multitask is most compelling during lectures. Consider instead an online module, pre-reading, and/or video viewing ahead of time, followed by in-class discussion, practice, and application. Of course, a typical issue with pre-work is that some participants come to class without having done it. Guard against this after the first time by giving a test at the beginning of class or starting class with a small group discussion that draws on the pre-work content. Ask those who did not do well to spend their first break reviewing the material. (Have a few extra laptops and/or tablets available if needed.)
- Chunk content into smaller segments of 15-20 minutes. Obviously this will not work for every class, especially if participants travel to get there, but when possible, shorter training chunks (think stand-up meeting or short online module) mean participants will pay better attention and therefore learn more in less time.
- Tell participants about the research on multitasking and learning. Ask them to consider how they and the organization will benefit when they achieve the outcomes of the class, and how their multitasking might interfere with their learning goals.
- Use available tools on the technology platform. For virtual instructor-led classes, be sure to use the tools available such as raising hands. Prepare a question or two to ask an individual when the system notifies you that a participant is inattentive or has navigated to another screen. For self-paced e-learning, review key metrics such as time on slide and navigation and consider ways to shorten/revise the content.
These strategies probably won’t completely eliminate multitasking during training, but chances are the more of them you use, the more you will reduce it.
Until next time…
I enjoy the TV sitcom The Big Bang Theory. Apparently a lot of other people do too, because reruns seem to appear often – on several channels in different time slots. Recently I saw a segment where Sheldon teaches – or tries to teach – Penny physics. His teaching is a great example of how not to engage a learner, along with many other mistakes some instructors make.
Click here to watch the excerpt of the episode. While you’re watching, I hope you’ll think about what Sheldon should have done differently and what he could have done before and after the session with Penny that might have helped her apply her learning. Also think about your own teaching/training… I know I recognized a few things that I have to watch and keep myself from doing!
When you’ve finished viewing, read my comments below the clip about how Sheldon could’ve applied aspects of my Training Transfer Process.
I recently went on a hiking vacation in Utah with a group of like-minded people. Our guide was a professional wildlife conservationist and geologist. As we hiked along trails, he pointed out many different species of birds and trees as well as geology formations. This information was interesting but a lot to take in as I concentrated on breathing and avoiding boulders on the steep paths. When we encountered something we had seen previously, he would ask us if we could recall the name or something about it. Sometimes he would also use the opportunity to ask us a few questions about other things we had seen.
By the end of the trip I was able to identify 5 different types of pine trees, 3 types of hawks, and many different desert mountain flowers and plants. Pretty cool… considering that my goal was to hike and enjoy the scenery and I really didn’t care whether or not I learned the names of the flora and fauna.
Why was I able to have such good recall? Our guide, a former teacher, used a technique called “interleaving.” This teaching/learning technique involves mixing up recall and practice nonsequentially. It is the opposite of “block practice”, where lesson, practice, and recall are done all at once. Think about it this way: If you want to teach 3 learning points, A, B, and C, a block practice session would look something like this: AAABBBCCC. An interleaved practice session would look like this: ACBABCBAC (randomized).
Numerous studies support the effectiveness of interleaving vs block practice to achieve long-term learning and retention. (Interestingly, many people in studies who used interleaved practice performed worse than their counterparts using block practice during the practice session, but performed better when tested at a later date.)
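The contrast between the two practice schedules described above can be sketched in a few lines of code (the learning points A, B, C and the repetition count are just illustrative):

```python
import random

learning_points = ["A", "B", "C"]
reps = 3  # repetitions of each learning point

# Block practice: all repetitions of one point before moving to the next
blocked = [p for p in learning_points for _ in range(reps)]
print("".join(blocked))  # AAABBBCCC

# Interleaved practice: the same repetitions, mixed up nonsequentially
interleaved = blocked.copy()
random.shuffle(interleaved)
print("".join(interleaved))  # e.g. ACBABCBAC
```

Both schedules contain exactly the same practice; only the ordering differs, which is the point of the research comparison.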
Here are some suggestions for how to use interleaving in your training. Detailed descriptions of how organizations such as Farmers Insurance and Jiffy Lube use it in their training are in the book Make It Stick (good title!) by Brown, Roediger and McDaniel.
- Begin your teaching – whether live, virtual, synchronous, or asynchronous – with a story or two that illustrates the application of several learning points.
- As you present each learning point, relate it back to the story and point out how it was used, could have been used, and so on.
- As the learning module(s) progresses and you go on to other topics, randomly ask questions about one or more of the previously-presented learning points and relate back to the application story.
- After the learning event (class or elearning module) is finished, continue randomly quizzing via occasional emails. For more information about after-training technologies to do this, see my white paper Training Transfer Technologies.
- Consider using hard copy aids such as flash cards (digital versions could be created in PowerPoint) to randomize quizzing.
While my hiking guide used this technique intuitively (I asked him), many of us should do it intentionally. I have made some adjustments to classes I have designed to accommodate this technique. I’d like to hear from other trainers who use or are trying interleaving.
Note: The term interleaving is also used in computer disk storage, where it refers to arranging blocks of digital information on storage disks to improve speed of access.
Until next time…
Action plans have long been thought to support training transfer and make the training stick. Actually, the truth is… yes, no, and sometimes.
End-of-training goals signal what is important, they provide a sense of direction, and they can be a focus for evaluation and feedback on performance of the task(s). But simply asking participants to jot down their goals, or what they plan to do to apply their learning, will have limited effectiveness. Also limited in effectiveness is setting goals that are too difficult to reach. However, goals that focus on specific behavior outcomes can be very effective in producing on-the-job application of skills learned in training, particularly for complex skills. Also, when feedback is solicited from participants’ colleagues and participants know this will happen, action plans will spring to life.
Consider applying the following evidence-based enhancements to your current action planning segments. You will significantly increase how well participants transfer their learning. For additional ideas on action planning see my earlier Sticky Note.
- Develop specific behavior outcomes that identify skills to be applied. These behavior outcomes might replace the course objectives or they can be used to enhance them and make them more “actionable”. For related information on sticky class objectives, see my earlier Sticky Note.
Here are a few examples of behavior outcomes (there should be between 3 and 15 of them, depending of course on the length of the class): “Gives employees feedback on a regular basis” (supervisor training), “Calls the customer by name” (customer service training), “Uses current discussion boards for current issues and solutions” (software or help desk training).
- As part of the end-of-class action planning segment, provide participants with a list of the behavior outcomes that they should expect to apply. Either suggest that they choose among these for inclusion in their own action plans, or provide the action plan pre-completed with the behavior outcomes listed. Numerous studies have found that when action plans have specific behavior outcomes, participants are more likely to do them.
- Ask participants to supply the names/contact info for at least 2 coworkers, subordinates, or manager(s) they work with, who will be asked later for feedback.
- Survey these coworkers, subordinates, or managers 3 months after the training, using the behavior outcomes previously developed. Assure them the results will be confidential. Consider whether or not to share the feedback from the survey with the participant, and if so, how to protect the confidentiality of those who responded, particularly since there will be small numbers of respondents.
Online survey tools are helpful for providing anonymous feedback but may not be enough to assure confidentiality with a small number of respondents. Note: whether or not these individuals are actually surveyed may matter less than the fact that participants believe they will be surveyed.
The purpose here is to make participants accountable for applying the skills, but of course the information collected might be used for evaluation of the class or for follow-up training.
- Adapt for elearning in the following ways:
o List the behavior outcomes close to the end of the last module in the training.
o Provide an action plan form with behavior outcomes listed. Direct the trainee to fill in a date or milestone, and suggest that they print it for reference.
o Set up your LMS to send the survey mentioned above to participants’ managers (who are generally already loaded into the LMS).
o Consider developing an interactive action planning tool for participants.
Until next time…
Assessing whether learning is really being used on the job is challenging for many trainers. End-of-class level 2 evaluation is easy enough to do: the participant demonstrates skill or knowledge acquisition at the end of the training. We might also assess pre-training learning and compare. But determining whether skills and knowledge are actually being used on the job is another matter entirely.
The most straightforward way to do this is to simply ask participants if they are using what they learned and to what extent. But are we actually getting an accurate measure of whether a participant is using their learning? In a previous Sticky Note I mentioned neuroscience research that points out people often think they know, have experienced, or are experiencing something when in fact they have not. This illusion calls into question the practice of assessing on-the-job use of skills learned in training, by asking participants if they are using it. The key issues are:
- Are participants actually aware of how and in what ways they are using what they learned in a training class?
- Can participants distinguish between what they are applying from a particular class and what they are doing for other reasons, such as previous training, intuition, or trial and error?
- How much are these self-reports tempered with wanting to provide the appropriate response, to please the trainer, the boss, the organization?
- Studies of these types of self-reports indicate they are unreliable. One study found that self-reports were 35% more positive than reports by participants’ managers.

Given these limitations, consider the following approaches:
- In post-training reaction level 1 evaluation, ask participants about their “intention to transfer”, that is, whether they plan to use what they have learned, and how they plan to use it. Studies show there is a strong link between intention to transfer and later actual transfer.
- After training at a point in time when participants should have had an opportunity to use the training, ask their managers (a quick survey, or more detailed focus group) whether their employee is using the skills learned in training and how they are using them. Six weeks and three months post-training are popular times.
- To reduce the tendency to give the desired positive response, ask managers and participants specific behavior-based questions, known as Behavior Observation Scales (BOS). Assessed on a 5-point scale (1 = Almost Never, 5 = Almost Always), specific behaviors linked to class objectives are addressed. For example, for a class on coaching, one behavior observation scale item is “Provides feedback regularly”. A BOS item for a sales training class: “Reviews individual productivity results with manager”.
- Instead of – or in addition to – asking participants and their managers, poll a select group of individuals, perhaps one level above participants’ managers, who are in a position to see many participants’ on-the-job behavior. One study paired an HR rep with each of these individuals; the role of the HR rep was to assist the manager with completing the Behavior Observation Scales.
- Instead of assessing the learning application for every participant, assess a sample of participants. In general, 30% of the total number of participants should provide a reasonably accurate representation of all trainees in a particular training program.
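As a minimal sketch of the sampling and BOS scoring ideas above (the roster size, item wording, and ratings are all hypothetical):

```python
import random

# Hypothetical roster: sample 30% of 50 trainees for a level 3 follow-up
participants = [f"trainee_{i}" for i in range(1, 51)]
sample_size = round(len(participants) * 0.30)  # 15
sample = random.sample(participants, sample_size)
print(f"Surveying {sample_size} of {len(participants)} participants")

# Illustrative BOS ratings (1 = Almost Never, 5 = Almost Always) for one item,
# one rating per sampled respondent
item = "Provides feedback regularly"
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 4, 3, 5, 4, 2, 4]
print(f"{item}: average {sum(ratings) / len(ratings):.1f}")
```

Averaging each BOS item across respondents gives a simple, behavior-based indicator of how much of the training is being applied.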
Don’t rely on your “gut feelings” about whether trainees are using what they learn in training. Use popular, free survey software or features of your LMS to find out how much of your training is sticking!
Until Next Time…
PS: Join me at ATD (formerly ASTD) International Conference and Expo May 17-20. I will be presenting on Evidence-Based Techniques for Training Transfer.
I’ve recently been working on converting training/learning objectives to “sticky objectives,” and I’d like to share a few thoughts with you. As most of us know, good instructional objectives are essential for effective training and evaluation. A couple of points about instructional objectives before I continue:
- A good instructional objective should include 3 things: 1) the performance (what the trainee should be able to do after the training), 2) the condition (when, e.g. “when conducting a performance evaluation”), and 3) the criteria (how well). If the objective does not contain all three of these elements, it can’t effectively indicate the desired result of the training.
- Most instructional objectives are preceded by this phrase: “at the conclusion of the training, the participant will be able to:”
- It may not be useful to share the instructional objectives with the trainees. It may be more helpful to develop instructional objectives for use in the design process with trainers, program sponsors, and other “insiders,” and to write and publish objectives for use with trainees in the learning events that focus on specific job performance, such as “conduct an effective performance review,” “use the 6 key functions in Excel,” and “use the Situational Leadership model to identify the appropriate mix of direction and support for an employee.” In my experience, many trainees are intimidated by or just don’t relate well to instructional objectives (some trainers too, but that’s a different issue).
- What is the purpose or point of the training and of the training objectives? Is it to demonstrate knowledge or a skill or possibly even an attitude change at the end of the training? In most cases, the answer to this question is “no.” The purpose of most training is for trainees to apply certain knowledge, skills or attitudes to their jobs so that their performance is more effective in specific, targeted ways.
So if the purpose of the training is for trainees to use certain skills in their job performance,
- The objectives should be written to describe what trainees should be able to do, on the job, after the training. If we look at instructional objectives from this perspective, the performance, conditions, and criteria may not change. What will change, though, is the statement that precedes the objectives: instead of “at the conclusion of the training, the participants will be able to…”, substitute “in their job performance, the participants will….” Remember, our focus should be on what they will do, not what they can or will be able to do.

Making these simple adjustments in the wording of instructional objectives – and in the more general objectives shared with trainees – can keep trainer and trainee focused on the true goal of the training: on-the-job performance.
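As a lighthearted sketch, the wording change described above can even be automated. The two preamble strings come from the article; the function name and example objective are just illustrative:

```python
# The two preambles described in the article
INSTRUCTIONAL = "At the conclusion of the training, the participants will be able to"
STICKY = "In their job performance, the participants will"

def make_sticky(objective: str) -> str:
    """Swap the end-of-class preamble for the on-the-job one."""
    if objective.startswith(INSTRUCTIONAL):
        return STICKY + objective[len(INSTRUCTIONAL):]
    return objective

print(make_sticky(
    "At the conclusion of the training, the participants will be able to"
    " conduct an effective performance review."
))
# In their job performance, the participants will conduct an effective performance review.
```

The performance, condition, and criteria pass through unchanged; only the preamble is rewritten, which mirrors the "subtle yet specific changes in wording" the article recommends.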