Written by: Holly Macdonald
Date: March 8, 2011

How do you assess whether your informal learning, social learning, continuous learning, or performance support initiatives have the desired impact or achieve the desired results?

I have found myself agreeing very much with the responses from both Jay Cross and Clive Shepherd, but as I read their posts, I found myself wanting to know how we help "convince" the powers that be (TPTB) that informal learning is a viable option. So, for me, the term assessment needed a finer point: I want to know why we are assessing.

Are we trying to…

  • track if someone learns something without a formal learning intervention?
  • link performance change to something – whatever that happens to be – some kind of learning: a course, wiki, podcast, or seating arrangement to encourage cross-pollination…
  • convince TPTB to include more investment in informal learning support?
  • make the case that we shouldn’t put people through a formal training course if they don’t need it?
  • change the balance of formal vs. informal in our organizations?
  • create more support in our organizations for non-formal interventions?
  • effect change in our industry?
  • determine whether we are assessing at the performance level, the program level, or the systemic level?

I guess, the term “desired results” is the heart of this question for me.

Here are three desired results and my very high-level response on how to assess whether your “initiative” has achieved each one…

  1. If your goal is performance driven, then you’ll want to identify direct and indirect outcomes from changes in performance and come up with a realistic way of attributing them (think: based on the changes we’ve seen, we think 20% of this performance can be attributed to the “intervention” – a rough sketch of that arithmetic follows this list). This, by my unsophisticated definition, is kind of like an accountant’s view – was it worth it? What was your profit/loss on this? This is the ROI stuff that I think most of the discussion is often about.
  2. As an ISPI groupie, I am very big on conducting a cause analysis, and in many ways it would reduce training (formal or otherwise) because we could demonstrate that the cause is NOT a skills gap. We could also measure it up front and show that you could save the organization money by investing in a solution that will impact the desired results, not one that won’t. Kind of a cost-avoidance situation, in financial parlance.
  3. If you are more interested in changing your organization’s perspective, you might take a macro view, more like a financial adviser. Here, you might show your “investment portfolio” (think: the portfolio shows you are in a risk-averse investing pattern… and diversifying your investments will result in a higher return).
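
To make the accountant’s-view arithmetic in point 1 concrete, here is a minimal sketch of an attributed-ROI calculation. All of the numbers (the dollar value of the performance gain, the 20% attribution share, the cost of the intervention) are hypothetical placeholders I’ve made up for illustration, not figures from any real initiative – the hard part in practice is defending the attribution rate, not the arithmetic.

```python
# Rough sketch of attributed ROI for an informal learning "intervention".
# All values below are hypothetical, for illustration only.

performance_gain_value = 50_000  # estimated dollar value of the observed performance change
attribution_rate = 0.20          # share we believe is attributable to the intervention (the "20%" guess)
intervention_cost = 6_000        # what the informal learning support actually cost

attributed_benefit = performance_gain_value * attribution_rate
net_benefit = attributed_benefit - intervention_cost          # the "profit/loss" view
roi_percent = (net_benefit / intervention_cost) * 100

print(f"Attributed benefit: ${attributed_benefit:,.0f}")
print(f"Net benefit (profit/loss): ${net_benefit:,.0f}")
print(f"ROI: {roi_percent:.0f}%")
```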

I would think that we should really be looking at many facets of assessment. This is a big, messy, potentially juicy area to explore. When you add in the learning vs. performance distinction, there are many ways to skin this proverbial cat. For example, I signed up for LAK11 – Learning Analytics (the Google Group is here – you may be able to access more information through it) because I was interested in seeing what we might do with assessment and analytics moving forward. George Siemens wrote a post about it in 2010.

Note: As I went to link to Jay’s post, I saw that he had read Dan Pontefract’s post about his views of the “Kirkpatricks” (Dan’s post made me think they were some kind of learning mafia, like the Covey family, the Blanchards, the Robinsons, the Hortons, etc.) and Jay has written another post which expands the thinking into measuring events vs. processes. His posts are better than mine, but isn’t it great that through the BQ I can “converse” with real thought leaders like Jay?