Raising key questions, a quote published yesterday by HEQCO recommended testing of “soft” / “essential” skills (math, communication, problem-solving, critical thinking, teamwork…) upon entry to and graduation from PSE. (Read the news article at www.thespec.com/news-story/6327523-young-grads-need-to-brush-up-on-3-r-s-employers-say/)
The question of measurement is an important issue in Canadian PSE. So I ask two questions of the individuals within the PSE community whom I greatly respect: What do we really want to know? What would it mean to get measurement right?
First, why would we measure?
I can think of two reasons:
- Learners, period. Regardless of who sets the goals and processes of their learning, students care, educators care, and society cares. We may see success differently, but generally there is a continuation of learning, a growth in knowing and seeing, and a contribution to society through the gained ability to wrestle with big (and small) issues and tough (and daily) problems.
- Contributing to our understanding of teaching concerns and successes is an important role of measurement. Do students really arrive knowing “less math” now than 10 years ago? Is there a shift in language and conception that mismatches expectations? For those who like numbers, quantitative measurements can provide a more comfortable ground for addressing concerns and celebrating successes. For those who like narrative but talk with those who like numbers, conversations become more possible.
Second, what are we measuring?
- Knowledge or Expertise: I may know a formula or a close reading, and still not be able to apply it to a public debate about water sovereignty or to the creation of something new. Knowledge (especially at the recall or near-application level) that might be tested is not a substitute for assessing the thinking process, the reasoning and decision-making based on knowledge, and the nuanced application of knowledge to a complex situation that requires recognition of deep features rather than superficial ones.
- Specificity: Are we sufficiently measuring the relevant breadth of knowledge when we measure numeracy, literacy, writing… (information literacy… statistical literacy…)?
- Building on the work of others: How might the VALUE rubrics of AAC&U be adapted? What are the limits and tensions of measurement identified in the UK, Europe, Australia and the USA? What can we learn from colleagues coast to coast to coast?
“Knowledge” is like measuring foot size for running speed
Fundamentally, I think the question of measuring “knowledge” is misnamed, and in the misnaming there is a risk of mismeasurement. “Application” is insufficient too, as is “job-readiness”, for I don’t think anyone could be prepared for every task or job all of us do. Knowledge is likely related and likely necessary, but it is not sufficient for what we are seeking for our learners.
Instead, I think we are seeking the foundational concept of expertise in its many forms. We want learners to go from superficial snap trial-and-error that leads to incomplete or inappropriate answers, to seeing the real (deep) features of an issue, thinking through the process, making decisions about what matters and what to do, and then creating and applying in nuanced ways the ideas, facts, processes and capabilities of their disciplines. Whether it is applied to HR decisions, communication rhetoric, bridge spans, server backups, policy creation or critique, society needs more from our students and us than simply knowledge.
I think we need to be wary here… Policy-wise, there is a tendency for employers to blame universities and colleges and try to shift their training costs onto them, under the ideology that those institutions are the ones who should be training their employees. That is a relatively new ideological construct, and it is largely impossible to achieve. What we want students to be when they graduate from university is not ‘experts’: the only experts in universities are the professionals (who have been trained on the job for years) and the faculty, who are supposed to be experts in their individual fields. But let me be clear: they are not necessarily experts in math, communication, problem-solving, critical thinking or teamwork unless that is their specific area. Nor are any of them necessarily experts in teaching those things unless that is their area. So if the assumption is that any student with any degree would necessarily have those ‘skills’, that is clearly a false assumption.
Granted, some faculty have these things in mind when they design their courses and curriculum, but that varies by individual and by discipline.
With regard to your question… students in the liberal arts should ‘learn how to learn’, so it is less important that they have the ‘knowledge’ than that they can acquire it faster than other people. This is where the workplace-training aspect comes back into the equation. A university-educated person should be capable of adapting to new and changing workplaces, not trained for a particular workplace environment. Should they know math? If they take math, yes. Should they know how to communicate and write well? If they take courses in communication and writing, surely. And so on. If you hire a math major, should you expect them to be an immediately successful team player when they may have spent their whole curriculum competing for grades (some math classes are highly competitive)? Likely not. Can that person become one? Perhaps, with workplace training.
I very much agree that “learning how to learn” is perhaps the key “meta-capability” in a fast-changing world. The question is: how does a learner demonstrate that? How does an institution demonstrate that it has been a key factor in developing that in the learner? (This, to Carol’s question of “Why?”)
With respect, I have to question other things you say that sound to me like a siloed “knowledge specialist” approach to higher education.
Of course, employers have a responsibility to train workers in the specifics of the tasks they need to perform. There’s no way that a school can throw a student “over the wall” to be an immediately productive employee. (Granted, Work Integrated Learning programs such as co-op can help teach concepts in connected ways that are more quickly applied.)
It’s the transversal, transferable skills that employers are looking for. Canada’s Essential Skills Research Project is a recognition of that. Nine skills, all used in varying degrees across the world of work – from fundamental literacy and numeracy, through working with others, etc., to “continuous learning” – your “learning how to learn”.
An employer needs to evaluate whether the “clay” of the applicant can be molded into a useful employee (and yes, they should get better at the molding part – simple immersion is not the best method). This is why they’re interested in transversal skills.
To use your Maths example:
For Sales, they don’t need math majors, but they need people who can calculate a profit margin.
In Research, they need people who know something about statistics.
In Graphics, they need designers who understand a few things about geometry.
All of the above need communications to some degree: the ability to listen carefully, speak appropriately, read and process information, and write clearly.
And, to some degree, the ability to solve their own problems and think critically (and ethically!).
All of these and other important skills (some beyond Essential Skills) fold into “learning how to learn”. I believe institutions can teach that. The question for me is, do they do it mindfully and can they demonstrate that their graduates have these skills and that the institutions have had a hand in developing them? Isn’t this what demonstrating Program Learning Outcomes and Graduate Attributes is all about, or is that just an exercise in compliance?
HEQCO’s testing solution is a first step that is fairly scalable. But it’s a blunt instrument and will leave a lot of questions unanswered. I can’t believe they aren’t contemplating other methods they already know about, such as portfolios (see Virginia Tech’s ePortfolio programs; Carol has mentioned AAC&U’s LEAP rubrics) and expanding the notion of the transcript to include things like non-credit graduate attributes (see IMS Global’s eTranscript and OBEE initiatives).
I’ve been mostly ranting about how and what, so to Carol’s “Why?”:
Because it’s important to students, who have some expectation of being prepared for the world of work
Because it’s important to institutions concerned with the quality and continuous improvement of their teaching and learning
Because it’s important to communicate the impact and importance of higher education based on more than anecdotes and statements of belief, especially in a time of reduced resource allocation
Institutions already have the means of demonstrating that, Don: the degree and grades. Those, combined with the reputation of the school, should establish it. If you want further differentiation, then I might suggest you find ways of countering grade inflation and decreasing the faculty/student ratio. It is pretty hard to teach ‘learning how to learn’ in classes that are even larger than high-school classes, sometimes ten times larger.
Producing a ‘useful employee’ isn’t the business of universities. The business of universities is higher education, which is now about one-quarter to one-half remedial education, but that is a different issue.
I should note that the only institution a testing solution will benefit, and has benefited so far in other countries, is the testing company, which makes a good deal of profit. In my watchful eye, the primary group advocating testing has been consultants and testing agencies, not university professors, and rarely other businesses.
Businesses want outcomes, as I addressed in my prior post, but they are wanting either something that is already provided (‘capable workers’) or something that can’t be provided (‘workers trained in the specific skills of their workplace’). The latter they should train themselves; the former, universities already do well. Granted, not every student is well trained, but see the above issue with faculty/student ratio and grade inflation.
I must admit that I am not familiar with the contemporary Ontario secondary education curriculum. But all the skills you mention in your ‘maths’ section I had in high school. My math training in university was Calculus 1, Calculus 2 with differential equations, Vector Geometry and Linear Algebra, and various statistics and methods classes. So the question to me is: who is supposed to teach them those basic maths?
Also, I want to be very clear here, very, very clear. You will not find a single university that is not rigorously and mindfully teaching as best it can with the resources it has. You might find a professor or two who has a bad year or more, but I have never seen a more mindful group of educators than I’ve met at universities in Canada. There are strong and rigorous reviews of programs, well-structured systems of curricular integration with review, and great teaching.
As to your “Why?”:
Because it’s important to students, who have some expectation of being prepared for the world of work <— This is ideology that they are taught. Teach them what a university degree is and it will end.
Because it’s important to institutions concerned with the quality and continuous improvement of their teaching and learning <— Yes, I agree, and we already have extensive and well-designed systems to do this. Creating yet another one is creating a profit centre for someone else, so why do it? Oh yeah, for the profit.
Because it’s important to communicate the impact and importance of higher education based on more than anecdotes and statements of belief, especially in a time of reduced resource allocation <— It is… but maybe fix the reduced resource allocation. Maybe stop the devaluation rhetoric implied in your argument above and start valuing the systems and structures we’ve developed, which work quite well. Maybe take the money that would be invested in a new testing regime, which will likely eventually run to millions if not billions, and invest it directly in the hiring of professors.
That's the thing, Don… A new regime of evaluation like this won't solve the problems; it will just create new problems, which then have to be resolved with more investment. I don't begrudge people the capacity to make money, but Ontario universities are already strapped for cash and have significant labour shortages, and an evaluative regime such as you are suggesting doesn't serve them. By not serving them, all you are doing is transferring costs and profit systems around. Fix the university reputation, labour and money shortages and you'll probably solve the problem.
Looks like we’ll have to agree to disagree on this one. I for one *am* interested in evidence of learning that goes beyond a degree, even one accompanied by syllabus and grades. AAC&U are also interested in this, hence their VALUE rubrics (mis-cited by me above as LEAP).
Important as they are, literacy and numeracy sit well down my list of “soft” skills compared to some of the others I named. Certainly K12 could be doing a better job of supplying HE with students who are adequately skilled in the fundamentals. PIAAC is respected in literacy and numeracy circles, but I’m about as much of a fan of standardized testing as you seem to be. Here’s an excerpt from my initial response on the STLHE mailing list:
>> Standardized testing – really? Why are we considering this as the only
>> solution, just as the US is considering getting out of it? What is this,
>> “No Alumnus Left Behind”?
I’m primarily interested in the full gamut of soft skills, which a degree is supposed to support but doesn’t always demonstrate all that well, hence AAC&U’s efforts with the VALUE rubrics:
Inquiry and Analysis
Intercultural Knowledge And Competence
Foundations and Skills for Lifelong Learning
I’ll fade away with a final quote from a different HEQCO initiative that I have much more interest in, “Learning Outcomes Assessment: A PRACTITIONER’S HANDBOOK” co-authored by a stellar group of your colleagues:
>> While the development of learning outcomes has become embedded
>> in most postsecondary institutions in Ontario, effective
>> assessment of program-level outcomes is still a challenge for
>> many institutions.
Room for improvement at all levels: K12, PSE, workplace. And measurement is important. It just doesn’t always have to be a test … of the things that are easiest to test. We’re agreed on that much.
I completely agree that “knowledge” is only one component of understanding and expertise. I frequently refer colleagues to “How People Learn” where the authors describe experts as people who have
1) deep foundational knowledge,
2) stored in a conceptual framework,
3) that’s optimized for retrieval and application.
I had the opportunity to share my version of this with Kimberly Tanner, a fantastic biology education researcher in San Francisco. She reminded me that anyone can memorize things. Expertise comes in the connections between ideas and in retrieving and applying the content. Here’s how I picture it: