Why am I linking NCELP’s teaching materials to their learning goals? Well, other than the fact that claiming the two are not closely linked is, frankly, churlish, here is a ‘mountain’, indeed a ‘mass’, of reasons why.
NCELP itself – its whole raison d’être – is based on the principle that there is a relationship between what is taught and what is learned. Hence the wholly unproblematic assertion that “vocabulary teaching is a core component of foreign language learning” [emphasis mine]. Ergo NCELP’s teaching materials, including its vocab lists, can be considered a core part of the intended learning. NCELP has that posh word for “teaching” in its name, but its vocabulary work is written in the context of what it thinks students need to know, i.e. learn (hence the business with Zipfian curves in its slide shows, hence the preoccupation with high-frequency words, priority, etc. etc.). NCELP itself says that we need to teach in a certain way in order that students learn the things NCELP thinks they should learn.
If Paul Nation and others can make a career out of the relationship between input and uptake, then I sure as hell can make a tweet out of it. I asserted that what is taught in the NCELP ‘method’ is heavily related to what is learned, and this was met with a certain (and not unpatronising) derision. Yet I think anyone with the faintest interest in SLA will be comfortable with the idea that input relates to uptake. In other words, teaching relates to learning. The irony here is that early career teachers might struggle with this distinction between the two insofar as students don’t learn everything we teach them, or learn something different (i.e. incorrect, or personal to them). It’s rarely a problem that a student leaves a lesson having learned far more vocabulary than we taught them. Anyway, let’s be logical for a moment.
The only ways what is learned can exceed what is taught in a UK ML classroom setting would realistically be:
a) meaningful exposure to additional input outside of the classroom and directed homework tasks, or
b) additional words that are learned through exposure / implicit learning in the classroom
…so let’s take those propositions in turn, in the light of the NCELP materials.
a) Does NCELP envisage significant meaningful exposure to additional input outside of the classroom/homeworks? I guess this is the world of Paul Nation – extensive reading programmes whereby students would consume huge amounts of non-taught input and make steady vocabulary gains. I’ve seldom – if ever – come across this in a mainstream UK secondary ML classroom. And it would be a pretty big deal if NCELP were to be advocating this. And they haven’t mentioned this anywhere, so we can safely assume that this isn’t what they have in mind.
So that leaves b): is it envisaged that students learn more than what is in the specified vocab lists, through exposure within the teaching materials? If this were the case, I’d expect to see teaching materials – texts, slides, etc. – which were lexically different from the vocab lists themselves: richer, more heavily loaded. I can’t see evidence of that when I compare lesson materials with the vocab lists. I’d also expect to hear NCELP refer to the crucial contribution to learning made by incidental exposure: How is it supposed to be done? Which words are they supposed to be? How is it supposed to be sequenced and loaded? How am I supposed to know whether it’s happening? NCELP does present its work as a comprehensive methodology, after all, with claims of excellence and research-led integrity. (Whether it’s these rather bold claims that are the problem, by the way, rather than the materials themselves – which are definitely no worse than the rest of what’s out there – is another matter, for another debate, another day.)
So if it is indeed intended that students learn more than what’s in the vocab lists, then:
i) it isn’t apparent from the input itself
ii) they don’t talk about it, so it isn’t clear how teachers are expected to make this assumption, or operationalise it, or create the conditions for it to happen
Given that there’s no trace of the intended incidental learning in the NCELP scripture, nor any obvious trace of it in the input itself, we can safely assume, for now, that the vocab lists are a decent representation of the intended learning. These lists add up to fewer than 2000 words. On the “2k priority” principle (lest we forget, legacy input is criticised by NCELP for including rare words at the expense of more common ones), this would mean that learners should be learning no words beyond the top 2000 at all. But, I’m pleased to say, the vocab lists do feature words beyond the top 2000 after all – at a rate of about 1 in 10. Which leads me to the question: which ones, and why? Are they arbitrary? Random? Personal faves?

We are heading towards a GCSE with a defined vocab list (with receptive skills papers testing only words on this list, as to do anything else would be, in the words of the Chair of the review, “unfair”). So what’s on this list matters, as they (? NB overlap between NCELP and close advisors to HM Government) are making unilateral and – apparently evidenced – judgements about which words make the cut and which don’t. This is, of course, one of the myriad reasons why governments and awarding bodies around the world avoid definitive lists in the first place, and rightly so. You’ll be aware of the debates surrounding the usefulness of the AWL.

What’s also interesting here is that the very organisation which says we should be picking words on the basis of evidence appears to say one thing but do another: pick some extra words – 10% or so – as they choose. Oh, and by the way, the NCELP vocab list isn’t that dissimilar in frequency profile to the vocab lists in the ML textbooks already out there: roughly 70% K1 lemmas, and 80–90% within K2. So I’m not entirely sure how we’ve moved on anyway.
Enfin bref. There is, of course, an elephant in the room. What’s holding English ML learners back in their communicative ability – let’s be clear – is not the balance of words they’re learning, it’s the quantity. So much of the ink and airtime devoted to NCELP’s vocab work goes on which words to choose. But the real factor driving communicative ability is how many words students know.
NCELP often refers to the figure of 852 as the number of words known by UK GCSE students of French. That’s way short of the golden 2000, and way, way, way behind continental teenagers learning LOTE. The figure dates from a 2006 study by J. Milton; a 2008 study by A. David painted an even bleaker picture. In fairness, NCELP does state some vocab learning goals – 360-odd words per year – which are somewhat beyond what most UK students currently achieve, and that is a step forward. But this goal still leaves us some way short of the 2000-word threshold generally accepted to be a marker of gist comprehension and flexible communicative ability. Let’s not forget that many, if not most, learners will probably not learn all 360 words per year – although I’d like them to, of course.
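The shortfall is easy to check. A quick sketch, using the figures above and assuming (my assumption, for illustration) the full five years from year 7 to year 11 at the stated goal:

```python
# Back-of-envelope check: does a goal of ~360 words per year reach the
# 2000-word threshold by the end of year 11?
# The 5-year span (years 7-11) is an assumption for illustration.
GOAL_PER_YEAR = 360
YEARS = 5
THRESHOLD = 2000

total = GOAL_PER_YEAR * YEARS      # 1800 words learned in total
shortfall = THRESHOLD - total      # 200 words short of the threshold

print(total, shortfall)  # 1800 200
```

Even on the generous assumption that every learner hits the goal every single year, the total still falls short of 2000.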
If NCELP really wanted to make a difference to communicative ability through the vocab route – which I applaud, given the weight of evidence linking lexis to proficiency and acquisition – then they’d be making it crystal clear just how important it is to stare the necessary vocab goals squarely in the eye and spell out what this means. That is, they’d be making a loud and vociferous case for a much meatier vocab outcome by GCSE. In the UK we say that GCSE = B1 (Ofqual). But take a look at B1 overseas, at the research which links B1 to lexis, and at the B1 descriptors, and you can clearly see that for B1 you really do need to be at around 2000 words.
2000 words by year 11? That means students need around 444 words per year, based on a 4.5-year GCSE course. That means more than 12 words per week, every week, for 36 weeks a year, from year 7 to year 11 – and that’s only allowing a meagre 3 weeks for assessments and time off timetable. That’s a very tall order, even for schools with the luxury of 2 hours of MFL per week from year 7, which is increasingly rare. To say this publicly would go down like a lead balloon, as NCELP would be basing its method on a model which just isn’t viable in so many UK schools. But they also can’t afford to set a goal of less than 2000, because that would mean the Government making a very public, rather awkward admission: that you can’t have the fluency, spontaneity and authenticity to which GCSE pretends to aspire on a more realistic vocab goal that would actually work in schools. And it would mean a [Conservative] Government very measurably and obviously lowering standards. Not politically viable. There is a mismatch between the standard we say the exam represents and the vocab we’re prepared to say needs to accompany it.
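For those who like to see the working, the pacing arithmetic above can be sketched in a few lines (the 4.5-year course length and 36 teaching weeks per year are the figures assumed in the text):

```python
# Back-of-envelope check of the vocab-pacing arithmetic above.
# Assumptions from the text: 2000-word target, 4.5-year GCSE course,
# 36 teaching weeks per year (allowing ~3 weeks for assessments etc.).
TARGET_WORDS = 2000
COURSE_YEARS = 4.5
TEACHING_WEEKS_PER_YEAR = 36

words_per_year = TARGET_WORDS / COURSE_YEARS               # ~444
words_per_week = words_per_year / TEACHING_WEEKS_PER_YEAR  # ~12.3

print(round(words_per_year))     # 444
print(round(words_per_week, 1))  # 12.3
```

Over 12 new words per teaching week, sustained across the whole course, with no allowance for forgetting: that is the pace the 2000-word target actually implies.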
So it shouldn’t surprise us that NCELP isn’t more forthcoming about an explicit vocab goal. But for everyone’s benefit, they probably should be. And it would move us on from this convenient side-show about the top 2k. A side-show which will get us nowhere, fast.