Several questions are of the type 'circle all the X in the sentence below'. Q16: Circle all the adverbs... Q23: Circle the connectives... Q42: Circle the preposition... Q44: Circle the article... This is how grammar was taught before the 1960s. The approach used to be called (after the Henry Reed poem) 'naming of parts'. I spent hundreds of hours in the 1980s and 90s, along with examiners such as George Keith and John Shuttleworth, running in-service courses where the aim was to move away from that kind of thing, and I really thought we were getting somewhere. The right question, in their (and my) view, was not 'Circle all the passives in the paragraph' - end of story - but 'Identify the passives and say why they are there' - beginning of story.

This semantic and pragmatic perspective I eventually wrote up in my Making Sense of Grammar (2004). It was the way grammar-teaching seemed to be going, and I was delighted to see the message being put into practice in schools. Teachers would take students 'on a passive hunt' (we're going to catch a big one) - finding real examples around the school, in newspapers, and on the high street, and discussing the effect of using a passive as opposed to an active. It could be quite exciting - a word not traditionally associated with the teaching of grammar - and it certainly gave students a good basis for using (or not using) passives in their own writing.

And now we have a test where it is enough, once again, for students simply to 'circle the passives'. Q3 in Paper 2: 'Which sentence is the passive form of the sentence above?'
The second thing that worries me is that some of the sentences to be analysed present students with problems because they ignore context. What would you do with Q1 in Paper 2, for example? 'A pair of commas can be used to separate words or groups of words and to clarify the meaning of a sentence. Insert a pair of commas to clarify each sentence below. (a) My friend who is very fit won the 100-metre race. ...' Of course, anyone with a shred of knowledge about relative clauses can see straight away that this sentence is perfectly all right without commas - depending on the intended meaning. It's not a question of clarifying anything. It's the basic distinction between a restrictive and a non-restrictive relative clause. In My friend, who is very fit, ... I have one friend in mind. In My friend who is very fit... I have more than one friend (the other one, who isn't very fit, nonetheless managed to win the egg-and-spoon race). Out of context the question becomes artificial and largely meaningless.
My third worry is that several questions ignore changing usage, and try to impose a black-and-white distinction where there is none. Take Q15 in Paper 1: 'Which of the sentences below uses commas correctly?' The correct answer is We’ll need a board, counters and a pair of dice. The other examples all have a comma before the word and (the so-called 'serial comma' or 'Oxford comma') and are viewed as wrong. In the guidance notes to Q27 ('Insert three commas in the correct places in the sentence below'), markers are told 'Do not accept' the serial comma. Evidently Mr Gove, or his advisory team, does not like serial commas. In which case that's me failed, as I regularly use them. And most of Oxford University Press too. But how can (how dare?) examiners ignore the facts of educated usage in this way? This is the ugly face of prescriptivism - defined as the imposition of unauthentic rules on a language - and it shows behind several of the questions in these tests.
One more worry: conflicting advice about basic grammatical terms. Take the important distinction between word and phrase. Q35 is 'Write a different adverb in each space below to help describe what Josie did'. This is actually a useful question, as it elicits creative thinking about how language is really used. But the test guidance notes say that adverbial phrases will be accepted, despite the question asking for an adverb. So, does that mean that wherever a question asks for an adverb, an adverbial phrase will be accepted? What, then, is the correct answer to Q16, 'Circle all the adverbs in the sentences below'? The sentences are: 'Excitedly, Dan opened the heavy lid. He paused briefly and looked at the treasure.' The intention is obviously to get the two -ly adverbs circled. But if students were to take at the treasure as an adverbial phrase of place (answering the question 'Where did he look?'), would they get their marks?
I could go on, and on... I found myself making comments of this kind on about two-thirds of the sample questions. I feel very let down, actually, especially as I was one of those asked to provide some initial perspective in 2011, when I spent a worthwhile day (as I thought) discussing principles and examples with the government team tasked with taking these things forward. I left at the end of the day feeling optimistic. But my optimism, I fear, was misplaced. I hope things will change - and I especially hope that there are enough linguistically aware teachers out there these days to see the limitations in tests of this kind and to continue with the more informed approach to language study that I know exists in many schools. There's nothing wrong with being able to identify adverbs, as long as this is not thought to be the end of the story. It would be like giving people a driving test where all they had to do was name the parts of the car. With a linguistically informed approach, one can do this, yes, but then go on to drive the language, as it were, and take it to all kinds of exciting places.
19 comments:
That's depressing. It reminds me of the step backward the otherwise excellent Chicago Manual took when it turned its grammar section over to the deeply conservative Bryan Garner. I'm still in mourning over that. The arc of history is long...
I teach in China and all their exams are like this. They are highly prescriptive in what they teach and the way that they teach it.
The level of ambiguity in questions that lack a context is astounding. Questions are usually either multiple choice or gap-fill, and I rarely encounter exam questions where there is only one correct answer; frequently ALL FOUR given answers are potentially correct. Occasionally none of them are. For the gap-fill format I have sometimes, in less than a minute, come up with twenty suitable answers.
I am baffled as to how students here ever manage to pass an English exam at all.
It's very disturbing that the UK now seems to be regressing to the same kind of thing.
It's not just second-language learners who get into trouble over this. My son (an American) had a high school Advanced Placement composition teacher who would take points off for every "passive" sentence in a composition. Actually, what she did was go on hunts for instances of "was" and "is", and would deduct points even if the sentences weren't passive. That was so far away from the idea that sometimes a passive is not only legitimate but preferred that I didn't know where to begin.
The SPAG test is trying to test academic knowledge of grammar as well as the ability to write and punctuate correct English. Perhaps it is valid to test basic terms like noun and adjective because they're a useful tool for foreign language learning and arguably part of a general basic education. But expecting 10- and 11-year-olds to distinguish a clause from a phrase, or a main clause from a subordinate clause, is too esoteric and the vast majority of educated adults (including their teachers) would struggle. I think the questions which use terminology to test language are fine (e.g. 'Choose the connective which makes best sense in this sentence.') but the ones which test the terminology itself are lazy (e.g. 'Circle the word in this list which is not a connective.') I suspect the prevalence of questions about terminology (i.e. ones that you could get wrong even if you wrote and punctuated English perfectly) is the result of poor test writing as much as deliberate policy. The DfE could be incompetent rather than misguided, or a subtle blend of both.
I very much agree about the terminology issue in relation to general education. The problem comes when people think that just because they have some terms their use of English will somehow magically improve. There needs to be a bridge (which I've talked about elsewhere, not least in my 'Making Sense of Grammar'). And the critical question, as you say, is one of age. When is the best time to introduce kids to the various grammatical notions? Very little is known about this, but my impression is that too much is being introduced too early, and without sufficient language awareness preparation.
I also wonder whether more advanced aspects of language awareness are dependent on a certain sensitivity to language which not everyone has and which is pretty impossible to teach. This makes them inappropriate for a universal test, in my opinion. It's like expecting all 11-year-olds to distinguish between intervals played on a piano.
Cases of clinical language disability aside, child language acquisition studies suggest that the kind of awareness needed as a foundation is pretty universal and established early (I'm talking about 3-year-olds here). I go into this a lot in the later chapters of my 'Language Play' (now available again on my website). The piano analogy is not relevant there, as - as it were - everyone can play. I don't know whether advanced knowledge is a gift of some sort. I suppose there must be some personality factors involved, otherwise everyone would be a linguist!
Perhaps intuitive language awareness is universal (and part of a universal game) but not the ability to analyse syntax explicitly. When SPAG was first announced, my daughter's teacher at primary school helpfully put up a poster of adjectives. It included the word 'shone' in the phrase 'her eyes shone' - because it's a describing word and tells us more about the noun 'eyes'. Sure, there are other tests for identifying adjectives which 'shone' does not pass, but the point is, she lacks the sensitivity to know immediately that 'shone' is a verb. Because I spend a lot of time with people who have language-related jobs, it's easy to forget how alien it is for most people to analyse how their own mother tongue works.
When I worked in exam management there was much talk of VRIP when it came to test construction: Validity, Reliability, Impact and Practicality. I can therefore fully understand from a practical point of view why students are asked to circle the 'correct' answer rather than identify and explain. It's all about the marking. However, it seems that as a result, validity and impact have taken a back seat. If you can't balance practicality with validity, reliability and impact, then you shouldn't be testing in this way at all.
The most disturbing thing about this test is that it fails in the very thing it claims to improve: writing. My son, who is currently sitting his SATs exams, can pass a grammar test with flying colours and has very good oral literacy, but he hates writing so much that it all falls apart on the written page. My (largely) EAL class demonstrate excellent memory for terminology but still struggle with grammar in context. Grammar out of context is like phonics without comprehension: mostly useless. I can ably demonstrate this, as I have a modicum of phonic ability in Swedish and can decode a page of it while understanding none of it.
I despair. I have long taught active/passive voice by asking children to take a passive-voice descriptive paragraph about an animal and turn it into a storybook for small children: changing the passive to the active voice and the third person to the first, choosing adjectives to modify nouns, adding adverbs to verbs, and creating an alliterative name for the animal. I have never had a 10-year-old who couldn't manipulate language in this way.
Cases of clinical language disability aside, I've never seen one either.
I agree with Ian that it's all about the marking. That's part of a general trend in exams, and partly a response to the growing number of results that are disputed. Multiple-choice questions are easier to mark and harder to challenge (if constructed well) because of their lack of subjectivity. It does impoverish education, though, because it inevitably has a washback effect on how children are taught if everything is geared to passing the exam. Teenagers will have a shock when they leave school and find that real life is not multiple choice and that most big decisions are subjective.
I agree with ELTAuthor. I spent 25+ years teaching writing to college students. Most of them came in "knowing" a bunch of dysfunctional "rules" they had learned from previous teachers. Unteaching all that and replacing it with something usable was a subtle and advanced art that took me a lot of time to learn. Most teachers won't have or take that time, and some wouldn't be able to learn it even if they did, for the same reason that I could play pickup basketball my whole life and never be Michael Jordan or anything close. Jordan had a talent and a "feel" for basketball that very few people have, even if they follow the game closely, and a really good language teacher has a feel for how language is really used -- REALLY used, by good, effective speakers and writers -- that even most teachers just won't. It takes qualities of mind and character, like a certain pragmatic liberal-mindedness and close, empirical attention to what's actually present in excellent writing, that we just can't expect even from most professionals in the field.
Teaching and testing also require something else, especially if you're teaching classrooms of students and not just tutoring them individually. You need some way of at least partly schematizing the operations of effective language use, some way of organizing and categorizing them and identifying types -- the essential thing done with the subject matter of every field. This is where the impulse to name grammatical elements, parts of speech, etc., comes from. Unfortunately, although there are some better efforts out there than are usually taught, no one has ever been able to fully schematize what good speakers and writers do, at least not to the point where it can just be reduced to a system or set of algorithms that students can then just learn. (How I longed for such a thing.) This is why computers that can guide spacecraft to Neptune with incredible precision still have trouble reliably translating a sentence of French into English. So what we get, instead, is a fragmentary and inadequate attempt to schematize -- a few disconnected terms, exercises like "circling the passives" and so on that don't necessarily improve anything. Then, this tail wags the dog, becoming itself the point of the instruction instead of a means to the real goal of skilled expression. Yeah, it's like naming chords instead of composing music.
There's no single solution to all this, but I would suggest bringing whatever pressure one can on test-makers to design better tests, but also pressing them to adopt a general principle or "best practice," a kind of test of any language test. Borrowing ELTAuthor's suggestion, it would reject or at least bring deep suspicion on any test that an obviously capable speaker and writer might struggle to pass -- and, conversely, on any test that a computer could pass easily even though it can't write.
I think that Ian Cook has it here - it is about the practicality of crude assessment, and providing indicators of "progress". Undoubtedly the results from next year's SPAG tests will be 'better' than this year's proving ... rigour, progress, higher standards!
My only hope is that "time's winged chariot" may help. When I started teaching we had the same issue with the Science tests, where answers that were perfectly valid in scientific terms were deemed incorrect because they were not the answers in the mark book, and I believe that this was because many of the markers were not scientists, or science teachers (oops, was that an Oxford comma!).
As always with Gove's policies it seems that it is not about the learning or the child but about the Gove.
Several very valid points are made here, but we need shared agreement on the proposed outcomes of the test (and of the teaching that preceded it).
Do we want children leaving school with a fairly sound knowledge of parts of speech, tenses, etc.? If so, then asking them to identify the passives in a paragraph is potentially an appropriate question in what is, after all, a multiple-choice test, isn't it? Naturally, we'd all expect that in the teaching that led up to this test there was actually some discussion which explored the reasons why the passive is used as opposed to the active. However, explaining this particular area of grammar can be notoriously subjective too...
Wrt the issue of teachers understanding what they are teaching, having time, etc., this seems a bit defeatist to me. Highlighting 'shone' in the given phrase as an adjective reveals that particular teacher's slim grasp of what an adjective is. But it's hardly something that should challenge people in this particular profession, is it? Is it sensitivity or simply knowledge? I would argue the latter. On a normal distribution of intelligence, teachers are largely, I'm sure, well above the mean. My own children's teachers frequently misspell words and use apostrophes incorrectly. While I don't get upset about that, I do feel that it's something that needs attention - and that should, to a greater or lesser extent, have been addressed in *their* education.
Similarly, while I agree that 10/11-year-olds will undoubtedly struggle with main/subordinate clauses, one of the most common complaints of degree-level educators is that their students do not write in sentences. Again, it shouldn't be beyond teachers to identify this error in their students' writing on a continual basis, to make students aware that it is inappropriate in more formal writing contexts. We shouldn't expect teachers to rank in the top 0.0000000001 centile wrt their knowledge/use of grammar. (This is where Michael Jordan ranks/ranked in his field - and it didn't *just* emerge from his height genes. It came from thousands of hours of practice too.) We just want them to be able to teach students to write effectively and, where appropriate, accurately, don't we? And although fluency in writing is, to me, paramount in primary-level education, we still want all students to be writing reasonably accurately before they leave compulsory education - whether they go on to university or not.
One of the music analogies is flawed for the same reason, I feel. I would argue that a knowledge of chords is highly useful when learning to play an instrument; the majority of hobbyist and even professional musicians will spend most of their time playing others' music rather than composing their own. And even the latter requires some knowledge too - pentatonic scales, anyone?
The nub, I agree, is the issue of VRIP, which has long plagued testing in the mainstream - along with the washback on teaching, of course. And in an age where testing is held so central in education, this has never been more obvious. Does this mean we should abandon all m/c or similar tests that are inherently highly practical but may therefore make compromises elsewhere? I don't think so. Will students emerge from school thinking life is one big m/c test? Unlikely, I feel. Even in mathematics testing - where one might expect the final answer to be sacrosanct - most modern assessment practices give *more* weight to the method than to the absolute accuracy of the result.
Maybe I'm being naïve here, but isn't it a good thing that more Year 6 pupils will be able to identify passives, adverbs etc.? This will make it easier for them to study grammar more extensively later on (i.e. at secondary school). It's true that circling parts of speech is not enough for a good understanding of the English language. I just don't get how testing this at Year 6 is necessarily the "end of the story", as you say.
Secondly, you compare an exciting passive hunt activity with a dry exam question. I think this is a little unfair, as this isn't comparing like with like. Besides, the former can still exist alongside the latter.
I agree with the rest of the post, and was shocked that Oxford commas are marked as wrong.
Of course it's a good thing. The problem - and I've seen this repeatedly as I travel around - is that an awful lot of people do think it is the end of the story. The whole point of my Making Sense of Grammar is to show where the story goes next.
It's perfectly possible to have a test which examines functions and contexts as well as forms. What is wrong with the test is that it ignores functions and contexts.
Is anyone really disappointed that the pass rate was so high? 63 to 76% for level 4, compared to 45 to 78% in maths and 38 to 70% in reading. Could this possibly reflect the fact that most children actually did very well - though that isn't the government's message!
Even though it was marked slightly more harshly than the other tests, with a pass mark of 50% all of my children would have passed. What a shame for the six children in my class whom I have to tell that they didn't make the expected level.