Unknown examination dates
Debate Rounds (3)
Sidenote: I’m actually undecided on this issue (using the debate to make up my mind somewhat), but that’s not relevant to the debate.
Sidenote: I'd like this debate done before Christmas, so please be prompt in replying if possible.
Put simply, unknown examination dates refers to the idea of students being unaware of when their exams are. Con will be arguing for known examination dates, the status quo.
The main point of unknown examination dates is the reduction of studying, in particular methods such as ‘cramming’. The idea is simple. Tests are meant to assess student understanding of content taught in class. For simplicity we’ll define understanding as solid, long-term knowledge and the ability to effectively apply what one is taught. Now, since studying inflates marks, but these marks don’t necessarily reflect increased understanding, the test’s accuracy in determining student understanding is undermined. Obviously, if tests fulfil their purpose less well, that is a negative.
To demonstrate, I recount a simple discussion from my Aust. History class prior to my exam, between a girl and my teacher. A revision sheet covering two terms’ worth of content had been given out.
Teacher: Emi, you don’t know anything on this sheet.
Emi: I know sir, but no need to worry – I have 2 days to study for the exam.
Teacher: Emi, I think you’ll need to study...
What this conversation shows is that the student doesn’t really understand anything, yet by studying they will gain temporary knowledge that can be utilised in a test to falsely display understanding. In contrast, unknown examination dates mean that understanding must be long term (as exams can occur over a longer period of time, which necessitates understanding in order to remember the content).
In case this example isn’t clear, let’s compare exams with drug testing or random breath tests for alcohol. Clearly, if one is told the date of an investigation they will be prepared and more able to hide incriminating evidence. Effectively, forewarning people lowers the effectiveness of the police work. The same is true with exams as people will ‘cover up’ their lack of understanding if given sufficient time, just like drugs could be covered up.
More Time
The fact of the matter is that exam preparation takes up valuable class time. Often about 1-2 weeks are used on exam preparation depending on the subject, so we’ll call it 1.5 weeks. Given that exams occur twice a year, this is 3 weeks per year, which over 6 years of high school makes for 18 weeks lost to exam preparation. This is essentially half a year’s worth of new learning that could be gained by having unknown exam dates. Obviously, more learning, which equals a greater breadth or depth of knowledge, is beneficial.
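As a quick sanity check, the arithmetic above can be sketched out explicitly. This is only a back-of-envelope sketch: the 1.5-week midpoint, two exams per year and six years of high school are the argument's own figures, while the ~40-week teaching year used for the "half a year" comparison is my own rough assumption, not something stated in the text.

```python
# Back-of-envelope check of the class-time figures quoted above.
# The first three numbers are the argument's own assumptions.
prep_weeks_per_exam = 1.5    # midpoint of the quoted 1-2 weeks
exams_per_year = 2
years_of_high_school = 6

weeks_lost = prep_weeks_per_exam * exams_per_year * years_of_high_school
print(weeks_lost)  # 18.0 weeks, matching the figure in the text

# "Half a year's worth of learning": assuming a teaching year of roughly
# 40 weeks (my estimate, not from the text), 18 weeks is just under half.
print(weeks_lost / 40)  # 0.45
```

On that assumed 40-week year, the "half a year's worth of learning" claim holds up as a reasonable approximation.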
Reduced Focus on Testing
Unknown examination dates would likely result in a reduced focus on testing, which is a good thing. Now, tests are indeed needed in education, and are quite useful for certain things, yet tests are quite flawed. The marking scheme is flawed (http://www.debate.org......), test design isn’t advanced enough to properly assess understanding, and so forth. In particular, though, I’ll focus on a common occurrence given a focus on testing.
Students often ‘data dump’, which is to say that they choose to only study / remember what they know to be coming up on their test. They will forget everything else, a negative with regard to the purpose of education, which is to educate. Obviously, a student who knows less is worse off than a student who knows more, all else being equal. Furthermore, unknown examination dates resolve this issue, as data dumping only occurs given study (where knowledge is prioritised due to upcoming exams), which doesn’t occur under unknown exam dates (or at least not in the same short-term sense).
Of course, many would argue that a reduced focus on testing is a bad thing, not a good thing. Indeed, it is an important issue at a time when China is emulating Western-styled education and the West is moving towards an Eastern-styled education system. Nevertheless, unless Con raises objections to whether a reduced focus is a good thing, this side issue won’t be debated that deeply (as it’s not the subject of our debate!).
I think I’ve done enough for R1 to demonstrate some of the ideas behind unknown examination dates and why they should be implemented. This is due to many factors, including an increase in long-term understanding, less test distortion, more time for learning and a reduced focus on testing. I look forward to Con’s response.
1. Students should know their expectations. I remember my first big surprise in my undergrad classes, when my professor highlighted all the information we needed to know throughout the semester for our exam. I asked him about this, because it didn't seem as though this was really challenging the class or getting us to learn. As an education professor, surely he knew better. He responded by explaining that the point of a test is to test what one knows, and the point of the teacher is to teach the relevant material. Now he didn't give us the exact questions or anything like that, but expectations were clear from the start.
Failing to know what an assessment will cover and failing to know the time frame expectation for learning are not conducive to effective teaching or learning.
2. Exam structure makes the exam, not knowledge of the expectations. This point plays very well off my previous one. My opponent is under the impression that what constitutes true learning is absent in exams where the day is known. If the exam is pure multiple choice, based on a study guide of exact questions handed out to students, requiring solely rote memorization - I agree. But teachers don't have to make that low-level type of exam. I would argue that the type of exam that can be memorized so easily is actually void of significant meaning whether you know the test date or not. On Bloom's Taxonomy of learning, this has to rank pretty low.
If, however, a teacher would give an essay exam that requires students to expound on themes discussed throughout the year, using terminology in the appropriate context, students could not simply "Christmas tree" their answers or rely on rote memorization. Regardless of how much you cram terms, a good exam will measure your understanding of them, not your memorization. If understanding is measured, knowing expectations (including the time of the exam) is a good thing.
3. Repetition and review are keys to learning. I took Chemistry once in high school, and once in college. Both times I struggled with the concepts. This year, my school needed me to teach it. Surprisingly, it was pretty easy to pick up in a very short time. This is not only due to my growth, but due to the key of exposure. It is very well known in cognitive psychology that the amount of exposure to something increases one's ability to remember and grasp a concept. That is why students who study more frequently for shorter periods of time, as opposed to cramming, will actually have a more firm grasp and recollection of their work.
4. Test anxiety is real. While no exam method will completely annihilate test anxiety, an unknown exam date certainly exacerbates the problem. When goals and expectations are straightforward, students are able to feel more prepared and train better over time. Reducing worry is vital to ensuring that what is being measured is student understanding, not emotional variance.
Summary: In summary, knowing specific goals is vital to appropriately teaching students. This does not preclude significant, higher level learning, as the learning does not come from being surprised, but rather from appropriate assessment. During the year and the final review session, teachers and students will have the opportunity to be exposed more frequently to information, as this is what will effect learning. Cramming would have no part in such a system, since it would prove to be a highly ineffective strategy. And as an added benefit to having a known test date, teachers will be supporting students who would otherwise show false performance results due to test anxiety.
I look forward to hearing my opponent's rebuttals. This should prove to be a very interesting discussion.
Before we begin, I'd like to thank Con for a well thought out response. However, like Con himself noted, his round was building his own case, not refuting mine. Now, while there are points where the cases cross and his building a case is fine, I remind readers that Con hasn't actually refuted anything of mine yet. Hence, my R1 points still stand.
I'll proceed to tackle Con's case by his own argument numbers. Then we'll address other issues.
1. Con appears to be a little off topic here. Con in particular states "Failing to know what an assessment will cover and failing to know the time frame expectation for learning are not conducive to effective teaching or learning."
Note though that students can know what an exam covers regardless of whether or not examination dates are known, making this a moot point. Students are still told what will be in their exams, merely indirectly. I.e., they know taught content is examinable, they know what's in their syllabus, etc. Con seems to agree with me by stating "the point of a test is to test what one knows, and the point of the teacher is to teach the relevant material".
Although elaborated on later by Con, no link is demonstrated here between knowing the time frame and being 'conducive to effective teaching' - a non sequitur.
2. Con misconstrues my case slightly here when he states that I think understanding is absent given known exam dates. I merely believe that known dates increase the lack of understanding, not that understanding is entirely absent.
Also, Con states that any test that could be entirely memorised should be void of any meaning. I agree personally, but the issue is that these tests do have meaning, as set by other parties, be it employers, the government or the school itself. Hence, memorisation does have a form of meaning.
Con's final point here is that a good test will 'measure your understanding ... not your memorisation'. Unfortunately, tests are not always well designed. Furthermore, even if we assume the test is designed to assess understanding (which I stated was best practice in R1), I demonstrated in R1 that memorisation can be used to falsely display understanding, at least for a short time. This short-term knowledge is then later forgotten.
3. Con concludes this point with "That is why students who study more frequently for shorter periods of time, as opposed to cramming, will actually have a more firm grasp and recollection of their work." We agree that cramming is bad. Now, which system more effectively eliminates cramming? Clearly, one can't cram for an exam one doesn't know the date of, so unknown examination dates eliminate the issue of cramming and its problematic consequences.
Con is actually discussing the 'spacing effect' found in psychology - http://en.wikipedia.org.... This effect details how humans more easily learn / remember things when studied a few times over a longer period of time (spaced presentation) as opposed to repeated study in a shorter timespan (massed presentation). However, and this is key - "the benefit of spaced presentations does not appear at short retention intervals; that is, at short retention intervals, massed presentation lead to better memory performance than spaced presentations". See Greene, R.L. (1989). Spacing effects in memory: Evidence for a two-process account. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(3), 371-377.
The main issue is which system better discourages massed presentations. Massed presentations are bad for long-term learning (we want students to remember for the long term, not the short term), but we know that they are more effective in the short term, and hence potentially more effective for remembering test content. Clearly, massed presentation is impossible if an exam date is unknown (see footnote 1).
4. Con acknowledges that test anxiety exists under any system. The question is whether knowledge of exam dates increases such anxiety. It must be noted though that there are no known cases of unknown examination dates, and so we don’t have empirical evidence to refer to on this point, meaning our discussion will be theory based and perhaps incomplete.
The main argument for why unknown examination dates lower test anxiety is that, quite simply, the time students are subject to anxiety is minimal. To put this simply, you can’t dread what you aren’t aware is coming soon. One of the biggest problems with anxiety is how people fuel their own doubts. By continually questioning themselves on their preparation and so forth, they sow the seeds of anxiety themselves. By not allowing one to sow the seeds one uproots the problem.
To quote Con’s own source (see pg. 271 for the quote) on this issue: “there is evidence that high emotionality is associated with declining performance only [emphasis mine] when the individual is experiencing high levels of worry (Morris et al., 1981; Schwarzer 1984)”. Quite clearly, if one knocks out the source of the worry (dread mainly) then declining performance due to high emotionality is no longer an issue.
As Con stated himself, my R1 points weren’t contended and still stand. These alone provide strong grounds to affirm the resolution.
I countered Con’s first point by showing that exam content is still known (to the degree to which it is told) under both systems. I showed how Con’s second contention was misdirected and flawed and that Con was falsely interpreting my position. I showed in my reply to contention 3 that massed presentations were less effective, and so by preventing massed presentations (through unknown examination dates) the better option of spaced presentation was encouraged. Finally, I showed how unknown examination dates actually lower, not increase, test anxiety.
I look forward to Con’s reply.
1 - Even if, by some feat of endurance, students tried to continually do massed presentations for revision out of paranoia, it should be realised that this would actually constitute spaced presentation, as presentations would be occurring at spaced intervals once we eliminate the repeated presentations (i.e. massed presentations; see the deficient processing view - Hintzman et al., 1973) on each day or week, as these are shown to be less effective (check the Wikipedia source for an explanation).
Since Con's last post sums up both his affirmative points and rebuttals of my side, I will be systematically going down his list.
1. Pro asserts that I was off topic by maintaining that knowledge of expectations is important to student success. While I did mention that this was pertinent to content knowledge, which isn't as much a part of this discussion, part of the expectations is the time frame in which one is to know the information. As a very simple analogy, not giving students the expectations of a time frame would be like giving a construction company a job, but no time frame. In order to appropriately assess and prepare for a job, one needs to know expectations. Since we live inside time, content expectations in and of themselves provide us with no meaningful context apart from a time frame to learn this content.
The problem here is that my opponent is still thinking of assessments as a very rote, systematic idea. But we need to draw back from the notion of a paper-and-pencil, multiple-choice test, which is not what most higher-level assessments should be (except the occasional quiz to gather a quick read of the class). On page 35 of my cited source, the document says the following:
"Engagement of students and effective classroom practices. Traditional forms of student assessment and the constraints imposed by limits on time and other resources may place an inordinate influence on the superficial acquisition of skills and facts or any one area in relationship to others. In this way, education systems can gravitate toward readily measured outcomes, instead of more complex but also more desirable outcomes, such as students being able to investigate, create models, or otherwise demonstrate deeper content knowledge (McCarthy, 1994)."
Notice the higher order thinking mentioned here, which is not the result of simple, easily measured outcomes. Rather, good assessments are "more complex," as they produce higher levels of thinking along Bloom's Taxonomy. But higher order thinking tasks like investigating, creating models, comparing and contrasting, etc., aren't typically things done in a one-hour session of class, on the fly. They're done by gathering information, reflecting on the information with peers, being scaffolded by the teacher, and then producing a finished product. Good assessments are the product of a lot of time and goal orientation, not something which is thrown upon someone apart from both content and time specifications.
2. For his second point, pro seems to skirt around the real issue. He agrees with me that exams should measure true knowledge, not just rote memory. And I agree with him that tests don't always measure that. But notice here how the standard is the assessment (it should measure high-level thinking). If tests don't align with the standard you are aiming for (high-level thinking), you change the tests to accurately reflect the standard; you don't bring your goal/standard down to meet the way you test. Are some teachers still going to give bad tests that don't reflect higher-level thinking? Sure. But we aim to go over the bar, not move the bar, despite the fact that we won't always make it. Failure is inherent in nearly any system, and it should not be a reason to lower the standards you're shooting for.
On a side note, my opponent rightly mentions that memorization is important to some businesses. So is art. But higher level thinking is predicated upon foundations such as memorization. If you only memorize, you won't have higher level thinking. But if you can think at higher levels, you must have already mastered the memorization (basic definitions of terminology) of that category to deal with it at those levels.
3. My opponent seems to miss the point that we're talking about different levels of understanding here. Being presented with information is great for memorization. So whether a student crams for a test the next day and gains the effects of mass presentation, or whether one gains memorization ability from spaced presentation makes no difference on the grade. Taking away known dates on exams won't automatically cause teachers or students to use the best practices available. And even if it did, students would take the same lame tests that assess rote memorization.
But when you have appropriate assessments, rote memorization (which is basically what you get from presentations) goes out the window. You can't just cram for the exam. You have to have a very familiar knowledge of the terms, which only comes from repeated, spaced presentations. Changing the dates of exams won't force institutional changes in procedure. Changing the assessments to higher level thinking will.
4. Even pro athletes feel nervous sometimes before a big game. But for anyone who has experienced a similar thing in sports or academia, there is a difference between "Oh $^!#" nervousness, and "I'm nervous, but I got this. I'm ready." In cognitive psychology, the familiarity effect (or repetition principle, mere exposure principle, etc.) is extremely well known. The more exposures one has to a particular situation and process, or anything else, the more comfort or liking is afforded to the individual. So while it may not cure all nerves, the practice of an event certainly reduces anxiety.
This relates particularly well to the notion of spaced presentations. With this concept, one is much more familiar with the information, and has more exposure. But does it also relate to known time? I think it does. If we gain comfort by exposure, familiarity, and preparedness, running exam day through our minds throughout the quarter/month/week can help one to have a set goal that has stability.
Pro tries to counter by claiming that if students don't know the date, they have nothing to worry about. However, regardless of ignorance as to the exact date, students understand the inevitability of the exam. Rather than knocking out worry, it fosters it by making students ever wary of what is to come. I imagine it a bit like early sailors who thought they were going to fall off the edge of the earth. Even though some didn't know about hurricanes, rogue waves, or giant squid, as I do, going into the unknown is often scarier than knowing what you're up against. Take this last sentence or two as anecdotal, but I think the evidence I've cited supports it strongly.
I'm all for making exams more stringent, and free of the problems Pro is trying to address: cramming and rote memorization. However, his solution simply makes cramming harder to do; it doesn't make cramming obsolete, as those fortunate enough to cram the night before the exam would still succeed. Furthermore, those who are good at memorizing would not be tested at a higher level, and would still be assessed at a low standard of learning. It is only by making the assessment live up to its standard of assessing higher-level thinking that cramming truly becomes obsolete, students are actually tested on how well they can manipulate the material, and familiarity and comfort are afforded to all due to the teaching style necessary to successfully complete such assessments. I believe this type of learning is best used in a situation where test dates are known, as this provides students and teachers with goals and stability, and will by no means affect such high-level assessments in terms of outcomes due to massed presentation.
Thank you pro, once again, for an exciting debate.
Thanks to Con again for a thought out response.
Unfortunately, Con has conceded some of my R1 points. While indeed cramming is a key issue at hand, as well as other issues like test anxiety, both of which Con addresses, there were other points to consider. Specifically, Con has not replied to my points on 'More Time' and 'Reduced Focus on Testing', and I urge readers to reread these points. As any replies would be somewhat unfair given my inability to respond to new arguments (rebuttals are separate to new arguments; note that I’m not bringing up new arguments either in the last round), voters should consider these points dropped, which means that if the rest of the arguments are even close then my unaddressed R1 points should be considered enough for the arguments vote.
Also, Con talks a lot about how, even though my system makes cramming harder and less effective, this doesn’t matter because we shouldn’t be giving out tests that promote cramming. Now, while I sympathise and agree somewhat with such ideas, there are 2 main issues. 1, we can’t presume better tests will be made. We’ve got to evaluate the merits of the system given the possibility of bad tests. We’ve got to evaluate using the present as a predominant factor. 2, there’s a small degree (yes, I agree with a far reduced emphasis on essentialism, but not a complete lack of it) to which straight-out knowledge must be tested (like the ability to read, write, add etc.), in line with essentialism http://oregonstate.edu... (to a degree). So, if knowledge needs to be tested then clearly there’s something that benefits from cramming in tests, or tests aren’t optimally designed.
I’ll now proceed to refute Con’s remaining, valid points, again based on his numbering system. I remind readers of my above points.
1. Con gives the analogy of a construction company given a job but no time frame, with regard to the point about exam expectations. However, it’s clear that there are some time limits. Firstly, your yearly exams follow half-yearly exams. Secondly (at least for high school), reports must be given out before the school year’s end, meaning they must be completed before the end of the year. Given that reports take weeks to complete, exams can only run up to a certain date, yet they can’t start too early or they can’t test all the content.
What I’m getting at here is that there are practical limitations that actually frame the exam time; it’s merely that the exams aren’t set in stone. To be more accurate, it’s like giving the construction company a job and saying ‘Have it done after at least 6 weeks, but before 10 weeks are up.’ It’s not that odd to give a completion time period as opposed to a completion date.
Con then goes on to say that exams can’t test higher order skills because of time constraints. However, this is a misconception. For example, an exam can test a higher level of thinking by presuming the preliminary steps were taken in class (like finding out the problem, learning facts etc.) and then the final steps being taken in the exam. This way the time limit is circumvented by ‘adding’ time from regular class lessons to make up the requirements for higher orders of thinking.
2. I addressed Con’s point in my introduction. I want tests that test higher order thinking too, but we can’t presume that when evaluating test dates. You can’t just evaluate ideas within idealistic frameworks – one must evaluate within the current framework.
Here’s the sort of mistake Con is making:
“If tests don't align with the standard you are aiming for (high level thinking), you change the tests to accurately reflect the standard”
Note though that we’re not discussing test design! We’re discussing examination dates. In science one tests an idea by holding other variables constant and only testing a single variable. The same standard should be held here – we don’t change the tests and examination dates being known simultaneously (plus, this presumes tests will be designed better) .
On the sidenote of memorisation being a subset of higher order thinking skills, the problem is that, through poor test design and other means, memorisation is given similar levels of marks to higher order thinking, which means that a rise in memorisation from cramming results in a false showing of higher order thinking. Remember, both my opponent and I agree that unknown examination dates make it harder to cram.
3. Con is basically saying that if both massed and spaced presentation make the student do well on the exam day, then who cares? Well, we want students to remember what they memorise over the long term, and if spaced presentation does this better then we should encourage spaced presentation. Unknown examination dates only allow for spaced presentation, which results in greater retention of content.
Con then goes on about how rote memorisation is made useless by better assessments. Again, I must state that we can’t presume examiners will use perfect test design. I quote Con here:
“Taking away known dates on exams won't automatically cause teachers or students to use the best practices available.”
Precisely! Teachers won’t use ideal methods, as Con states himself. So why is he basing his case against how unknown examination dates make cramming harder by using an idealistic interpretation, which he acknowledges won’t occur?
4. I think myself and Con would agree to disagree here. Really, to resolve this point one would need empirical evidence and results of anxiety levels given known and unknown examination dates.
To briefly reiterate, Con states the difference between nervousness when one has had time to prepare and when one hasn’t (see his 2 contrasting sayings). Con fails to account for the fact that one experiences anxiety for longer given known test dates, however. Furthermore, by knowing the test date one can get quite scared of the test, and one’s own mind can increase one’s anxiety and paranoia. Indeed, I’m sure readers will remember a few times when really it’s been nothing besides one’s own mind that has driven one to some form of emotional state. As I said, one really needs empirical evidence to clear this point up.
Conclusion / Voting
I think that I’ve given a solid case for unknown examination dates to be beneficial. I ask voters to vote based only on what has been said in this debate – I’m sure most people are quite opposed to unknown examination dates or have their own reasons against such a motion, but I ask you to please only consider what is said in this debate as this topic may otherwise invoke prejudice against my position.
Reasons to vote Pro:
• Con dropped most of my R1 points; these points are, by themselves without refutation, enough to secure a win given even a remotely close fight on other arguments.
• Con acknowledges that my system makes it harder to cram (which is bad for long term memory) – one is forced to use spaced presentation, which is proven to be better for long term memory – a positive
• Con keeps basing his responses on idealistic test design, which makes most of his responses somewhat off topic
This, combined with the fact that I don’t believe Con has actually won a point showing why known examination dates are better than unknown dates (we can’t prove the test anxiety point either way), is good reason to vote Pro.
I thank Con again for the debate – he’s put forth solid responses. Con has raised some key points about matters like test design and the like that readers would do well to consider in viewing education. It’s just unfortunate that not all these issues relate to unknown examination dates which is the issue at hand.
1. The round one arguments I supposedly dropped were the ideas that unknown dates would reduce time spent on emphasizing the test, and that a reduced focus on testing was good. First, I feel as though I was very clear about my view of time. I may not have put it in a big, bold heading, but I discussed even in my first post how review and repetition were vital keys to learning, and how this is well known to cognitive psychologists. All that talk we had about spaced versus massed presentation dealt with this issue of time.
On the second issue, my opponent's argument was once again addressed. While his heading argues for a reduced focus on testing, his argument below that heading is actually focused on the issue of data dumping, which we discussed to death all throughout our debate. In his last section under the reduced testing section, my opponent comes back to mention the issue of reduced testing, which had nothing to do with his previous paragraph's formulation of the argument, and stated that we weren't really here to talk about whether reduced testing was good or bad. His emphasis was that data dumping was bad, and this was a result of the emphasis on testing. The data dumping was what inflates the test scores and offends my opponent, not the "side issue" of a focus on testing.
2. I did say that pro's system would make it harder to cram. But, I also emphasized that this was a short term and shortsighted solution. Cramming would still work if students timed their studying just right. If the strategy of cramming still worked, flashing lights should go off telling you the problem isn't resolved. The tests haven't changed, and higher level, long term thinking isn't being tested. While I would concede to pro that his solution would most likely help reduce grade inflation as a result of cramming, the problem pro is trying to solve is not grade inflation in general. He is bothered by the lack of knowledge retention and tests actually reflecting long term, in-depth understanding. His solution doesn't even come close to addressing this, since the assessments aren't changed at all. Pro's proposal throws the idea of Bloom's Taxonomy and universally accepted knowledge of higher level assessments out the window for a quick fix on paper. But reducing grade inflation on paper doesn't increase intellectual retention.
Pro does try to save his system by saying that his will promote spaced presentation. However, pro is missing two key points. First, as mentioned above, he's not changing the assessment. Any exams taken will still only measure basic knowledge that is not reflective of long term understanding. Secondly, if pro is advocating spaced learning, he seems to agree that review and practicing for tests is a good thing, as spaced learning is an integral aspect of higher level examinations, not rote memory exams like he is putting forth. Or maybe pro is assuming these kids are going to go home and study every day. If he goes to a school where kids do that, sign me up to teach there.
3. I may be a bit idealistic, but that's the point of everything we do. When a mechanic fixes a car, his ideal goal is for it to work, and he's going to use the means he needs to make it work. As educators, we know what works. Spaced learning, higher-level thinking, goal orientation, and diverse assessments are vital to meaningful learning. Is every teacher going to teach appropriately and in line with these standards? Of course not. But what happens in the rest of the world when people know what to do to fulfill their jobs, and don't do it? Theoretically, they either get better or get fired. Not all teachers will teach correctly, and not all teachers will get caught teaching poorly. Some will suck for thirty years, until they retire. But I find it preposterous for pro to argue that I'm right about my ideals, but they're too good to be true. Things will never change if you don't hold people to standards and work to fix things, which is evident in pro's very proposal. He does not advocate solving much of anything, but seeks to minimize the damage of a broken system he aims to perpetuate.
In conclusion, changing the assessment to test higher level thinking is the only way to ensure that you're actually testing for long-term, meaningful knowledge. Pro's system doesn't test for the knowledge he desires. And where meaningful assessment exists, common themes of spaced presentation, known goals, and diversity are present. You just don't see that in the monochromatic assessments and teaching that revolve around current, multiple-choice, memory based assessments. Once again, I appreciate pro's desire to train youth so they have meaningful knowledge. I respectfully disagree with his solution, but I admire his vision.
2 votes have been placed for this debate.
Vote Placed by F-16_Fighting_Falcon 2 years ago
Total points awarded: Pro 3, Con 0
Reasons for voting decision: See comments.
Vote Placed by imabench 2 years ago
Total points awarded: Pro 1, Con 3
Reasons for voting decision: I was a little worried about how the con initially chose not to respond to the pro's counter-arguments (a risky move in any debate, which dragged part of this debate off topic, costing con the conduct vote); however, he brought up the points about test anxiety, nervousness of an unknown exam date, and other arguments that were far more convincing than the pro's arguments. Very good debate :)