Re: Poorly Run Event Stories
Posted: February 18th, 2018, 7:21 pm
Yesterday, at SOUP Invitational, the room setup for the different events was inconvenient, which is understandable given the limited space available at the college. However, the herpetology exam I took wasn't just poorly run; the test itself was a poor representation of the event.
Standing outside the event room, I wondered why it was starting 15 minutes late. When I walked in, I could see why.
The test was out of 585 questions. The time per station was fair, but given the room (a lecture hall filled with seats that had tiny tables attached), the space for writing and propping up a binder was far from satisfactory. On top of that, the way the stations were arranged made it hard to find the next one, so teams lost extra time while rotating. Since we were allowed to start as soon as we reached the next station, some teams effectively started earlier than others: a team that happened to begin in a convenient spot never had to run around half the room figuring out where its next station was. A good fix would be to have everyone start and stop each station at the same time.
On top of that, 15 questions per station (often covering multiple specimens from the list) is too many for one station. There was also an extremely high percentage of trivia questions. Writers for Herpetology exams should ask more about a specimen's diet, habitat, and conservation status, not about the Greek god associated with a particular specimen or the name of a Chinese divination practice that used tortoise bones. If even the top team scores below 25% on the test, the scores are all compressed at the low end, so the test can't meaningfully separate teams. It's not an accurate representation of a team's skill level.
Another issue was the first part of the test, a 96-question section with both multiple choice and short answer. We were given three stations in total to work on it (seven and a half minutes). But since the questions came in a packet, teams wasted time flipping through it just to find where they had left off. On top of that, many teams likely didn't finish even half the questions in that amount of time.
Although SOUP should be a relatively difficult tournament, this test went beyond that standard, and not in a good way. The sheer number of questions, the poor room setup (though that may have been unavoidable), and the types of questions asked didn't accurately represent the event.
Hopefully the test writer will consider these comments when writing their next test, or the test for this invitational next year.