builderguy135 wrote: ↑Sun Jun 06, 2021 1:47 pm
Event Feedback
For each event, I'll be ranking it based on difficulty (with higher being more difficult), length (with higher being longer), and overall (with higher being better). Difficulty and length are more of my personal opinions about the event, and they don't necessarily reflect the overall quality of the test - I prefer long and difficult tests, but that does not mean that easier and shorter tests cannot be good quality.
Next, I'd like to note that I only seriously took 6 of these tests - obviously, I don't have experience with most events on the event slate. Thus, I'll include those events at the bottom without ratings, since I can't reliably judge the difficulty and quality of their tests.
With that being said, here are my event reviews:
Circuit Lab:
Difficulty: 2, Length: 8, Overall: 4
I used this test to study/prepare for the AP Physics exams. While I obviously don't have much circuits knowledge, I could tell that many questions were somewhat unoriginal, and they were all surface-level in terms of the depth of understanding needed to answer correctly. A very large portion was history (name the scientist, etc.), and two questions near the end of that section stood out to me. They asked for the competitor's opinion on which scientist from a list was the most influential, based on their developments in a certain area, with specific examples to back up the reasoning. Not only were these questions extremely overweighted (10 points), but they were also completely subjective, which is not what you want in a good test. Questions must be clear and to the point; asking the competitor to write a paragraph about history on a science exam isn't what we should be doing. I would change this test by evening out the point values (no 1-point and 50-point questions on the same test, please!) and removing all subjectivity so that questions are unambiguous. This test has a lot of potential, but the glaring issues with point values and unclear questions make it somewhat frustrating to take.
Codebusters:
Difficulty: 5, Length: 8, Overall: 6
I wrote the bot that made this test, so obviously I'm slightly biased, but I thought this test was slightly above average. The lack of aristocrats, coupled with the easy patristocrats, is what made me give it a 6/10. I believe this test had 4 aristocrats and 3 patristocrats - that is not a good balance. There should be at least double, if not triple, the number of aristocrats as patristocrats. For a one-person test, 3 patristocrats are far too many, since the tests Codebuilder generates are made for three people at a national level. Overall, I would improve the question balance (more aristocrats, fewer patristocrats) and make the test much shorter (15 questions is more reasonable).
Edit: It turns out that the person who generated the test used preset 1, which is meant as a debugging preset…
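The balance rule above (at least double, if not triple, the number of aristocrats as patristocrats) can be sketched as a simple check. This is purely illustrative - the function name and ratio parameter are my own, not part of the actual Codebuilder code:

```python
# Hypothetical sketch, not Codebuilder's real logic: check that a generated
# test keeps the aristocrat:patristocrat ratio at or above a minimum.

def balanced(num_aristocrats: int, num_patristocrats: int, ratio: float = 2.0) -> bool:
    """Return True if there are at least `ratio` aristocrats per patristocrat."""
    return num_aristocrats >= ratio * num_patristocrats

# The SMEC test described above had 4 aristocrats and 3 patristocrats.
print(balanced(4, 3))  # False: 4 < 2 * 3, so the balance is off
print(balanced(6, 3))  # True: exactly double
```

Under this rule, the 4-to-3 split on this test fails even the weaker 2:1 threshold, which is the imbalance the review is pointing at.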
Cybersecurity:
Difficulty: 3, Length: 6, Overall: 6
I thought this test was great in terms of topics covered, although some of the question quality could use some work. With this test being open-internet, I expected many questions to be more difficult; it felt better suited to a closed-internet format. As for length, I only put it at a 6, despite it being the longest SMEC test by number of questions - many questions were rote memorization that took no more than a few seconds. One other suggestion I'd make is to clarify the questions; some didn't make any sense or were outright incorrect. With that said, I do appreciate that the test covered almost every topic in depth, and it is clear that the exam author put a lot of effort into writing it.
Felinology:
Difficulty: 7, Length: 5, Overall: 8
Great test. I don't regularly do bio ID events, but I thought the question quality was great, requiring more than a simple Google search. Questions were difficult even for an open-internet exam.
Sounds of Music:
Difficulty: 8, Length: 9, Overall: 9
This test was easily my favorite of all the tests I took this month. It was divided into topics, with question difficulty increasing toward the end of each topic, which gave an extremely good spread of easy/medium/hard questions. With 100 questions, some with several parts, I obviously wasn't able to finish, but I did get a look at the entire test while guessing on the remaining multiple-choice questions in the final few minutes. Almost all of the questions were in-depth; some were just "plug-and-chug", but most required a deep knowledge of the topic. The music theory section stood out to me - it was clear that the exam author was very knowledgeable about the theory. Questions were weighted well overall, although I'd argue that some were slightly overweighted - some 4- or 5-point questions could have been reduced by 1-2 points, and, similarly, some questions could have been worth slightly more. With that said, this was a great test, and it's definitely a great resource for competitors when the event rotates back in.
Write It CAD It:
Difficulty: 8, Length: 9, Overall: 6
First of all, I'd like to thank Skythee/kh.aotic for being a great WICI partner! While this was my first time doing WICI, I appreciated the last-minute cram session and advice from JC, so hopefully we didn't do too poorly. As for the test, I found it very difficult. It was obviously impossible to finish, but one suggestion I'd make is to increase the number of "easy" pieces, which would give a better distribution of scores. With that said, this event was fun, and the instructions were very clear, which is why I gave it a 6/10.