I've had the (dis)pleasure of supervising this event in a tournament context now, so I wanted to write about the experience. Note that I last wrote about this event last spring, here, before it was in rotation, and I largely stand by my thoughts there.
Nearly all of the Inquiry & Nature category events need work, but this one makes both me and the competitors extremely nervous. Technology-related impediments aside, it all comes back to how the event was designed and how it is
scored. As it stands, there are quite a few peculiarities in the scoring rubric worth noting. First, as with Exp Design's rubric, point values are needlessly doubled from what they could be, opening up avenues for extra subjectivity (which is best minimized). That aside, some of the more objective parts of the rubric are poor. Sound presence is far too heavily weighted. All prepared teams will include sounds (and their own sounds or creative use of instruments if they seek full credit), but for teams that didn't bother learning straight from the rubric (and, after my experience, that's many of the top teams in Illinois, especially AA division ones), an easy five points or so go to any team that includes any sound at all, regardless of whether its inclusion is appropriate. I had one team that played an obnoxious ping nonstop. They earned an easy five points there just for doing it, my only note being that the sound was present but poorly executed. A team that includes original sounds and employs them well might earn seven or eight points. Even if a game is sloppily written, plays averagely, and barely addresses the scientific topic of the day, that margin is enough to get ahead. It should not be. The ratings for play balance and overall impression are absurd, too. Professional critics struggle to reach consensus on questions like these, and here I am sending my team off into the abyss, hoping that their event supervisor's idea of what constitutes an original, appropriately operable game somewhat aligns with mine.
The most interesting bit of the rules is the science topic. I loosely interpreted the bit in section 2 about event supervisors providing a broad topic to mean chapter titles: take texts from the major categories of science and copy the table of contents. To me, that yields a good set of broad science topics (after you cut out things like biotech, acid-base equilibria, and geologic time...no thank you). Anything more specific (friction, ionization, predation) is no longer broad and open to much game-designer creativity, and anything broader (cells, chemical reactions, density) nearly defeats the purpose of having a topic at all. The topic I ended up choosing was cellular membranes and transport. It was broad enough to allow a wide array of (interesting!) game options while still providing focus. Now, that wasn't the loose part of my interpretation. The loose part was that I gave participants a single-sided summary sheet about the topic, in case they learned about it long ago and forgot, or in case they'd never studied it at all. I cannot assume other event supervisors will do this for my team down the road, and that makes me nervous, because this event is not supposed to assess whether participants know a little bit about everything (hello, Five Star/TPS/Pic This). At any rate, even with information on hand to ensure that the science topic was adhered to, I have never seen biology so butchered*. In the end, I did get a few games with impressive science, but not nearly as many as I'd hoped. Most of the science was shallow, and part of that is that participants do not have the time to design and build a good game, let alone one that addresses the topic in any way that isn't completely cosmetic.
*Chemistry gets butchered regularly by participants in B division. If I have to hear about "H-C-eye" one more time...
A few teams had technology difficulties, though I suspect operator error for one of them, as they managed to freeze Scratch on two different computers within a short timeframe.
Anyway, this is the point where I dredge up my two wishes from that last post. On-site builds have merit, but 50 minutes is not enough time for the demands of this event for the vast majority of participants. Sure, you'll get a scoring distribution, but it's not a meaningful competition as-is; it's really shallow. I'd almost rather there be a limited number of topics selected for the year at each level of competition. Teams pre-build their, say, four games on the listed scientific principles (putting in as much or as little time as they feel they need, which goes for any building event), attempting to build games that adequately address the topics while including all of the necessary components per the rules (you would be amazed how many teams did not have a UC sprite). These are brought in, and one or two are scored, the supervisor's choice being released on the day of the competition, post-impound. Something like that would accomplish the goals of the event better.
After awards, I was approached by a young man who looked distraught that he hadn't won. He nicely asked me how he could have improved his game for next time. While I won't divulge the actual advice I gave him, what I was really thinking was, 'You needed four more hours.' I'm really at the point of "Game Off" on this game-off, and the sad part is that I may have to supervise it one more time this season.
In the interest of full disclosure, I did receive two games that I rather liked (the top scorer in each division). The top-scoring game overall took first place in junior varsity: an osmosis maze where the cell tries to avoid lysing. The science was sounder than, well, everyone else's (because osmosis is easy to model!), and the game played great. I was impressed, though I didn't get the chance to communicate that to the team. Even so, they could have done much better as far as the rubric was concerned.
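For anyone wondering what "osmosis is easy to model" means in game terms: a toy tonicity rule is enough to drive gameplay, with water flowing toward the higher solute concentration each tick and the cell lysing if its volume crosses a threshold. Here's a minimal sketch in Python (the names, constants, and structure are all my own illustration, not anything from the winning team's Scratch project):

```python
# Toy osmosis model for a game loop: water crosses the membrane toward
# the higher solute concentration; the cell lyses if it swells too far.
# All constants here are illustrative, not tuned values from any real game.

LYSIS_VOLUME = 2.0   # relative volume at which the cell bursts
RATE = 0.1           # how much water crosses the membrane per game tick

def step(volume, solutes_inside, conc_outside, rate=RATE):
    """Advance one tick; return (new_volume, lysed?)."""
    conc_inside = solutes_inside / volume
    # Water follows the gradient: hypotonic surroundings (lower outside
    # concentration) push water in, swelling the cell; hypertonic
    # surroundings pull water out, shrinking it.
    volume += rate * (conc_inside - conc_outside)
    return volume, volume >= LYSIS_VOLUME

# Example: a cell dropped in pure water (hypotonic) swells every tick
# until it lyses -- exactly the hazard an osmosis-maze player dodges.
volume, lysed = 1.0, False
while not lysed:
    volume, lysed = step(volume, solutes_inside=0.5, conc_outside=0.0)
```

A player-facing version would just swap the outside concentration per maze zone, which is roughly why this topic models so cleanly compared to most of what I saw.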