Slarik wrote:
The Eviscerator wrote:
Slarik wrote: I may be remembering incorrectly, but I think someone said that in NC, they tell you the experiment instead of giving you a topic? Can anyone from NC clarify that?
In NC they've tried this different setup (just this year for regionals) where teams are given stations and each one has scenarios and asks about parts of an experiment, from dependent/independent/control variables, to hypotheses, to data collection, to conclusions. It's really quite bizarre, like a mix between Technical Problem Solving (which NC doesn't run for some reason) and regular Experimental Design. I personally don't like it that much, but it's only been tried once and I didn't do it.
That actually sounds kind of interesting. I wonder if Phenyl (can't spell the rest) would like seeing something like that. You're saying that basically they give you various experiments and have you write a hypothesis, or the variables, or a conclusion, but you don't actually have to do the experiment (thus eliminating that not-very-useful and time-consuming step that some people skip anyway), right? Intriguing. Thank you!
Although I would be furious if I got that event in competition – given that it's completely unrelated to the published rules, to the point that it's essentially a different event – it's potentially sort of interesting as a complete overhaul of Experimental Design... I mean, I guess each station could illustrate a particular experimental design challenge to work through in some way (e.g., you can't collect data directly because of some restriction, so how will you modify your procedure to get around it? Maybe you have to use a small sample size because that's all that's available, so how does that affect the analysis of your results? Given these past experimental results and this problem statement, what's a valid hypothesis? Or even, given these past experimental results, state one or more experiments you would do to further this line of research?).
I don't know if that's how they actually did it, but that's what I thought of upon reading your description. It might involve more understanding of statistics, and maybe a general idea of various data collection methods in different fields (they might have to limit the scope of the event to a general area of research, or else that could be problematic), but... I actually like this idea quite a lot (if it went into that kind of depth, as opposed to being junior high science fill-in-the-blank).
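The small-sample-size station is the kind of place where the statistics could get real depth. As a quick sketch of what a team might be asked to reason about (invented numbers, Python with scipy – not anything NC actually used): with fewer measurements, the confidence interval around a measured mean widens, and a team should be able to articulate that consequence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Pretend this is the "true" process being measured (numbers invented).
population = rng.normal(loc=10.0, scale=2.0, size=10_000)

for n in (5, 30, 200):
    sample = rng.choice(population, size=n, replace=False)
    sem = stats.sem(sample)                # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
    print(f"n={n:3d}: mean = {sample.mean():5.2f} +/- {t_crit * sem:.2f}")
```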
nejanimb wrote: Phenylethylamine, your long rant about ED is exactly the complaint I would have made about the event right after I finished my SO career. Since then, though – particularly after having taken a class this past semester called Experimental Design – I think I now disagree.
The way my ED team did the event was to, as you mentioned, always look for a quantitative IV and DV. I thought that setups that encouraged categorical designs were mistakes by the supervisors, and always looked for a way to shoehorn whatever materials and topic they gave us into that quantitative model. Like you, my team had something very near a template for experimental design, and followed it closely every time, and we were quite good at this scheme. This method was met with interestingly inconsistent results: we were always near the top at regionals, but never won; always won first at states; and never did too well at nationals.
Learning more about experimental design though, I've learned that categorical designs can be used properly and generate powerful results - having a qualitative IV does not, as I thought it did, indicate bad science.
Give me some credit; I did say
Phenylethylamine wrote: A categorical experiment, while often a valid and useful type of experiment in real life[...]
I don't object to categorical experiments in general (they still don't fit very well in the SciO Experimental Design rubric – although if a supervisor is looking for that kind of experiment, it may still score better than one that hews more closely to the rubric). However, I think one mistake kids often make in designing experiments is using qualitative variables when they could or should use quantitative ones ("when I add A to concentrated B, it turns red; when I add A to dilute B, it turns pink" tells you something... but if you have solutions of known concentration and a colorimeter, you can get more information and establish a quantitative trend), and including categorical experiments in the event now confuses that issue ("when I add A to B, it turns red; when I add A to C, it turns pink" is something very different; you can still use your colorimeter, but you're comparing apples to oranges – which is not inherently invalid, but does preclude establishing trends).
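To make that distinction concrete, here's a minimal sketch in Python (all numbers invented): the quantitative version supports a fitted trend you can interpolate from, while the categorical version stops at a group comparison.

```python
import numpy as np

# Quantitative IV: known concentrations of B; DV: colorimeter reading.
concentration = np.array([0.1, 0.2, 0.4, 0.6, 0.8])   # mol/L (made up)
absorbance = np.array([0.11, 0.23, 0.42, 0.58, 0.81])  # arbitrary units

# Numeric IV and DV let you fit a trend (and interpolate from it).
slope, intercept = np.polyfit(concentration, absorbance, 1)
print(f"trend: absorbance = {slope:.2f} * concentration + {intercept:.2f}")

# Categorical IV: solution B vs. solution C. Each group is still measured
# quantitatively, but "B" and "C" have no numeric order, so the analysis
# stops at comparing groups -- no trend, no interpolation.
readings_B = np.array([0.42, 0.44, 0.41])
readings_C = np.array([0.29, 0.31, 0.30])
print(f"mean(B) = {readings_B.mean():.2f}, mean(C) = {readings_C.mean():.2f}")
```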
nejanimb wrote: And there are many different ways to handle categorical experiments, and many of the things I thought I was doing right in experimental design I really wasn't (misunderstanding how to handle standard error, improper sources-of-variation analysis, etc.). And I'm no longer convinced that a supervisor giving you strong hints about what experiment to do ruins the purpose of the event: for instance, for my final project for this experimental design class, I designed an experiment to determine the most delicious cookie recipe, given different flavors of chocolate chips, different colors of M&Ms, different heat settings on the oven, and a few other things (this is what you can all look forward to in grad-level statistics classes!). Back in my SO days, I would have been horrified by that setup for an ED event (notwithstanding the issues of baking cookies in an hour) – now I understand that not only can very powerful design methods be employed here, but there are also many different ways to mishandle this seemingly straightforward experiment (and also many ways to do it well!).
That sounds fantastic. Actually, that sounds like a more rigorous version of almost all of my elementary school science fair projects (one year we made eight mini-batches of chocolate chip cookies, each missing one of the ingredients, and compared the results).
But isn't it still the case that if you want quantitative results out of that experiment (i.e., this cookie recipe is this much better than this other cookie recipe), you need to quantify deliciousness? You can certainly run your experiment categorically – if cookies with red M&Ms are tastier than ones with blue M&Ms, that tells you something despite the fact that "blue" and "red" can't be quantitatively related in this context – but unless you're content with a partial ordering, you would need to quantify the impact of the M&M color relative to the impact of the oven temperature, no? You can get the optimal temperature and the optimal M&M color without a quantitative measure of deliciousness, but if you're trying to figure out if a cookie with red M&Ms baked at 350°F will be better than one with blue M&Ms baked at 450°F...
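Here's a toy version of that point (all scores invented, and certainly not nejanimb's actual design): once deliciousness is a number, every cell of the factorial is comparable to every other cell; without one, you only get the partial ordering.

```python
from itertools import product
from statistics import mean

# Hypothetical taste-panel scores (1-10), three cookies per cell of a
# small two-factor full factorial: M&M color x oven temperature.
scores = {
    ("red", 350): [8, 7, 8],
    ("red", 450): [5, 6, 5],
    ("blue", 350): [7, 7, 6],
    ("blue", 450): [6, 6, 7],
}

# With a numeric response, any two cells are directly comparable --
# including red @ 450°F vs. blue @ 350°F, i.e. across factors.
for color, temp in product(["red", "blue"], [350, 450]):
    print(f"{color} @ {temp}F: mean score {mean(scores[(color, temp)]):.2f}")

# A purely categorical outcome ("red beats blue", "350 beats 450") yields
# only a partial ordering; it can't rank red @ 450°F against blue @ 350°F.
```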
nejanimb wrote: That said, I don't think ED is a perfect event as it is. A few things I think would help greatly:
- Allow students to bring graphing calculators. Only permitting scientific calculators greatly limits the depth of statistical analysis students can do, and that really only detracts from the event.
- Significant changes to the rubric. There isn't nearly enough room given for holistic analysis of the experiment chosen, degree of difficulty, etc. In fact, I might suggest that there ought not be a published rubric that is supposed to be used for all competitions; instead, a selection of good labs could be published. Supervisors should have more discretion to dictate where the event goes. This is definitely scary and allows for the possibility of bad supervisors making things even worse, but better instructional materials could mitigate that (a sample rubric, suggestions for supplies given, etc.).
- Allow supervisors to include a supplemental test if they want. I've seriously thought about doing this if I'm given the chance to run ED at one of the VA events, and announcing through the state FAQs that I intend to do so. There are so many interesting things you can ask (and should be encouraged to learn!) about experimental design that can't really be captured in the current write-a-lab format. That does make the time crunch even bigger, but having more ways to break ties or differentiate teams is almost never a bad thing.
Just some thoughts. I don't think people study enough for ED – I know I didn't. I did a ton of practice, but looking back on it, I don't know that I ever even bothered to research experimental design as a field, and I didn't go learn a lot of new material about the subject. The event would have been better if I'd had a reason to do so.
You bring up some interesting points. I do think the event would benefit from actually going into the statistics – since that's such a huge driving force in the design of actual experiments – rather than having statistics tacked on as a sort of afterthought.
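The most basic way statistics drives design (rather than trailing it) is a sample-size calculation done before any data exists. A minimal sketch, using the standard normal approximation for a two-sample comparison – the function name and the numbers are mine, purely for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group needed to detect a mean
    difference `effect` against noise `sigma` (two-sided test, equal
    group sizes), via the standard normal approximation."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / effect) ** 2)

# e.g., detecting a 0.5-point difference in a 1-10 score with sigma = 1.0
# at 80% power requires about 63 samples per group:
print(n_per_group(effect=0.5, sigma=1.0))  # prints 63
```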
Based on my experiences with Protein Modeling – an event with a very, very open-ended section (the prebuild model) for which event supervisors are provided with excellent training materials and open-ended rubrics directly from MSOE – I would have to say that this could be great, but that even with really good training materials, there will still be event supervisors who manage to take everything they're given at face value and totally ignore the open-ended aspect of the event (which disproportionately penalizes the top teams). Given that Protein is still one of my favorite events (largely because of this open-endedness), though, I'd say the good of such a scheme outweighs the bad by a long shot.
What do you think of this modified version of the event that was used in NC (or, for that matter, my interpretation of it, which might be somewhat different from how it was actually run)?