Experimental Design B/C
Posted: August 4th, 2018, 11:37 pm
dxu46 wrote:
> Big rules change...thoughts?
> EDIT: also, we can't make up data anymore :/

Smh, you shouldn't have been doing that in the first place.
dxu46 wrote:
> Big rules change...thoughts?
> EDIT: also, we can't make up data anymore :/

I think it is a step in the right direction. Under the old system, teams were incentivized to begin writing the analysis sections etc. before all the data had been collected, which defeats the point. I also like the new checklist format better - it will reduce the number of math and totaling errors.
nicholasmaurer wrote:
> I think it is a step in the right direction. Under the old system, teams were incentivized to begin writing the analysis sections etc. before all the data had been collected, which defeats the point. I also like the new checklist format better - it will reduce the number of math and totaling errors.

I have heard that MIT in one of the more recent years was different. But in general, I agree. This sort of criticism of the event seems to have gotten progressively more common over the last few years.
Unome wrote:
> I have heard that MIT in one of the more recent years was different. But in general, I agree. This sort of criticism of the event seems to have gotten progressively more common over the last few years.

However, I am still skeptical of the event in general. One primary concern is that there is often little to distinguish between the top teams. At the Solon HS invitational last year, the best fifteen teams were separated by less than 5 points. Top teams memorize a framework that covers all of the rubric requirements and simply modify it to fit a different simple experiment each tournament. If you read their XPD write-ups from different tournaments, they are remarkably similar and formulaic.

Therefore, I fail to see the intrinsic value of XPD with the current format; real scientific articles (and academic lab reports) vary much more widely to meet the needs of the experiment/topic/field. Because teams simply regurgitate similar write-ups each time, the current structure of this event does little to assess students' understanding of even the most basic concepts of experimental design: study types, biases, statistical analysis, statistical power/sample size, research ethics, etc.

I wouldn't remove XPD (which would require Inquiry to find a replacement) - I would reformat it. Rather than have teams perform an experiment, I would suggest breaking it up into sections. (Then again, do we really want whatever else Inquiry would give us instead?)
nicholasmaurer wrote:
> I think it is a step in the right direction. Under the old system, teams were incentivized to begin writing the analysis sections etc. before all the data had been collected, which defeats the point. I also like the new checklist format better - it will reduce the number of math and totaling errors.

I agree that this is a good idea. Also, according to soinc.org, the point total is 110, not the 106 listed in the rules.
nicholasmaurer wrote:
> However, I am still skeptical of the event in general. One primary concern is that there is often little to distinguish between the top teams. At the Solon HS invitational last year, the best fifteen teams were separated by less than 5 points. Top teams memorize a framework that covers all of the rubric requirements and simply modify it to fit a different simple experiment each tournament. If you read their XPD write-ups from different tournaments, they are remarkably similar and formulaic.

I completely agree with this point, although to me, it seems like the rules leave the tiebreak procedure purposely vague so that competitors actually have to go above and beyond instead of doing a "fill-in-the-blank" style write-up, and so that judges can tell which teams are clearly more skilled and have put in more effort. I understand what you mean, and this could be remedied with more sections, although I don't know how teams would manage to finish more than what is already there.
nicholasmaurer wrote:
> Therefore, I fail to see the intrinsic value of XPD with the current format; real scientific articles (and academic lab reports) vary much more widely to meet the needs of the experiment/topic/field. Because teams simply regurgitate similar write-ups each time, the current structure of this event does little to assess students' understanding of even the most basic concepts of experimental design: study types, biases, statistical analysis, statistical power/sample size, research ethics, etc.

IMO, this event is more about partnership, cooperation, and scientific procedure than about actually experimenting. Like, don't we already know that if you drop a ball from a tall height, it will rebound? Also, teams are given only 50 minutes to write so much that they need extreme teamwork to finish both the experiment and the write-up. The new format seems to emphasize this part, since you have to use your time wisely.
Jacobi wrote:
> I just realized that you can load the rubric into your programmable graphing calculator.
> No more rubric memorization!

Only division C, though.