
Re: Experimental Design B/C

Posted: December 10th, 2019, 9:40 am
by TheMysteriousMapMan
IHateClouds wrote: December 10th, 2019, 7:28 am
MissAmagasaurus said:
I can't help you with 2 since I don't do the math sections, but I can help with 1!

A controlled variable is something in the experiment that you are not changing, but could change in a different experiment. For example, suppose you were doing an experiment on dropping a ball from different heights. One of your controlled variables could be the weight of the ball, since you aren't changing it in this experiment, even though you could change it in a future one.

A constant variable is something that does not change and that you will most likely never be able to influence. I usually use Earth's gravity as a constant.

A simple way to think about it: "What could I change in a related experiment?" versus "This variable cannot be changed."
I interpret this differently. Based on the way the rules were written this year, it looks like the controlled variables are the constants, but the control is the standard of comparison :?:
IHateClouds, I would argue that CVs and constants are not the same (they are scored differently in the rubric). Here's my reasoning, with the example of dropping something with a parachute attached:
CV (Controlled Variable): Something that you can actively change, but you are setting it at the same value across all trials for the purposes of the experiment (i.e. you could consistently change the value in another experiment). This would be like drop height or the mass of the object you drop.
Constant: Something that is by nature unchangeable and thus has to be a particular value. This would be like gravity: it is still consistent, but you can't actually change it.
Control/SOC: A set of trials that is used to provide a baseline. This would be like just dropping the object without a parachute in order to provide a reference point.

This is how I interpret things, so I could be wrong, but I would definitely think your CV and your constants are different since they are scored differently on the rubric.

Re: Experimental Design B/C

Posted: December 10th, 2019, 12:02 pm
by IHateClouds
TheMysteriousMapMan
IHateClouds, I would argue that CVs and constants are not the same (they are scored differently in the rubric). Here's my reasoning, with the example of dropping something with a parachute attached:
CV (Controlled Variable): Something that you can actively change, but you are setting it at the same value across all trials for the purposes of the experiment (i.e. you could consistently change the value in another experiment). This would be like drop height or the mass of the object you drop.
Constant: Something that is by nature unchangeable and thus has to be a particular value. This would be like gravity: it is still consistent, but you can't actually change it.
Control/SOC: A set of trials that is used to provide a baseline. This would be like just dropping the object without a parachute in order to provide a reference point.

This is how I interpret things, so I could be wrong, but I would definitely think your CV and your constants are different since they are scored differently on the rubric.
Yeah, I definitely agree with you! I tripped over my words in my explanation...
Thanks for clearing that up!

Re: Experimental Design B/C

Posted: December 10th, 2019, 12:47 pm
by TheMysteriousMapMan
IHateClouds wrote: December 10th, 2019, 12:02 pm
TheMysteriousMapMan
IHateClouds, I would argue that CVs and constants are not the same (they are scored differently in the rubric). Here's my reasoning, with the example of dropping something with a parachute attached:
CV (Controlled Variable): Something that you can actively change, but you are setting it at the same value across all trials for the purposes of the experiment (i.e. you could consistently change the value in another experiment). This would be like drop height or the mass of the object you drop.
Constant: Something that is by nature unchangeable and thus has to be a particular value. This would be like gravity: it is still consistent, but you can't actually change it.
Control/SOC: A set of trials that is used to provide a baseline. This would be like just dropping the object without a parachute in order to provide a reference point.

This is how I interpret things, so I could be wrong, but I would definitely think your CV and your constants are different since they are scored differently on the rubric.
Yeah, I definitely agree with you! I tripped over my words in my explanation...
Thanks for clearing that up!
Glad I could be of help!

Re: Experimental Design B/C

Posted: January 3rd, 2020, 4:03 pm
by prickly.pear
Do the units for the quantitative data and material list have to be Metric, or can they be Imperial?

Re: Experimental Design B/C

Posted: January 3rd, 2020, 4:39 pm
by Scrambledeggs
Okay, so I got put in this event for an invitational. Any recommendations for how to prepare?

Re: Experimental Design B/C

Posted: January 3rd, 2020, 8:04 pm
by SilverBreeze
Scrambledeggs wrote: Okay so I got put in this event for an invitational, any recommendations of how to prepare?
The first priority would probably be to memorize and understand the rubric, although I am not an expert in this event.

Re: Experimental Design B/C

Posted: January 7th, 2020, 8:45 am
by terence.tan
On the official scioly website, the Experimental Design reporting packet has been updated. I can't seem to find what has changed.

Re: Experimental Design B/C

Posted: January 8th, 2020, 5:13 am
by glin1011
Hi all! I don’t know if someone’s asked this question yet, since I’m still a bit new to the forums. I’m a student coach for Experimental Design Div B and a data analyst for our Div C team, but I’m having trouble doing/describing the C.E.R. format that replaces the Data Analysis section in Part 2 of the rubric.

Can anyone help break it down for me in simpler terms? I understand how to point out specific data points and outliers in your data, but I don’t know how to place them properly in the new reporting packet format, and it’s been really hurting my Div B teams during their competitions.

Thanks so much in advance!

Re: Experimental Design B/C

Posted: January 8th, 2020, 10:55 am
by SPP SciO
glin1011 wrote: January 8th, 2020, 5:13 am Hi all! I don’t know if someone’s asked this question yet, since I’m still a bit new to the forums. I’m a student coach for Experimental Design Div B and a data analyst for our Div C team, but I’m having trouble doing/describing the C.E.R. format that replaces the Data Analysis section in Part 2 of the rubric.

Can anyone help break it down for me in simpler terms? I understand how to point out specific data points and outliers in your data, but I don’t know how to place them properly in the new reporting packet format, and it’s been really hurting my Div B teams during their competitions.

Thanks so much in advance!
EDIT - Before reading what I wrote below, this was posted as a FAQ a few days ago:
On the Experimental Design checklist, does "Statistics claim" refer to claiming what the statistics are, what you are deducing from the statistics, or choosing which statistics to base your trend and interpretation of the data on?
A claim is an assertion of the truth of something, typically one that is disputed or in doubt. You will then provide evidence, in this case statistics, that back up your claim. In the reasoning section, you will explain how the statistics back up your claim.

I find this section frustrating: it is worth a large number of points, but it seems redundant. Also, the way it's worded can be a little tricky - usually I've seen the CER model applied to an entire experiment rather than just the statistics section. Here's my (unofficial) take on it:

This year's rubric is an evolution from previous years, where section J read: "Analysis and interpretation of data." Now the section says "Analysis of Claim/Evidence/Reasoning." I think it would be clearer if it said "Analysis of Data: Claim/Evidence/Reasoning," meaning the claims are all about the data, not about the entire experiment.

For example, a Statistics claim may be as simple as "Our calculations for best fit/mean/median/mode are accurate." The Statistics evidence may be, "We conducted a total of 10 trials." The Statistics reasoning may be "A sufficiently large number of trials is required for statistical calculations to be considered accurate."
Outliers claim: “There are no outliers in our data.” Outliers evidence: cite the Q1/median/Q3 data. Outliers reasoning: explain the 1.5 × IQR rule.
Data Trend claim: “We would expect X to continue to increase as Y continues to increase.” Data Trend evidence: cite some numbers from the data. Data Trend reasoning: “Our data suggest a direct relationship between variables X and Y.”

As far as the difference between 0, 1, or 2 points is concerned, I think there’s a little subjectivity there. If I’m grading this section, I’m giving 0 to a blank response or something unrelated to the data, 1 point to a logical statement about the data, and 2 points for being both clear and complete.
I hope this helps; if someone more experienced notices something wrong, please point it out. The scoring rubric explanation on soinc.org is currently outdated, and I wish a sample lab report were provided in the 2020 rules format.
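To make the Outliers and Data Trend examples above concrete, here is a minimal Python sketch. The trial numbers are made up purely for illustration, and `iqr_outliers` and `slope` are hypothetical helper names, not anything from the rules or rubric:

```python
import statistics

def iqr_outliers(data):
    """Return the values lying outside the Q1 - 1.5*IQR / Q3 + 1.5*IQR fences."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # Q1, median, Q3
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < low or x > high]

def slope(xs, ys):
    """Least-squares slope; a positive value supports a direct-relationship claim."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Made-up drop times (s) for ten parachute trials -- illustrative only
times = [2.1, 2.1, 2.2, 2.2, 2.2, 2.3, 2.3, 2.3, 2.4, 2.5]
print(iqr_outliers(times))  # an empty list backs the "no outliers" claim
```

One caveat: `statistics.quantiles` uses the "exclusive" method by default, so its quartiles can differ slightly from ones computed by hand with other conventions, which may matter when citing Q1/Q3 as evidence.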

Re: Experimental Design B/C

Posted: January 9th, 2020, 12:42 pm
by glin1011
SPP SciO wrote: January 8th, 2020, 10:55 am
glin1011 wrote: January 8th, 2020, 5:13 am Hi all! I don’t know if someone’s asked this question yet, since I’m still a bit new to the forums. I’m a student coach for Experimental Design Div B and a data analyst for our Div C team, but I’m having trouble doing/describing the C.E.R. format that replaces the Data Analysis section in Part 2 of the rubric.

Can anyone help break it down for me in simpler terms? I understand how to point out specific data points and outliers in your data, but I don’t know how to place them properly in the new reporting packet format, and it’s been really hurting my Div B teams during their competitions.

Thanks so much in advance!
EDIT - Before reading what I wrote below, this was posted as a FAQ a few days ago:
On the Experimental Design checklist, does "Statistics claim" refer to claiming what the statistics are, what you are deducing from the statistics, or choosing which statistics to base your trend and interpretation of the data on?
A claim is an assertion of the truth of something, typically one that is disputed or in doubt. You will then provide evidence, in this case statistics, that back up your claim. In the reasoning section, you will explain how the statistics back up your claim.

I find this section frustrating: it is worth a large number of points, but it seems redundant. Also, the way it's worded can be a little tricky - usually I've seen the CER model applied to an entire experiment rather than just the statistics section. Here's my (unofficial) take on it:

This year's rubric is an evolution from previous years, where section J read: "Analysis and interpretation of data." Now the section says "Analysis of Claim/Evidence/Reasoning." I think it would be clearer if it said "Analysis of Data: Claim/Evidence/Reasoning," meaning the claims are all about the data, not about the entire experiment.

For example, a Statistics claim may be as simple as "Our calculations for best fit/mean/median/mode are accurate." The Statistics evidence may be, "We conducted a total of 10 trials." The Statistics reasoning may be "A sufficiently large number of trials is required for statistical calculations to be considered accurate."
Outliers claim: “There are no outliers in our data.” Outliers evidence: cite the Q1/median/Q3 data. Outliers reasoning: explain the 1.5 × IQR rule.
Data Trend claim: “We would expect X to continue to increase as Y continues to increase.” Data Trend evidence: cite some numbers from the data. Data Trend reasoning: “Our data suggest a direct relationship between variables X and Y.”

As far as the difference between 0, 1, or 2 points is concerned, I think there’s a little subjectivity there. If I’m grading this section, I’m giving 0 to a blank response or something unrelated to the data, 1 point to a logical statement about the data, and 2 points for being both clear and complete.
I hope this helps; if someone more experienced notices something wrong, please point it out. The scoring rubric explanation on soinc.org is currently outdated, and I wish a sample lab report were provided in the 2020 rules format.
Thank you so much for this explanation! I'll follow your reasoning when grading this particular section, as I agree with the distinction between an unrelated response and a logical one. And I agree: this section usually makes or breaks a team on the rubric (well, in my opinion, the entire Part 2 does). Your explanation was really helpful, and I now have a better understanding of what to do.

Also, I appreciate the quick response! I'm a judge for an upcoming invitational, so I'll definitely follow this logic when grading. Thank you!