🎲 Game Design: Unfair Dice & Hypothesis Testing

πŸ“Œ Overview

Hypothesis testing is essential for scientific inquiry, helping us validate or refute assumptions using empirical data. In this game, teams will explore probability and randomness by designing and conducting experiments with unfair dice to evaluate a given hypothesis.

Through this process, teams will engage in the scientific method, using controlled experimentation, data analysis, and reflection to determine whether the given hypothesis is supported by the evidence. This hands-on approach reinforces critical thinking, data literacy, and statistical reasoningβ€”key components of STEM learning.


🎯 Game Theme: Unfair Dice & Hypothesis Testing

Probability plays a crucial role in decision-making, particularly when faced with biased or unpredictable outcomes. Unfair dice have skewed probabilities, meaning some faces appear more often than they would on a fair die; the short simulation sketch below illustrates the idea.
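To make the idea concrete, here is a minimal simulation sketch. The face weights, seed, and roll count are illustrative assumptions, not part of the event materials.

```python
# Minimal sketch: comparing a fair die to a die weighted toward 6.
# The weights, seed, and roll count are illustrative assumptions only.
import random
from collections import Counter

random.seed(42)  # fixed seed so the example is reproducible

def roll_counts(weights, n_rolls):
    """Roll a six-sided die n_rolls times with the given per-face weights."""
    faces = [1, 2, 3, 4, 5, 6]
    return Counter(random.choices(faces, weights=weights, k=n_rolls))

print("fair  :", dict(sorted(roll_counts([1, 1, 1, 1, 1, 1], 600).items())))
print("unfair:", dict(sorted(roll_counts([1, 1, 1, 1, 1, 3], 600).items())))
# With the fair die each face lands near 100 of the 600 rolls; with the
# weighted die, face 6 shows up far more often than the others.
```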

In this event, teams will:
βœ… Apply hypothesis testing to analyze unfair dice.
βœ… Design controlled experiments to collect meaningful data.
βœ… Use statistical reasoning to interpret results and evaluate claims.
βœ… Communicate findings effectively through a structured written report.


πŸ“‹ Event Details

  • 2 team members will compete in this event.
  • At the start of the competition, teams will be given a hypothesis about a set of unfair dice.
  • Teams will have 30 minutes to:
    • Design and conduct experiments to test their hypothesis.
    • Analyze their results and determine whether the data supports or refutes the hypothesis.
    • Document their process and findings in a guided worksheet for judging.
  • Judging will be based on the written summary of findings.

πŸ”Ž Key Components of the Experiment

πŸ”Ή Hypothesis Evaluation – Teams do not create their own hypothesis; instead, they test a given hypothesis using controlled experimentation.

πŸ”Ή Experimental Design – Teams must consider:

  • Sample size: How many rolls are needed for reliable results? (A rough estimate appears in the sketch after this list.)
  • Data collection: How will outcomes be recorded systematically?
  • Randomization: How will bias in rolling be minimized?
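
One rough way to think about sample size is the margin of error of an estimated proportion. The sketch below is an illustrative calculation only; the 95% z-value of 1.96 and the target margins are standard textbook assumptions, not event requirements.

```python
# Rough sample-size estimate for a proportion: n = p(1-p) * (z / margin)^2.
# The z-value (1.96 for ~95% confidence) and the margins are textbook assumptions.
import math

def rolls_needed(margin, p=0.5, z=1.96):
    """Rolls needed to estimate a proportion p within +/- margin at ~95% confidence."""
    return math.ceil(p * (1 - p) * (z / margin) ** 2)

print(rolls_needed(0.10))  # about 97 rolls for a +/-10 percentage-point margin
print(rolls_needed(0.05))  # about 385 rolls for a +/-5 percentage-point margin
```

In a 30-minute window, only on the order of a hundred rolls may be practical, so teams must weigh precision against the time available.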

πŸ”Ή Data Analysis – Teams will compare observed results to expected probabilities, looking for statistical significance.

πŸ”Ή Reflection & Reporting – Teams will explain whether their results support or refute the hypothesis, discuss possible errors or confounding factors, and suggest improvements to their approach.


πŸ† Evaluation Criteria

Teams will be evaluated based on their written reports, considering the following criteria:

1️⃣ Hypothesis Testing & Experimental Design

βœ”οΈ Does the team demonstrate a clear understanding of probability concepts?
βœ”οΈ Is the experimental setup well-structured, considering sample size, randomization, and control?

2️⃣ Data Collection & Documentation

βœ”οΈ Is data accurately recorded and presented clearly?
βœ”οΈ Does the documentation support meaningful analysis and interpretation?

3️⃣ Data Interpretation & Analysis

βœ”οΈ Are the results analyzed correctly, with logical reasoning?
βœ”οΈ Does the team identify trends and patterns in the data?
βœ”οΈ Are conclusions supported by evidence?

4️⃣ Reflection & Scientific Reasoning

βœ”οΈ Does the team explain unexpected results or inconsistencies?
βœ”οΈ Is the final conclusion justified based on the collected data?
βœ”οΈ Does the team discuss how they would refine the experiment for future testing?


πŸ“ Rubric & Judging Guidelines

Each criterion is scored Exemplary (9-10 points), Proficient (5-8 points), or Limited (0-4 points).

Hypothesis Testing & Experimental Design
  • Exemplary (9-10): Experimental approach is well developed, considering sample size, randomization, and control to ensure reliable results. Shows strong understanding of probability.
  • Proficient (5-8): Experimental strategy is somewhat developed but may overlook some important factors. Some understanding of probability is evident.
  • Limited (0-4): Experimental design is poorly structured, missing key elements like randomization or control. Little understanding of probability is demonstrated.

Data Collection & Documentation
  • Exemplary (9-10): Data collection is thorough, with clear documentation and a well-organized report.
  • Proficient (5-8): Data collection is partially complete, with some inconsistencies in recording.
  • Limited (0-4): Data collection is incomplete or poorly documented, making it difficult to analyze.

Data Interpretation & Analysis
  • Exemplary (9-10): Interpretation is logical and well supported by the data. Identifies key patterns and correctly applies statistical reasoning.
  • Proficient (5-8): Interpretation is somewhat logical but may contain errors in statistical reasoning or overlook key insights.
  • Limited (0-4): Interpretation is flawed or lacks depth, with little connection to the data.

Reflection & Scientific Reasoning
  • Exemplary (9-10): Reflection is insightful, explaining unexpected results and discussing future refinements. Conclusion is strongly supported by evidence.
  • Proficient (5-8): Reflection touches on some insights but lacks depth. Conclusion is reasonable but not fully supported.
  • Limited (0-4): Reflection is minimal or absent, failing to consider key takeaways or errors.

Final Score = Sum of all four criteria (Max: 40 points)


πŸ” Example Hypothesis & Experiment Outline

To illustrate the experiment, consider the following example:

Hypothesis:
β€œThis set of unfair dice will land on even numbers 60% of the time.”

Steps to Test the Hypothesis:

  1. Determine a sample size – Teams must decide how many rolls are needed to ensure a reliable test.
  2. Design an unbiased rolling method – Teams must consider how to minimize accidental bias in rolling.
  3. Record outcomes systematically – Teams create a data table to track results accurately.
  4. Compare results to expected values – If the observed proportion of even numbers is close to 60%, the hypothesis is supported; if it differs significantly from 60%, the hypothesis is refuted (see the sketch after these steps).
  5. Reflect on findings – Teams discuss what worked, potential sources of error, and how to refine their process.
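
A short sketch of step 4, using invented numbers and assuming SciPy (version 1.7 or later for binomtest), shows one way the observed proportion of even rolls could be compared against the 60% claim:

```python
# Sketch of step 4: test the claim that P(even) = 0.60 with an exact binomial test.
# Assumes SciPy >= 1.7; the roll data below are invented for illustration.
from scipy.stats import binomtest

n_rolls = 120   # hypothetical sample size chosen in step 1
n_evens = 64    # hypothetical number of even outcomes recorded in step 3

result = binomtest(k=n_evens, n=n_rolls, p=0.60)
print(f"observed proportion of evens: {n_evens / n_rolls:.2f}")
print(f"p-value                     : {result.pvalue:.3f}")
# A large p-value means the data are consistent with the 60% claim; a small
# one (e.g. below 0.05) means the observed rate differs significantly from it.
```

Teams may of course use simpler reasoning, such as comparing the observed percentage to 60% and discussing how much variation a sample of that size could show by chance; the test above is just one way to make "significantly different" precise.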

πŸ“œ Final Summary

βœ… Teams receive a given hypothesis to test.
βœ… Experiments must be designed to ensure fairness, accuracy, and reliability.
βœ… Data must be recorded and analyzed systematically.
βœ… Teams submit a structured written report for judging.
βœ… Winners are determined based on experimental quality, analysis, and reflection.

Good luck, and may the best experimenters win! πŸ”¬πŸŽ²πŸ“Š