Strands D.1 and D.2 in MYP Design

Managing all sixteen strands of the design cycle can be challenging for both new and experienced MYP Design teachers. Evaluating the effectiveness of students’ design solutions is a critical aspect of Criterion D. Strands D.1 and D.2 in MYP Design play a vital role at the end of the design cycle. They provide students with a roadmap for testing and analyzing the success of a design solution.

MYP Design Criterion D Evaluating – Strand D.1 & Strand D.2

Strand D.1, the testing plan, should describe the methods for testing based on the design specifications created in Criterion B. This plan involves evaluating specifications through internal testing, interviewing clients and users, observing users interacting with the product, and more. Strand D.2 summarizes the test data and determines the product’s success, which then informs ideas for improvement in Strand D.3.

If you want to help your students create successful solutions in MYP Design, it's helpful to understand the significance of Strands D.1 and D.2. By learning how to use them effectively in your classroom, you can guide your students through the design process and help them succeed. Your students will better understand the importance of testing and analyzing their designs, making them better designers.

Generating Hypotheses

Generating hypotheses before testing is valuable for engaging students in an MYP unit that spans multiple weeks. It may seem like an extra step, but it can lead to increased motivation as you begin to close out Criterion D.

Quantifiable hypotheses are easier to collect, summarize, and contrast with the final testing results. Differences between what students predict and the actual testing results can foster discussions that lead to deeper engagement. Sharing these differences as part of the problem research in Criterion A may also build better buy-in during Criterion D.
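One way to make those prediction-versus-result gaps concrete is a quick tally. This is only a sketch with made-up team names and descent times, but it shows how quantifiable hypotheses can be lined up against measured results:

```python
# Hypothetical numbers: each team's predicted descent time (their hypothesis)
# next to the measured result, surfacing gaps worth discussing in class.
predictions = {"Team A": 4.0, "Team B": 6.5, "Team C": 3.0}  # seconds, predicted
results = {"Team A": 5.2, "Team B": 6.1, "Team C": 4.8}      # seconds, measured

for team, predicted in predictions.items():
    gap = results[team] - predicted
    print(f"{team}: predicted {predicted:.1f}s, measured {results[team]:.1f}s, gap {gap:+.1f}s")
```

A positive gap (the helicopter stayed aloft longer than predicted) and a negative gap each open a different classroom conversation, which is exactly the engagement Criterion D needs late in the unit.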

By actively participating in the inquiry process, students also develop a deeper understanding of the concepts studied as they connect theoretical knowledge to real-world scenarios.

Design Testing Methods – Strand D.1

Strand D.1 in MYP Design focuses on testing methods to measure the success of a student's design solution. The MYP Design Guide states that Year 1 students should outline simple and relevant testing methods to generate data for the highest achievement level. By Year 5, to achieve the top marks, the student should describe "detailed and relevant testing methods, which generate accurate data, to measure the success of the solution."

There are five classifications of testing methods for Criterion D:

  • expert appraisal
  • field trial
  • performance testing
  • user observation
  • user trials

For example, consider an engineering MYP Design unit, such as designing a helicopter to fall straight and slowly. Performance testing would be the most suitable test compared to other types of tests for several reasons.

MYP Design Criterion D – Testing to Determine the Success of the Solution

Performance Testing

Performance testing generates quantifiable data, which is crucial when designing a helicopter prototype that needs to stay aloft for as long as possible and descend as straight down as possible. Time and distance can easily be measured objectively to show performance metrics.

These data can be used to compare different designs and make informed decisions about which helicopter design is best suited for the goal of the unit as defined by the GRASPS. Easily measurable testing data should be a factor in organizing your MYP Design units for the year. That is, units that require performance testing and yield easily quantifiable data should start the school year.
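As a minimal sketch of that comparison (the design names and trial times below are invented), averaging each design's timed descents makes the "which design is best" decision a data question rather than an opinion:

```python
from statistics import mean

# Hypothetical trial data: three timed descents (seconds) for each paper
# helicopter design; staying aloft longer is better in this unit.
trials = {
    "wide blades": [5.1, 4.8, 5.3],
    "narrow blades": [3.9, 4.2, 4.0],
    "folded tips": [4.6, 4.4, 4.7],
}

# Average each design's descents and pick the longest-flying design.
averages = {design: mean(times) for design, times in trials.items()}
best = max(averages, key=averages.get)
print(f"Best performer: {best} ({averages[best]:.2f}s average descent)")
```

Even done by hand on a whiteboard rather than in code, the same average-and-compare step gives students an objective basis for choosing between prototypes.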

Secondly, performance testing can create the conditions the paper helicopter may experience in flight. This setup helps to ensure that the design is appropriate for the intended use as a prototype. For instance, wind conditions could be simulated (if that were part of the GRASPS scenario) to see how the paper helicopter performs under varying wind speeds and directions.

Performance testing provides an objective evaluation of the design. Subjective opinions or biases do not influence the test results; instead, the results indicate how the design performs against the design specifications generated in Criterion B.

Lastly, performance testing allows for an iterative design process, where the results obtained from the test can be used to make changes and improvements to the design. This is my favorite part of units that lend themselves to performance testing. These data are usually easy to communicate and share with future classes as part of Criterion A's research material.

Other MYP Design Testing Methods

Expert appraisal involves having an expert in the field assess a design and provide feedback and suggestions for improvement. This requirement may seem complicated; however, parents or siblings could serve as experts. Some research is necessary! One way around this is to role-play as an expert. Check out my MYP Digital Design post for ideas.

Field trials involve a real-world test of a design in its intended environment to authentically gather data and feedback on its performance. In Year 1, designers may not yet have the skills to create and market a product for this type of test. A minimum viable product (MVP) is the most basic version of a product that could be developed and released to a potential market, with just enough features to meet the needs of the client/audience. It can generate rich feedback to improve the product's next iteration.

User observation involves systematically observing actual users interacting with a design to identify usability issues and potential areas for improvement. A unit on upcycling to make the perfect gift, for example, calls for user observation to gather user satisfaction over time.

Finally, user trials consist of a controlled test (or tests) of a design with actual users to gather data and feedback on its usability, effectiveness, and user satisfaction.

Evaluate the Success of the Solution – Strand D.2

Strand D.2 in Criterion D MYP Design requires students to critically evaluate the success of their solution based on authentic product testing. This evaluation involves testing the product against each design specification established in Criterion B.

This task is easier said than done! A final highlight of the unit tends to be the testing: putting the solution in play to see how well the problem was solved. What if the design has many design specifications? Objectively evaluating each one could be unrealistic. Also, by this point in Criterion D, you're deep into the design cycle, and student stamina for learning might be diminishing.

Methods of Evaluation

One method to evaluate the success of the solution is to have students select preset written descriptors to categorize each specification based on their best perspective on how well it was met. Clear prewritten statements to choose from are also beneficial for English Language Learners (ELLs). Here are examples from engineering a paper water tank:

  • Exact – Your team’s tank exactly met the design specification.
  • Close – Your team’s tank mostly met the design specification.
  • Middle – Your team’s tank met some of the design specification.
  • Far – Your team’s tank met none or very little of the design specification.
  • NA – You are unsure how your team’s tank met the design specification.

Ranking design specifications using a numeric scale can sustain student engagement, connect learning to Criterion B’s design specifications, and honor the design process without overdoing it.

For example, when upcycling plastics to create a gift, students can use a ranking scale (5 = met perfectly, 1 = did not meet at all) or a short written comment to evaluate their product against each specification. The goal/problem introduced in the GRASPS should be a focus among the design specifications and may warrant a written justification.
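To show how those rankings roll up into an overall verdict on the solution, here is a small sketch. The specification names and scores are invented for illustration; the 1–5 scale is the one described above:

```python
# Hypothetical scores: each design specification from Criterion B rated on
# the scale above (5 = met perfectly, 1 = did not meet at all).
scores = {
    "holds its shape": 5,
    "uses only recycled plastic": 4,
    "appealing as a gift": 3,
    "no sharp edges": 5,
}

# Sum the ratings and compare against the maximum possible score.
total = sum(scores.values())
possible = 5 * len(scores)
print(f"Overall: {total}/{possible} points ({100 * total / possible:.0f}% of specifications met)")
```

A simple percentage like this keeps Strand D.2 honest — every specification is visited — without demanding a full formal test for each one, and the low-scoring specifications point directly at the improvements Strand D.3 asks for.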

Data collected from the evaluation of the solution against the design specifications should be archived. They can be used as credible and authoritative research material for future classes. The data can be used in Strand A.2 to identify and prioritize research.

Fail Forward

Testing data from a solution, even data that show failure according to the design specifications, can serve as student exemplars for future classes. Student-generated results offer authentic insights into the design process and potential areas of improvement. Analyzing successes and failures can lead to the development of new design criteria, which can improve the product. In addition, analyzing both successes and failures can give future students a better understanding of the design process and help them avoid making the same mistakes.

Strands D.1 and D.2 in MYP Design Summary

Criterion D is a fundamental aspect of the MYP Design Cycle. Within this criterion, Strands D.1 and D.2 are crucial for evaluating the effectiveness of a student’s design solution. Strand D.1 outlines the methods for testing the design solution. This strand should include evaluating a solution’s functional requirements through the testing methods referenced in the MYP Design Guide. On the other hand, Strand D.2 focuses on summarizing the data gathered through testing and using it to determine the product’s overall success.

Understanding these strands is crucial to guide students in creating successful design solutions. Managing all sixteen strands of the design cycle can be challenging! Strands D.1 and D.2 in MYP Design provide a blueprint to students for evaluating the effectiveness of a design solution. Adding depth to these strands with hypotheses and connections to other criteria enhances each student's MYP Design experience.