# Application decision record 2

# Add scale to framework

## Status

Completed

## Context

The calculations needed to be reworked. When calculating a score for a measure, we had looped through all the survey_items_responses for a measure and taken the average of all of them. That approach gives student responses more weight than teacher responses, since student responses will outnumber teacher responses (for example, with 200 student responses and 10 teacher responses, a flat average weights student opinion 20 to 1). Another consequence is that survey_items with more questions are prioritized over ones with fewer questions, since they produce more survey item responses.

Story: #181205114 and #181209931

## Decision

Change the calculations so that scores bubble up through the framework hierarchy.

## Consequences

Added scales to the framework. Changed the calculations to first group responses by the most atomic parts of the framework and then average those groupings on the way up the framework, as sketched below:

1. Likert scores are averaged for all survey item responses in a survey item.
2. All the survey items in a scale are averaged.
3. Student scales in a measure are averaged, and teacher scales in a measure are averaged separately.
4. The average of the student and teacher scale averages becomes the score for the measure.
5. The measures in a subcategory are averaged.
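To make the bubbling-up concrete, here is a minimal Python sketch of the grouping-then-averaging order described above. The function names, the nested list shapes, and the `likert_score` key are hypothetical illustrations; this record does not specify the app's actual models or code.

```python
from statistics import mean

# Assumed shapes (illustrative only):
#   response    = {"likert_score": int}
#   survey_item = list of responses
#   scale       = list of survey_items
#   measure     = (student_scales, teacher_scales) pair


def survey_item_score(responses):
    """Average the Likert scores of all responses to one survey item."""
    return mean(r["likert_score"] for r in responses)


def scale_score(survey_items):
    """Average the scores of all survey items in one scale."""
    return mean(survey_item_score(item) for item in survey_items)


def measure_score(student_scales, teacher_scales):
    """Average student scales and teacher scales separately, then average
    those two results so each respondent group carries equal weight."""
    student_avg = mean(scale_score(s) for s in student_scales)
    teacher_avg = mean(scale_score(s) for s in teacher_scales)
    return mean([student_avg, teacher_avg])


def subcategory_score(measures):
    """Average the measure scores of all measures in one subcategory."""
    return mean(measure_score(s, t) for s, t in measures)


# Example: one student scale with two one-response survey items (4 and 5),
# one teacher scale with a single response of 2.
students = [[[{"likert_score": 4}], [{"likert_score": 5}]]]
teachers = [[[{"likert_score": 2}]]]
print(measure_score(students, teachers))  # (4.5 + 2) / 2 == 3.25
```

Note how this differs from the old flat average: the same responses averaged all together would yield (4 + 5 + 2) / 3 ≈ 3.67, letting the two student responses outvote the single teacher response.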