Mirror of https://github.com/edcommonwealth/sqm-dashboards.git, synced 2026-03-07 21:48:16 -08:00
Add benchmarks to survey and admin data items. Remove them from measures. Modify seeder
Calculate benchmarks for measures based on a weighted average of survey and admin data items. Added architectural decision records.
This commit is contained in:
parent 1a6c81e240
commit ad03606d66

157 changed files with 2443 additions and 1932 deletions
31 doc/architectural_decision_records/1.md Normal file

@@ -0,0 +1,31 @@

# Decision record 1

# Add zone boundaries to Items and change how benchmarks are calculated for Measures and subcategories

## Status

Implemented

## Context

Story: https://www.pivotaltracker.com/n/projects/2529781/stories/179844090

Add new zone boundaries for Survey and Admin Data items. Measure zone boundaries become a weighted average of Survey and Admin Data items.

At the moment the measure table has warning, watch, growth, approval, and ideal low benchmarks seeded from the source of truth. With this change the measure table will no longer be populated with that information. Instead, student and teacher survey items and admin data items will be seeded with benchmark information, and Measure.rb will instead have methods for calculating the benchmarks.
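The weighted-average calculation described above could be sketched in plain Ruby as follows. This is a hypothetical sketch: the real ActiveRecord models, their attribute names (`benchmark`, `weight`), and the exact weighting scheme may differ.

```ruby
# Hypothetical plain-Ruby sketch of the benchmark calculation;
# the real app uses ActiveRecord models, not Structs.
SurveyItem    = Struct.new(:benchmark, :weight)
AdminDataItem = Struct.new(:benchmark, :weight)

class Measure
  def initialize(survey_items:, admin_data_items:)
    @items = survey_items + admin_data_items
  end

  # The measure's benchmark is the weighted average of the
  # benchmarks on its survey items and admin data items.
  def benchmark
    total_weight = @items.sum(&:weight)
    return nil if total_weight.zero?

    @items.sum { |item| item.benchmark * item.weight } / total_weight
  end
end
```

With two items weighted 2:1, a survey benchmark of 3.0 and an admin benchmark of 4.0 average to 10/3; in the real app, each of the five zone boundaries (warning, watch, growth, approval, ideal) would presumably be averaged the same way.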
## Decision

What is the change that we're proposing and/or doing?

Do we move benchmarks to admin data items and survey items directly, or do we only populate admin data items with benchmarks and leave benchmarks on measures the way they are?

I've made the decision to move the benchmarks to the item level because it places the seed information on the correct model. Now that we know benchmarks belong to items, not measures, the data in the database and the corresponding models should reflect that fact.

## Consequences

What becomes easier or more difficult to do because of this change?

Instead of reading benchmarks directly from each measure, we must now cycle through its admin data items and survey items to calculate them.

This will also slow down the test suite, because tests must now create survey items or admin data items for the benchmark calculations to pass.

24 doc/architectural_decision_records/adr_template.md Normal file

@@ -0,0 +1,24 @@

# Decision record template by Michael Nygard

This is the template in [Documenting architecture decisions - Michael Nygard](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions).

You can use [adr-tools](https://github.com/npryce/adr-tools) for managing the ADR files.

In each ADR file, write these sections:

# Title

## Status

What is the status, such as proposed, accepted, rejected, deprecated, superseded, etc.?

## Context

What is the issue that we're seeing that is motivating this decision or change?

## Decision

What is the change that we're proposing and/or doing?

## Consequences

What becomes easier or more difficult to do because of this change?