# What is a “RIT” Score?

When a student completes an NWEA MAP assessment, he or she receives a series of RIT scores as a result. So, what is a “RIT” and what do the scores mean?

“RIT” is an abbreviation for “Rasch Unit.” The difficulty and complexity of each MAP assessment question is measured using the RIT scale. A student’s RIT score indicates the level at which the student was answering questions correctly 50% of the time.

#### Distinguishing Features of RIT Scores:

**RIT Scores Indicate a Student’s Instructional Level**- The student’s RIT score indicates the level at which the student was answering questions correctly 50% of the time. These are the skills that the student is ready to be working on in class right now. The Learning Continuum matches specific skills to RIT scores, so instruction can be planned at an appropriate level for each student.

**The RIT Scale is an Equal Interval scale**- The RIT scale is consistent, just like a ruler. One inch is always one inch, and one RIT is always one RIT. A student who grows from 165 to 170 shows the same amount of instructional growth as a student who goes from 280 to 285 — 5 RIT points of growth.
- Because the RIT score is consistent, it can be used to accurately measure a student’s growth over a period of time.
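Because the scale is equal interval, growth is just the difference between two scores, and that difference means the same thing anywhere on the scale. A minimal sketch (the score values are illustrative, not real norms):

```python
def rit_growth(start_score: int, end_score: int) -> int:
    """Growth on the RIT scale is a simple difference, because
    one RIT unit is the same size everywhere on the scale."""
    return end_score - start_score

# Both students show the same 5 RIT points of instructional growth,
# regardless of where they started on the scale.
assert rit_growth(165, 170) == rit_growth(280, 285) == 5
```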

**RIT Scores are Completely Independent of Grade Level**- There are “typical” RIT scores for each grade level, but every student is different. The RIT scale allows for students to be accurately measured regardless of their grade level.
- If a 3rd grade student earns a 210 on the Reading MAP assessment, and an 8th grader also earns a 210 on the Reading MAP assessment, these two students are at the same instructional level.

#### Common Questions:

**What RIT Scores might I see for my students?**

- As a teacher it is helpful to have a general idea of what RIT scores are typical for Math, Reading, and Language Usage for the grade level of your students.

- Keep in mind that these scores are averages. You would not want to use these numbers to set goals or expectations for your students, but they provide some perspective about how each student, or the class as a whole, is performing.

**What amount of RIT score growth is “normal”?**

- Every student is unique, but we can look at the results from NWEA’s norm study to get an idea for how much RIT growth a student might show over a year.

- Generally speaking, students starting with a lower RIT score tend to show greater amounts of growth, and students starting with a higher RIT score tend to show less growth. (The most important thing? All students can grow!)

- NWEA calculates projected growth for individual students based on their grade level and starting RIT score for each subject. These targets can be very useful for goal-setting with students. Projected growth is available on the Student Goal Setting Worksheets and on the Achievement Status and Growth Report.

Material from the Learning Continuum is provided by courtesy of NWEA and may not be republished, rewritten, or redistributed. All rights reserved. Material from DesCartes: A Continuum of Learning is provided by courtesy of NWEA and may not be republished, rewritten, or redistributed. All rights reserved.

Sr. Lynda Snyder says

I appreciate the clarity and brevity of this description of the RIT score.

wuestion@google.com says

What is the sample for the norms? Nationwide? Same question for the percentile rankings.

For the Teachers says

The sample for each grade level norm is based on 72,000-153,000 students from a pool of 10.2 million students in 49 states. The norms, along with more information about how they were calculated, are available here: https://www.nwea.org/content/uploads/2015/08/2015-MAP-Normative-Data-NOV15.pdf

FAQs with additional information and which also address the percentiles are available here: https://www.nwea.org/content/uploads/2015/12/2015-MAP-Norms-FAQ-NOV15.pdf

Melanie Gould says

So, we would not use the norm chart for, say, a fifth grade class, and expect that all students meet the 10-point average growth for math (2015 norms). Is that correct? Typically, if a student starts well below the grade-level mean in the fall, perhaps we will see that growth and more. Maybe. But what about kids scoring in the 222 or 226 range or even greater? Would we still use 5th grade normative data to measure growth?

For the Teachers says

There are separate norms for growth that you can use. Each student will have their own amount of projected growth based on their fall RIT score. (The projected growth is based on the norms from other students at the same grade level, testing in the same subject and season and with the same starting RIT score.) You can see projected growth for each student on the Achievement Status and Growth Report and on the Student Profile Report. Particularly in Reading and Language, a 5th grader starting at 226 or higher will likely have a smaller growth projection, and a student starting well below grade level mean will have a larger growth projection.

The 10 points of growth we see on the norm chart is an average, so not something we’d expect every student to meet – some will be above and some below, and that’s okay.

I had some students who started with lower scores grow 15-20 points from fall to spring, and they were so excited (as was I!). And I had some students who started with much higher scores grow 2-3 points, which was their projected growth. They were all worth celebrating! 🙂

j says

“a 5th grader starting at 226 or higher will likely have a smaller growth projection, and a student starting well below grade level mean will have a larger growth projection”

I accept and agree with the idea that each child will be on their own trajectory. I guess that’s why I have a problem with the statement quoted above. I’ve heard similar things from teachers when discussing all of my children (all identified as advanced learners). The implication behind the thinking is that the student has leveled out as they near the upper end of the grade-level range. (If you’re already in the 99th percentile, where can you go?)

I’ve always tried to support the notion that one nice thing about RITs is that they are independent of grade level (as you point out). So we should take the focus off the “norm” for the grade level and instead focus on action steps to get the student to the next level (whatever that is). This should counter the ceiling effect and introduce more equity into the evaluation process because effort put into each student is the same. And I believe this to be true of both ends of the spectrum. If we start to look at students below the norm in relation to a goal that is “the next step” rather than “at grade level,” we break the process into achievable, incremental steps. My fear is that by using the grade-level norm approach, we inadvertently create the ceiling effect by limiting our expectations to what the student should be doing rather than opening them up to what the student can do.

Thoughts?

For the Teachers says

I try to use the grade level norms as perspective – so I quickly know where each of my students is along the continuum – and not as an expectation because you’re absolutely right: using the norm as an expectation can be very limiting. We want the emphasis to be that every student can grow and make continual progress. The statement about the differences in growth projections you mention is based on what we see in the norms for Reading and Language – how students actually performed. We don’t see this pattern as much in the Math norms, however. In Math in the middle grades in particular we’re more likely to see the same or very similar growth projections for all of the students – again, simply because this is how other students actually performed.

Regarding the ceiling effect, I tend to think of it like this: With most skills, when you’re just starting off you are often able to learn a lot of basic skills very quickly. For example, if you’re learning to knit, you might learn how to hold the needles and cast on and off and form a row of stitches all within a single lesson – a large amount of growth quickly. But when you get into higher skill levels, like being able to knit a complex pattern, it will often take a lot longer to learn. It’s not that there’s a ceiling, it’s just that more complex skills are achieved more slowly.

In education, we see, for example, first graders learning to read, and the changes in their skill level over that year are palpable. They learn SO much and their reading ability often improves leaps and bounds. I taught seventh grade reading. Like the first graders, we worked on reading skills all year, but we would focus on skills like fact vs. opinion and plots and themes all year long. My students’ learning wasn’t as obvious as the first graders’ and their growth typically not as high simply because they were working on more intricate, complex skills that took more time to master.

James says

Well said!

Jennifer Hornkohl says

On the Goal Setting Report, why would a student’s projected RIT be lower than a RIT score that was already achieved?

For the Teachers says

Usually when I see that, it’s because of the testing seasons the projected RIT is based on. For example, if, based on the fall score, the student’s projected RIT for spring was 210, and he then scored 212 on his winter assessment, a report still showing the fall-to-spring projection will continue to show 210 as the projected RIT.

Jennifer says

For example, I have a student who scored a 202 in the fall, a 225 in the winter, and NWEA is projecting a 210 for the spring.

Jennifer says

By the way, this report is a Fall ’16 to Spring ’16 report.

For the Teachers says

That makes sense. If you run the report for Winter to Spring you’ll get a growth projection based on his winter score.

Mom says

Jennifer,

Based on a very limited sample, I have seen the projected RIT score decrease rather than increase after a substantial gain in winter testing.

I suspect that the projection algorithm is inadequate.

For the Teachers says

Interesting that you’d come to that conclusion based on a small sample. You may be interested in learning more about how the growth norms were developed. An overview is available on the NWEA website here: https://www.nwea.org/content/uploads/2017/05/MAP-Growth-Normative-Data-201706-1.pdf

Rachel says

I was wondering what your thoughts were on “negative” growth? My 4th grade daughter just took the NWEA tests, and went down a few points in reading and language. She had scored 227 in the winter on Language and is now at 226, and scored 233 in Reading for winter and is now at 231. Her teacher was not pleased with her score going down, but I feel my daughter has been extremely bored this year and hasn’t been challenged enough. Could that be why her scores have not changed?

For the Teachers says

My daughter’s score dropped last year too. There are a lot of reasons why that might happen. The first thing I always look at is how much time the student took to take the test compared to the fall. The most common reason for a score to drop – in my experience, at least – is if a student is rushing, distracted, or simply not trying very hard. In my daughter’s case, the length of time on the test was about the same fall and spring, so we started looking at other possibilities: Had she been feeling well that day? Did we remember her being overtired or having something else on her mind? In our case, we really don’t know why her score went down, but since it was only by a few points, and because her teacher assured me her school work was showing that she was making progress, we didn’t worry about it, and she did show growth the next time she tested.

In your daughter’s case, it seems unlikely that her score would go down because she’s been so bored, but being bored may have impacted her motivation to try as hard on the test as she otherwise might have. You might simply ask her if she felt like she did her best. The score difference isn’t much; it’s within the standard error of measurement, so I wouldn’t worry about it too much. If her score isn’t back up in the fall, that would concern me far more. The pattern of scores over time tells a more complete story than any one score by itself.

rebecca says

How do you calculate the overall score into a percentile? Like, 23rd percentile or higher is passing a grade level, but how do you get to 23 percent? Like, a 222 and a 226 is how much percent altogether?

For the Teachers says

The percentiles come from the NWEA norms. The norms show us what RIT scores are typical for each subject and grade level. The percentiles show us how a student’s RIT score compares to other students in the same grade level, same test subject, and same time of year tested.

For a student’s percentile to go up, his or her RIT score would need to grow more than the average (typical) amount.
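To make the idea concrete, here is a hedged sketch of how a percentile rank could be derived from a set of same-grade, same-season RIT scores. The sample scores and the midpoint percentile-rank convention are illustrative assumptions, not NWEA’s actual norming method:

```python
from bisect import bisect_left, bisect_right

def percentile_rank(score: int, norm_scores: list[int]) -> float:
    """Percent of the norm group scoring below `score`, counting
    ties as half (a common midpoint percentile-rank convention)."""
    ordered = sorted(norm_scores)
    below = bisect_left(ordered, score)    # scores strictly below
    ties = bisect_right(ordered, score) - below  # scores equal
    return 100.0 * (below + 0.5 * ties) / len(ordered)

# Hypothetical same-grade, same-season norm sample (not real data):
norms = [195, 200, 205, 210, 210, 215, 220, 225, 230, 235]
print(round(percentile_rank(210, norms), 1))  # 40.0
```

The same RIT score would land at a different percentile in a different grade’s norm group, which is why a percentile is always read against the same grade, subject, and season.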

Casey says

Is it possible for a student to meet their projected growth goal, yet drop in their percentile? If so, what would be the cause of that?

For the Teachers says

If I saw that, my assumption would be that there are some decimals involved in the calculation that I couldn’t see – that maybe they met their growth projection by a matter of tenths or hundredths of a point. If that’s the case, the percentile should still be very close to what it was.

Brittany says

What is the score needed to graduate 8th grade, in reading and math, on NWEA/MAP?

For the Teachers says

I don’t know of any places that require a particular RIT score for graduation, but a RIT score can indicate how likely a student is to pass your state assessment. That score varies by state.

Mrs. D. says

The school district I used to work at (as an assessments coordinator) used RIT scores to determine growth and proficiency. But, the school I now teach in, in another state, uses the percentile to rank proficiency.

I’m thinking this may not be a good use of percentile scores?

For the Teachers says

Percentile can be used to rank students who are in the same grade level, but I’m not sure why they’d focus on that to look at proficiency. I think using the RIT scores makes a lot more sense. However, it’s hard to really know without having more information about their intent.

Jay Smith says

At the school where I teach, our principal neglects to have NWEA administered to our students at the beginning of each school year. Instead, students are initially tested in December, and their scores are then compared to EOY scores from the previous grade. My colleagues and I have expressed our concerns with this pattern of testing. We would prefer for our students to be tested at the beginning of each school year, mid-year, and prior to the school year’s end, as most schools do. However, our principal insists on testing students her way, then engaging in very disparaging conversations with teachers when, for example, a student’s Winter 2018 NWEA percentile decreases, or does not increase, compared to the Spring 2017 EOY results. Administration refers to this as our “MOY” NWEA testing session, but it is actually a BOY test session, since our students were not tested during the fall of 2017. Since RIT scores do not correlate to a student’s grade level, in the absence of BOY testing, can comparing test scores from the previous school year be considered a valid way to measure growth and set goals during the present school year?

For the Teachers says

In a situation like this, I’d suggest looking at growth scores measured from spring to spring (spring of last year to spring of this year) to get a more accurate view of each student’s growth over the entire school year.

You have several options for ways to look at growth in the NWEA reports. You can adjust the “Growth Comparison Period” on the Achievement Status and Growth Report, for example, to look only at growth from spring to spring. Looking at it this way, you can see the growth projection for each student for the coming spring and then use the December test results to see if the students are on track with their growth or if there are areas where they need additional support.

You can also look at the data from winter to spring so you can see what the typical growth would be for that time frame based on the norms.

Also, check your own reports to make sure that those December tests are coming up as “winter” tests. They should, since that’s the default for testing during December. If you try to pull up data for fall, it should show that no data is found. If the December data is indeed coming up as winter data, then the reports that measure growth should be accurate.