by Rabbi Dr. Joshua Wise
Malcolm Gladwell, a journalist and author, has looked at data trends in innovative ways that have helped uncover many patterns in how we think about the world around us. He points out that the intuitive human approach to data is to treat all of it as roughly equal in weight; the harder part is to calibrate and prioritize. He promotes the “ignoring of data.” By this he means that when you find yourself swimming in data, the first step is to figure out which data to ignore – the data that is not only irrelevant to your goal but counterproductive to it. One area where this concept is being applied in the corporate world is the hiring process, where companies are increasingly implementing “blind” hiring systems in which information such as a candidate’s name is obscured from the people doing the hiring.
A famous example of this approach comes from 1952, when the Boston Symphony Orchestra auditioned candidates for a violinist position from behind a screen. The orchestra wanted to eliminate any bias of gender or physical appearance that could sway the opinions of the interviewers. Each candidate was placed behind a screen and asked to play the requested musical piece, ensuring that the judging of the candidate was based on nothing other than the music being played.
In schools, we often face an ocean of data, some of it helpful, some not. From standardized test scores to patterns of office referrals, how can we create situations where we obscure from view certain pieces of data? Just as the symphony interviewers filtered out the physical characteristics of the violinist, how can schools filter out distracting data in order to cultivate the best classroom environments possible? Let’s look at some scenarios.
A Tale of Two Tests
Two classes take a test. The average in one class is 98 and the average in the second class is 78. Which students are demonstrating a greater understanding of the material being covered? Need more information? Okay. The medians are also 98 and 78, respectively. Still need more information? What additional information would be helpful? Standard deviation? Interquartile range? No and no. The problem here cannot be solved by adding more of the same type of data. The answer is to screen out this very data, because it is leading us down the wrong path. The class average on a test is exactly the kind of data that can mislead. What would it look like if we put this data behind a figurative screen? What would happen if we didn’t put a number on the test paper?
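To make that concrete, here is a minimal sketch with made-up scores and hypothetical class names: no matter how many summary statistics we compute, none of them answers the question of which students understand the material more deeply.

```python
from statistics import mean, median, stdev, quantiles

# Hypothetical score lists for two classes (illustrative only).
class_a = [98, 97, 99, 98, 98, 96, 100, 98]   # averages to 98
class_b = [78, 80, 76, 79, 77, 78, 81, 75]    # averages to 78

for name, scores in [("Class A", class_a), ("Class B", class_b)]:
    q1, _, q3 = quantiles(scores, n=4)        # quartiles; IQR = q3 - q1
    print(f"{name}: mean={mean(scores):.1f}, median={median(scores)}, "
          f"stdev={stdev(scores):.1f}, IQR={q3 - q1:.1f}")

# None of these numbers tells us whether one test was easier than the
# other, or which students actually wrestled with and understood the
# material: that is exactly the data worth screening out for this question.
```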
Let’s consider a system in which the teacher only flags answers that contain factually incorrect information or blatantly faulty logic, and then writes questions on the paper about the student’s work or comments on whether the answer is persuasive. The student would then have the opportunity to address those critiques directly. This allows the teacher to engage with the student’s work without fixating on a number: the focus is no longer on assigning a percentage to the work, but on considering the student’s work on all levels.
Of course, people often ask how a grading system can work without numbers. Numbers are necessary; they are a powerful yet simple way to compare different outputs. I am not advocating eliminating a numeric grading system, only delaying the assignment of numeric values to student work. Every teacher and school would need to decide at what point, and in what way, to introduce a numeric mark representing an evaluation of the student’s work.
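For those who like to think in terms of process, here is one rough sketch of what “feedback first, number later” could look like; the class and field names below are hypothetical illustrations, not a prescribed system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Submission:
    """One student's test paper, reviewed in two passes."""
    student: str
    answers: list[str]
    comments: list[str] = field(default_factory=list)  # pass 1: questions and critiques only
    score: Optional[int] = None                        # pass 2: a number, assigned later

def feedback_pass(sub: Submission, critiques: list[str]) -> None:
    """First pass: flag factual errors or faulty logic and ask questions; no number yet."""
    sub.comments.extend(critiques)

def grading_pass(sub: Submission, score: int) -> None:
    """Later pass, after the student has had a chance to respond to the critiques."""
    sub.score = score

paper = Submission(student="A. Student", answers=["...", "..."])
feedback_pass(paper, [
    "Is this claim supported by the source we studied?",
    "Your second answer contradicts your first; which do you mean?",
])
# ... the student responds or revises ...
grading_pass(paper, 92)  # a numeric mark enters only at the point the school chooses
```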
Obscuring the misleading data – the final number on the test paper – allows us to evaluate the student’s work as a whole, much as the symphony interviewers were able to focus on the sound of the violinist without being distracted by his or her physical appearance.
Distracting Data
There are many ways we can fall into the trap of attributing value to data that distracts us from answering the essential question: are students learning? Two examples of data that can pull our focus in the wrong direction are poor penmanship and spelling errors (assuming it’s not a spelling test).
Some students have deficits in fine motor skills and therefore have poor penmanship. If a teacher has difficulty reading a student’s work, there is an increased likelihood that the teacher will be harsher when evaluating it. Some schools are able to offer such students the use of a computer so they do not need to write by hand, but this is not always the case. Similarly, some students may have exceptionally elegant handwriting, which could sway the teacher in the other direction: the teacher might consider this student’s work more accurate and worthwhile simply because it is clear and easy to read.
Spelling mistakes can also skew the perceived value of a student’s answer. Some students are very poor spellers, just as some students are weak at mathematical computation. When such a student’s work is read, the spelling mistakes will be obvious and could lead the teacher to see the answer as inferior simply by virtue of those mistakes. In both of these scenarios, we need to do our best to screen out these data points and not allow them to become part of the evaluation process.
What We See and Hear
Some have started to look at non-test data as a way to avoid information that may be misleading. With these measures, we need to be just as careful to screen out the misleading data. As an illustration, imagine observing two teachers with very different classroom management styles. In the first room, we find the lampooned scene of students throwing things around while the teacher cries out “Class! Class!” In the second, the teacher speaks to the class from the front of the room while the entire class sits up straight at attention, taking copious notes; these students are discouraged from asking questions, or perhaps are allowed to ask them only in the last few minutes of class.
On the surface, it appears that far more learning is taking place in the second room than in the first, and for many years this kind of classroom management was the ideal. However, we now know, thanks to a significant body of research, that the second room may not contain real, deep learning. The students in this room aren’t truly engaged, and absent student engagement, learning is limited. How did we make such a mistake? What were we focusing on, and not focusing on, that led us to that conclusion for so many years? Again, what data are we taking in from these scenes that we need to screen out?
We might consider screening out the visual component of these classrooms, which would keep us from being swayed by the perfectly still, or rambunctious, students, but we couldn’t very well screen out the audio too. The sounds in one classroom are a clear indicator of chaos, while academic sounds fill the other.
Screening out the audio may in fact hold the answer, but only if we screen out just the teacher’s voice. If we did that, what would we hear in the second classroom? Silence. We would see students sitting obediently, but we wouldn’t hear anything at all, academic or otherwise. When looking for active student learning, we need to keep in mind that whoever is doing the talking is doing the learning. And so the model that for so many years led us to think that learning was happening in the second classroom can be dismantled.
The research has shown that what is critical is student engagement. While the students in the first classroom are having the equivalent of recess, the students in the second classroom are not actually engaged in learning either; they are listening to a lecture and acting as court stenographers. The obvious difference between the classrooms is decorum, but to fully realize that both classrooms are bad – and they are – we need to look at what they have in common: in each classroom, the students are not active participants in the learning. Perhaps we can observe that the teacher in the first classroom is at least inviting the students into an active learning role, however ineffectively, while the second teacher is not seeking that from the students at all.
Office Referrals
In a similar way, we can look at the rate at which a teacher sends students to the office for a discipline referral. We might be justified in thinking that a teacher who often sends students to the office is not being successful in the classroom. However, a teacher who never sends students to the office may be refraining for the wrong reasons. It may be that the class is a “chill” class where expectations of the students are minimal and the teacher has a great relationship with the students, which often leads them to behave properly. It may also be that the teacher prizes the relationship above all else and would not refer a student out for fear of undermining it. Again, by focusing on the wrong data, we fail to make an accurate evaluation of whether the students are really learning.
Signal or Noise?
Let me be clear – data is great. In the data age, we are able to collect information in ways we never thought possible. The problem arises only when we let misleading data do the loudest talking. When we let the class average on a test become the most important piece of data, we get lost. It’s widely recognized that when standardized test data alone are used to determine the most effective teachers, we end up with people gaming the system, sometimes in ways that are unethical or even illegal. Therefore, we must treat this type of data – standardized test scores or in-house test data – as one part of the picture that, together with other data such as evidence of student engagement, we use to gauge learning and success.
In the data age, our task is to separate the signal from the noise. When we encounter a situation, our brains immediately get to work determining, subconsciously, what they think is signal and what is noise. Sometimes our brains have trouble distinguishing between the two. When that happens, we must stop and restart the analysis, just as the symphony’s interviewers realized that they were closing the door on talented musicians who didn’t fit the “requirements.”
Rabbi Dr. Joshua Wise is assistant principal in the Middle School at the Magen David Yeshiva Elementary School in Brooklyn. He is also the cofounder of www.parshaninja.com. He can be reached at [email protected].