Assess the Assessment
by Rabbi Eliezer Y. Lehrer, M.Ed.
We had a problem. The Geometry Regents examination was two weeks away, and the girls were ill-prepared: their recently returned marks on practice exams were in the 50s and 60s. We knew they were grossly underperforming and were sure it was due to a lack of confidence and insufficient study. We had been looking locally for help for years without any significant leads. What were we to do?
Expanded regional networking yielded an exceptional math teacher in New Jersey who was comfortable with video conferencing. The practice exams were scanned, emailed and reviewed. Two sessions were arranged, and the next Sunday found the girls enthusiastically crowded around a single computer in the office. What unfolded was magic.
Separated by a distance of over three hundred and fifty miles, this teacher clearly and methodically proceeded through an intense overview of the Geometry curriculum, focusing on areas of weakness. She engaged with each student, constantly checking for understanding.
The girls sat for the exam the next week with dramatic results. The smallest increase was a twenty-five-point gain!
When the administration met for its annual End-of-Year meeting, a simple question was raised: How do we explain what happened?
In the ensuing discussion, a few what-ifs were analyzed:
- What if we had not kept data on the girls’ academic performance all year?
- What if we had not administered practice regents under test conditions?
- What if they had not been fully marked and reviewed?
- What if we had not compared the new data with the prior data?
- And most relevant for our planning for the upcoming academic year, what if the girls had had this teacher for an entire year?
Originally, the data collection was meant to chart the girls’ academic performance and achievement. Now we wanted to know whether we could use the same information for a new purpose. The teacher was undoubtedly excellent: she knew the material, presented it clearly, checked for understanding, and the students responded well to her. What we were not certain of was whether hiring her as a full-time, long-distance teacher would be a good long-term solution. At the time, we had not intended to consider investing significant human and financial resources. Would there be issues with classroom management, the availability of sufficient technological resources, or the additional allocation of funds?
The question of whether the assessment we had conducted was good was not a matter of our having failed to conduct a constructive long-term assessment, nor of a lack of data. The assessment had served our original objectives well. But would it serve our new objectives? Our situation underscored the need to assure ourselves that the previously conducted assessment would achieve our current objectives. In short, we needed to “assess the assessment.”
How to Achieve Effective Assessments
An assessment may be valid in and of itself as a means to cull data, glean information, and help make future decisions. To be effective, however, it must achieve one’s objectives. In our case we made that determination after the fact; in most circumstances, the effectiveness of an assessment should be determined ahead of time.
How does one go about ensuring the effectiveness of the process of assessment and data collection?
In general, assessments must address four key elements to ensure their effectiveness:
- Define success in advance
- Make collection easy, avoid information overload
- Keep an eye on the prize
- Analyze the analysis
Define Success in Advance
There is a well-known adage, “If you want to go nowhere, all roads will take you there.”
It is possible that a school may be quite adept at data collection, have experts on the team who can analyze the data well and prescribe a definitive course of action based on it, and yet still be terrible at using that data effectively. This occurs when the team does not clearly define – in advance – what it considers success. The entire process may be derailed, and no one will be the wiser.
To avoid looking back at a comprehensive assessment months or even years down the road only to find it way off target, it is paramount that an assessment contain, from the outset, clearly defined objectives and specific targets: what many refer to as “success criteria.”
Success criteria are the well-defined, measurable terms of what the outcome should be, along with the core elements along the way that are acceptable to all stakeholders.
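To make this concrete, here is a minimal sketch, in Python, of success criteria recorded as measurable targets agreed upon in advance. The criteria names, baselines, and thresholds are hypothetical, invented purely for illustration:

```python
# A minimal sketch of success criteria recorded in measurable terms.
# The criteria names, baselines, and targets below are hypothetical.

success_criteria = {
    "geometry_regents_mean": {"baseline": 58, "target": 75},        # class average score
    "practice_exam_pass_rate": {"baseline": 0.40, "target": 0.85},  # share of students passing
}

def is_met(criterion: dict, observed: float) -> bool:
    """A criterion counts as met when the observed result reaches the agreed target."""
    return observed >= criterion["target"]

# Example: check an observed class average against the pre-defined target.
print(is_met(success_criteria["geometry_regents_mean"], 81))  # True
```

Writing the targets down in this form forces the stakeholders to commit to numbers before the data arrives, which is precisely what defining success in advance requires.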
Karl E. Wiegers, PhD, spent 18 years at Eastman Kodak as a research scientist, software developer, software manager, and software process and quality improvement leader prior to starting Process Impact in 1997. In his February 2002 article, Success Criteria Breed Success (https://www.ibm.com/developerworks/rational/library/2950.html), Dr. Wiegers states,
Defining explicit success criteria during the project’s Inception Phase keeps stakeholders focused on shared objectives and establishes targets for evaluating progress. For initiatives that involve multiple subprojects, success criteria help align each subproject with the big picture. In contrast, ill-defined, unrealistic, or poorly communicated success criteria can lead to disappointing business outcomes. Such vague objectives as “world class” or “killer app” are meaningless unless you can measure whether you’ve achieved them.
Dr. Wiegers might as well have been speaking about student achievement instead of software development, school assessments instead of project phases. The principle is the same: to avoid undesirable outcomes, assessments must, at the outset, clearly define success by answering, “What is our objective, and how will we get there?”
Make the Process Easy – Avoid Information Overload
What good is data collection if the process is overly complicated? How useful is an assessment if those required to record their information suffer from information overload and fail to fill out the forms properly, if at all? How cumbersome is it to sift through reams of unnecessary data?
A principal may cajole or threaten, may even provide significant incentives, yet to maintain buy-in and positive morale, and to ensure accuracy, the data collection needs to be easy. Furthermore, the administration and faculty may be able to interpret the data, but will other stakeholders?
Many schools have switched to an online system for recording student grades. Teachers enter grades for each subject and all its sub-categories. Given the volume of information mandated, schools need to confirm that teachers have entered the data properly. The benefit is a comprehensive overview of student achievement, with an online record maintained for each student and class. The challenge arises when one looks at the resulting printout of the report card. Professionals and parents receiving the data may be inundated with information, and those with any sort of sensory challenge may have a difficult time appreciating and processing it all. [Parenthetically, a solution that would satisfy both audiences would be to have a graphic designer produce a more readable, eye-pleasing report card.]
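For schools whose grading system can export its records, even a short script can condense the full export into a one-line overview per student. Here is a minimal sketch, assuming a hypothetical export format; the field names and sample rows are invented for illustration:

```python
# A minimal sketch: condense a verbose gradebook export into one summary
# line per student, so readers are not inundated with every sub-category.
# Field names and sample rows are hypothetical.

from collections import defaultdict
from statistics import mean

grade_rows = [
    {"student": "Student A", "subject": "Geometry", "category": "Quizzes",  "grade": 72},
    {"student": "Student A", "subject": "Geometry", "category": "Homework", "grade": 88},
    {"student": "Student A", "subject": "History",  "category": "Essays",   "grade": 91},
    {"student": "Student B", "subject": "Geometry", "category": "Quizzes",  "grade": 64},
]

# Average each student's grades per subject, dropping the sub-category detail.
by_student = defaultdict(lambda: defaultdict(list))
for row in grade_rows:
    by_student[row["student"]][row["subject"]].append(row["grade"])

for student, subjects in by_student.items():
    overview = ", ".join(f"{subject}: {mean(grades):.0f}" for subject, grades in subjects.items())
    print(f"{student} | {overview}")
```

The detail is still there in the full record for anyone who needs it; the summary simply keeps the first impression simple.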
To ensure the effective collection and interpretation of data, keep it simple – make the process easy and avoid information overload.
Eye on the Prize
It is challenging to stay the course. It is easy to get distracted. How many assessments have been derailed due to external and internal factors that divert focus?
A number of years ago, we received complaints from students and parents that too much homework was being assigned. We asked the students to record when they began and ended their homework each evening for two weeks, and to note whether they had been interrupted at any point and for how long. The students understood that we were committed to revisiting our homework policy and easing the load based on the results. They also understood that inaccurate or incomplete reporting would impair the outcomes.
Despite our having defined success at the outset, made data collection easy through a simple form, and seen apparent motivation to stay the course, the assessment was nearly compromised: only a few students actually filled out the forms for the full two weeks.
But by keeping our eye on the prize, namely the objective of determining whether we needed to change our homework policy, and by appreciating that we were dealing with teenagers, we were able to salvage the assessment. We interviewed the students while the prior two weeks were still relatively fresh in their minds and added that data to make our determination to rein in the volume of material assigned each day.
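For illustration, here is a minimal sketch of the kind of tally such logs support; the entries and the 90-minute policy threshold are hypothetical:

```python
# A minimal sketch of tallying homework time logs.
# Entries and the 90-minute policy threshold are hypothetical.

from statistics import mean

# (student, start_minute, end_minute, minutes_interrupted), one tuple per evening
logs = [
    ("Student A", 0, 150, 20),
    ("Student A", 10, 130, 0),
    ("Student B", 0, 200, 45),
]

minutes_by_student = {}
for student, start, end, interrupted in logs:
    net = (end - start) - interrupted  # net homework time that evening
    minutes_by_student.setdefault(student, []).append(net)

for student, nights in minutes_by_student.items():
    avg = mean(nights)
    status = "over policy" if avg > 90 else "within policy"
    print(f"{student}: {avg:.0f} min/night on average ({status})")
```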
Effective assessments vigorously maintain focus on who, what, and why we are assessing, and remain flexible when necessary.
Analyze the Analysis
After the data is in, effective assessments end with a simple question: did we get what we were looking for?
At the conclusion of many professional development workshops and conferences, the presenters hand out surveys. Often, they ask for key takeaways and whether the conference met the attendees’ goals. They are not just doing it for the sake of the participants. Rather, they are determining if they accomplished what they had set out to accomplish.
To avoid data going to waste, or being misrepresented or misused, it is paramount to ascertain that the appropriate targets were reached, the intended goals met, and the objectives achieved. Comparing the data culled with the original success criteria ensures that the intended outcomes inform the conclusions drawn and the plans made.
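Closing the loop can be as mechanical as walking the collected results back through the success criteria defined at the outset. A minimal sketch, with hypothetical criteria and observed values:

```python
# A minimal sketch of "analyzing the analysis": compare each collected
# result against the success criterion defined at the outset.
# Criteria names, targets, and observed values are hypothetical.

success_criteria = {
    "geometry_regents_mean": 75,      # target class average
    "practice_exam_pass_rate": 0.85,  # target share of students passing
}

observed_results = {
    "geometry_regents_mean": 81,
    "practice_exam_pass_rate": 0.90,
}

for name, target in success_criteria.items():
    result = observed_results.get(name)
    met = result is not None and result >= target
    print(f"{name}: target {target}, observed {result}, {'met' if met else 'NOT met'}")
```

Any criterion that comes back “NOT met,” or that was never measured at all, is exactly where the conclusions and future plans deserve a second look.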
In Summary
We acted as if we were initiating from scratch the process and discussion of how best to address the upcoming year’s math needs. Our objective was clearly identified: provide our students with the best math teacher available within budget, one who would teach and motivate them to realize their potential in the NYS Regents math curriculum. Success was defined as the course we would follow to ascertain the traits, skills, and temperament a “best teacher” would encompass, along with our interview and hiring process. Compared with a typical model lesson, the review sessions had been more comprehensive and offered greater insight into likely future outcomes.
When reviewing the data, it was easy to match our conclusions against our success criteria. Having maintained focus on what, why, and who we were assessing, we concluded that this teacher met our criteria. After procuring the requisite funds and equipment, we hired the math teacher from New Jersey and successfully opened our first long-distance classroom the following September.
Schools must maintain myriad records. To improve and to ensure they are meeting their students’ needs, they constantly conduct assessments of various kinds. By defining success in advance, making data collection easy, avoiding information overload, keeping an eye on the prize, and analyzing the analysis, school administrators can feel confident that their assessments are effective.
Rabbi Eliezer Y. Lehrer, M.Ed., serves as Headmaster of Ora Academy, an all-girls high school located in Rochester, New York, and is the creator and presenter of The Luzzato Approach™. Rabbi Lehrer may be reached at elehrer@oraacademy.org.