Ohio State Developing New Assessments Available to All Providers
By Bhupi Bhasin, Tk20 President
Until recently, evidence of teacher candidate content knowledge and ability to apply it to help students learn was stuck in neutral within educator preparation. To provide such evidence, preparation providers have primarily relied on (1) licensing examinations of content knowledge; (2) college GPA in the discipline; (3) provider-developed assessments of content and content-pedagogy in coursework; and (4) provider-developed student teaching assessments which include a focus on content and content-pedagogy.
Let's look at these sources of evidence. In the past, examinations of content knowledge for licensing purposes were routinely excoriated for being too easy. For example, critics claimed that the mathematics knowledge required on some content exams was no more advanced than what the candidates' own students would be expected to know. Moreover, there has been little to no comparability in pass rates, since each state sets its own cut-off scores. Critics also complained that, with grade inflation rampant, GPA was not a reliable measure of content knowledge.
New P-12 Standards and Assessments Are Increasing Rigor in Teacher Licensing Assessments
The new P-12 common core state standards (college and career-readiness standards), now adopted by most states, are changing the standards of evidence for teacher candidate content knowledge. In the first decade of the new century, state leaders worked together in a consortium to create rigorous new standards in English and math, and now science. The English and math standards and the new science standards expect students to conduct high-level analysis and problem-solving. P-12 instruction, curricula, and assessment are in the midst of change in virtually all states (even states that have not adopted the broadly used college and career-readiness standards are revising their own state standards). Teachers must have deep content knowledge to be up to the task.
New P-12 assessments that align with new P-12 college and career-readiness standards are on the horizon. Two vendors have created normed assessments and are poised to administer them when states are ready. Some states are creating their own assessments that align with the standards. In addition, CAEP suggests that one day, the accrediting body could work with states to establish a common passing score on licensing exams.1 This would provide the profession with a benchmark it has yet to create, enabling reciprocity among states and easing teacher mobility and shortage issues. By inserting language pertaining to the future into current documents, CAEP is pushing the envelope and moving the conversation into new territory. Bringing up a desired future state of being is a first step toward making it a reality in any undertaking.
The changing P-12 assessment landscape has caused an inevitable change in assessments for teacher licensing. States need a way of knowing that teachers are able to teach students to meet the rigorous new P-12 standards. The major vendors that create licensing exams have revised those exams to take into account the P-12 college and career-readiness standards, and these are rolling out. Erica Brownstein, Assistant Dean of Educator Preparation in the College of Education and Human Ecology at The Ohio State University, says that Ohio has a brand new test of content knowledge for teacher licensing designed to align with the P-12 standards that teachers are expected to help their students meet. In the first year of implementation, Brownstein says, pass rates ranged from 20 percent to 98 percent across the state. At The Ohio State University and throughout Ohio, candidates must pass the new Ohio Assessments for Educators content examination for licensing before student teaching. This way, the provider, the schools, and the public have an assurance that the candidate has sufficient content knowledge to teach the subject during student teaching. Having candidates take and pass the content exam prior to student teaching is now a common practice, unlike a few short years ago.
In previous decades, many providers did not have candidates take the content knowledge licensing exam until just before graduation. Some institutions had a significant percentage of individuals who did not pass the exam; in these cases, institutions granted a “general studies” degree to the unsuccessful teacher education candidates. Only when candidates were about to graduate did they discover they could not enter the profession for which they had spent four years preparing. That situation called into question what the candidates had learned, the effectiveness of the preparation programs, and fairness to the students who took and passed courses and expected to graduate and teach. It is one reason for today's culture of transparency, and why the accreditation body now expects providers to publish overall outcomes on licensing examinations prominently on a link from the homepage of their websites. Several years ago, NCATE instituted a requirement that 80 percent of completers must pass the licensing exam in order for the provider to gain accreditation. The requirement has helped to prevent accredited preparation providers from graduating large numbers of candidates who are unable to pass the licensing exam.
CAEP Raises the Bar Further
CAEP has just raised the bar again to help ensure that most candidates attain sufficient content knowledge. Now, CAEP expects providers to have an 80 percent pass rate “within two administrations” of the state content examination for licensure.
Before CAEP's inclusion of the “two administrations” expectation, media reported that some candidates from an institution might take the test five or even seven times before they passed. There was no sanction from the accrediting body. This state of affairs contributed to criticism of schools of education and the educator preparation community at large for a lack of rigor. Brownstein says that CAEP's decision to require providers to have the 80 percent pass rate in two administrations is "a very positive move forward. It will initially affect a certain percentage of candidates, but in the long run it will improve curriculum and instruction." Following this CAEP decision, providers will no doubt be motivated to ensure that their candidates are able to pass the exam, and consequently will closely examine their content and content-pedagogy coursework and areas of strength and weakness.
Critics of NCATE and TEAC accreditation in past years claimed that accreditation made no difference in the quality of preparation. The new CAEP requirement does help differentiate accreditation requirements from state requirements. In order to be professionally accredited, providers must now meet a higher standard. Those providers who cannot meet the new expectation will likely first be placed in a lower accreditation category and eventually be dropped from the rolls of accredited providers. States, on the other hand, make no differentiation between those who pass licensing examinations on the first try and those who pass on the seventh. The accreditation requirement imposes stricter accountability on the accredited provider to ensure that the candidate is indeed on a path to graduate and gain a license.
Accreditation Requirements Stimulate Plethora of Provider-Developed Assessments in the Quest for Sound Evidence of Program Effectiveness
Preparation providers have spent the last two and a half decades developing their own assessments, as the field began focusing on candidate performance and effect on P-12 student learning. NCATE's 2000 “performance-based standards” were a watershed at the time in asking providers to develop assessments that demonstrated candidate knowledge and skill. An entire industry grew up around these assessment requirements. Schools of education began to lead the way within their own institutions as the movement to chart student outcomes within higher education gained ground during the past decade. Many assessment directors in schools of education have become university-wide leaders in assessment.
In terms of homegrown assessment, the vast majority of provider-developed instruments could not stand as valid or reliable if put to the test. Brownstein says, "Having all assessments be valid and reliable from a psychometrician's standpoint is not realistic. For a large university such as Ohio State, this would amount to hundreds of validity and reliability studies. Each institution needs the expertise, the time, and a large enough 'n' in order to run true validity and reliability studies. Then, those assessments might never change unless one wishes to invest and run the validity and reliability studies again when the assessments change." In response, NCATE and TEAC both developed guidance to providers on reasonable ways to help ensure that assessments measure what is intended. CAEP has also issued guidance to address this issue.
Aside from the validity and reliability issue, these individually developed assessments have kept institutions in their own silos. When the National Academy of Education attempted to assess the state of educator preparation in 2008, it could not complete the task; there was simply not enough comparable data, and there had been no attempt to create a national database.
Ohio State Collaborates with Other Providers Across the State to Develop New Assessments
Brownstein reports an exciting development in the quest for reliable and valid assessments. "The most significant activity to address the CAEP requirements for validity and reliability is that we are collaborating with other higher education providers across Ohio to develop new assessments, both unit-wide instruments (a non-discipline specific student teaching assessment) and SPA-specific instruments. This type of collaboration enables us to test an instrument in a variety of contexts and with sufficient data for reliability studies. Once these are developed, we intend to publish the information and make all instruments freely available to any provider that wishes to use them as part of their accreditation process."
Brownstein continues, "Ohio State received a funding award from the state for outstanding performance for pre-service programs. As a land grant institution, we are using those funds to create instruments that other institutions are free to use; it is part of our mission to give back to the state and education community at large. Ohio State is paying for the psychometric evaluation studies; as a Research 1 institution, we have the expertise to conduct the necessary studies, and faculty experts are willing to do the work."
"By collaborating with other institutions, we gain wisdom from numerous points of view,” says Brownstein. “This 'crowdsourcing' helps ensure reliability and validity. By conducting the studies and then providing them to others, we're making it easier for preparation programs to focus on what's important: candidate learning that improves P-12 student learning. When institutions are too busy attempting to develop their own instruments and trying to conduct these studies, it takes valuable time away from the central mission of teaching and learning. Small institutions simply don't have the resources or large enough 'n' to conduct these studies."
In addition to the non-subject-specific student teaching instrument, Ohio State is adding a subject-specific student teaching addendum for math, science, and foreign language. This will help those secondary programs where the 'n' is typically very small. Moreover, each provider can add whatever items it wishes to the new assessments. "That way, you have the best of both worlds," says Brownstein.
Tk20 Online Assessment System is Ohio State's Data Management Tool
To manage the Ohio State educator preparation assessment system, with literally hundreds of assessments of candidate knowledge and skill, the unit uses the Tk20 online management tool. "Tk20 enables Ohio State's programs to collect, monitor, and analyze individual candidate progress," says Brownstein. Students purchase an account which enables them to complete course assignments, build portfolios, and provide information on field experiences, student teaching, and administrative internships. "We can aggregate and disaggregate data across programs and campuses to understand trends in strengths and weaknesses," she continues.
The Big Leap: A Nationally Normed Assessment of Candidate Performance Takes the Field to a New Level
Until 2010, there was no nationally normed, valid examination of candidate performance showing that teacher candidates understood the content and could apply that knowledge in ways that help P-12 students learn. What Matters Most, a 1996 seminal report of the National Commission on Teaching and America's Future, called for a performance examination for state licensing. A decade and a half later, that call has come to fruition with the advent of the edTPA™—a watershed in the history of American education. The new assessment has ushered in a new era in educator preparation which will increase the focus on P-12 student learning.
The edTPA brings the field of education a nationally normed valid and reliable assessment of candidate performance in the classroom. It helps to fill the gap in evidence that has long been missing in educator preparation. Brownstein discusses the benefits of the new assessment: "The edTPA is an objective evaluation that examines candidate ability to apply knowledge of content to help students learn at higher levels required by new P-12 standards. With 15 rubrics examining candidate performance, the assessment has been the instigator for deep and rich discussions about candidate performance and curriculum alignment and priorities. Educators within and across programs and within a discipline around the state and country now talk the same language. It has built community."
Candidates submit artifacts such as lesson plans and learner work samples and respond to commentary prompts. The final edTPA portfolio is submitted to Pearson through Tk20 and scored by educators and faculty with subject-area expertise.
In addition to the key assessments discussed above, CAEP has outlined the various sources of evidence educator preparation providers may choose to use to demonstrate that a standard has been met; they are contained in Appendix D of the 2013 Accreditation Standards.
When we look back over the past two decades, we can see how far the education profession has advanced in identifying effective practices in educator preparation and in teaching, and in being able to evaluate candidate and teacher performance in order to improve it. We've come a long way, thanks to advancing data technology, new research, and educator commitment to improve teacher preparation, teaching practice, and assessment.
The edTPA trademarks are owned by The Board of Trustees of the Leland Stanford Junior University. Use of the edTPA trademarks is permitted only pursuant to the terms of a written license agreement.

1. CAEP Accreditation Standards (2013). Council for the Accreditation of Educator Preparation, Washington, DC, p. 45.