


Evaluation of Professional Development

Introduction

The National Science Education Standards (NRC, 1996) contain a chapter devoted to Standards for Professional Development for Teachers of Science, whose purpose is to "provide criteria for making judgments about the quality of the professional development opportunities that teachers of science will need in order to implement the National Science Education Standards" (p. 55). Using these standards as the criteria should give districts/schools all the tools they require for the evaluation of the professional development of their teachers.

Assessment of Teachers in Professional Development Activities

Each of the modules on this website includes suggestions for evaluating teachers' learning. The assessments we suggest are specific to each module but, in general, fall into one of several categories:

  • assessment of content knowledge acquisition - using checklists, self-reporting essays, questions focused on specific content, analysis of teachers' oral reports, etc.
  • assessment of acquisition of skills - monitoring teachers' activities within the professional development session, observing teachers' use of skills within their own classrooms, etc.
  • assessment of teachers' attitudes toward science and science teaching - using checklists, pre- and post-interviews, journal-type writings, etc.
  • assessment of classroom implementation - begins with guiding participants to write a plan for implementation (each module contains a template), continues with checking this plan (product evaluation) for faithfulness to the goals of the professional development session, and concludes with a visit to the teachers' classrooms that includes time for reflection and dialogue

Each section of the learning cycle procedures includes a suggestion for assessment (please see the essay on the learning cycle for more details).

In addition, the professional development provider may wish to use one or more quantitative measures within the context of a coordinated program of professional development activities. The instruments described below are only two of the many that can be found in the education literature. References to these instruments are provided; each has met validity and reliability requirements for a sound instrument, and each could be used as a pretest to gather baseline data and then as a posttest to look for change resulting from the professional development program intervention.

The Shrigley-Johnson Science Attitude Scale (Shrigley & Johnson, 1974) provides information on the science attitudes of inservice teachers using a Likert-type scale. Teachers' scores could be sorted by gender, age, teaching experience, license area, or any other variable of interest. Two dimensions of teachers' self-efficacy (outcome expectancy beliefs and personal self-efficacy beliefs) are measured by the Science Teaching Efficacy Beliefs Instrument (STEBI) (Riggs & Enochs, 1990). A low score on this measure may indicate that teachers believe they cannot teach science effectively or that, even if they teach effectively, their students cannot learn science. Ascertaining teachers' beliefs helps predict their science teaching behaviors and is useful as a first step in the change process.
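The pretest/posttest comparison described above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration of scoring a Likert-type instrument and computing the mean per-teacher change from pretest to posttest; the item names, the set of reverse-scored items, and the response data are invented for illustration and are not drawn from the actual Shrigley-Johnson or STEBI instruments.

```python
from statistics import mean

# Hypothetical Likert scale: 1 = strongly disagree ... 5 = strongly agree.
# Negatively worded items are reverse-scored so that a higher score always
# reflects a more positive attitude. "item3" is an invented example.
NEGATIVE_ITEMS = {"item3"}

def score_response(item, value, scale_max=5):
    """Reverse-score negatively worded items."""
    return (scale_max + 1 - value) if item in NEGATIVE_ITEMS else value

def total_score(responses):
    """Sum of scored items for one teacher's completed instrument."""
    return sum(score_response(item, v) for item, v in responses.items())

def mean_change(pre, post):
    """Mean per-teacher change from pretest to posttest, paired by teacher id."""
    return mean(total_score(post[t]) - total_score(pre[t]) for t in pre)

# Invented baseline (pretest) and follow-up (posttest) data for two teachers.
pre = {
    "t1": {"item1": 2, "item2": 3, "item3": 4},  # item3 reverse-scores to 2
    "t2": {"item1": 3, "item2": 2, "item3": 3},
}
post = {
    "t1": {"item1": 4, "item2": 4, "item3": 2},  # item3 reverse-scores to 4
    "t2": {"item1": 4, "item2": 3, "item3": 2},
}

print(mean_change(pre, post))
```

A positive mean change would suggest improvement after the intervention, though in practice a paired significance test and the instrument authors' published scoring procedures should be used rather than this simplified sketch.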

Evaluating the Professional Development Experience

In this section we give some specific suggestions that professional development providers might use to evaluate the effectiveness of a session they have conducted using one or more of the modules found on this website. The publications marked with an asterisk (*) in the bibliography (only a starting place) give much more information and are worth acquiring for your personal library.

In their seminal work, Designing professional development for teachers of science and mathematics, Loucks-Horsley, Hewson, Love, and Stiles (1998) suggest several questions that professional development providers should ask themselves, elaborate on these questions, and offer helpful tips for answering them.

These questions include:

  • What are the desired goals/outcomes?
  • What outcomes should be assessed and why?
  • How can these outcomes be measured?
  • How can the results of these evaluations lead to continuous improvement?

Helpful tips for answering these questions include:

  • Focus on outcomes rather than activities when planning your professional development interventions. Activities are easier to assess, but outcomes lead to a program that is more focused and purposeful
  • Use multiple data sources to evaluate your program. Seek information from participants, colleagues, administrators, and others who may add insight into the success of the learning experience. Use multiple data types (interviews, product analysis, performance tasks, focus groups, etc.)
  • Horizon Research, Inc. (1997) has developed many high-quality evaluation instruments that may be useful
  • While the ultimate goal of teachers' professional development is to impact students' learning of science, there are many sub-goals to be addressed as well. Be sure to think about the relevant sub-goals and build your assessment plan to address these sub-goals sequentially, building toward the achievement of your ultimate goal
  • Give participants a share in specifying and discussing the desired outcomes of the professional development program. Such engagement promotes reflection, self-evaluation, personal goal setting, and ownership - all desirable outcomes on their own

Guskey (2000) describes five sequential levels to be addressed when evaluating professional development. This short essay will only give brief descriptions of these levels. We encourage the reader to study Guskey's model carefully and design the local plan around this inclusive, logical framework.

Level One: Participants' Reactions

This level is the first, the most common, and the easiest to evaluate. The evaluation is guided by questions covering several areas.

  • Content questions - Did the content make sense? Was your time well spent? Will what you learned be useful to you?
  • Process questions - Was the leader knowledgeable and helpful? Were the goals clearly specified? Was sufficient time provided for the task?
  • Context questions - Were the facilities conducive to learning? Were the chairs comfortable?

Collecting answers to these questions often involves a written evaluation form or survey completed at the end of the session. Depending on your goals, it may be useful to distribute such forms to participants several days after the event, allowing them time to reflect on the experience and its relevance. Guskey gives several examples of forms (pp. 108-114) that might be used or amended to measure level one. It is important to consider how the information gathered at this level will be used. Typically, results of these evaluations are used to effect improvements in subsequent professional development sessions.

Level Two: Participants' Learning

Did the participants achieve the intended goals? This is the key question to address at this level. Remember that there may be various types of goals to be measured - cognitive, affective, and psychomotor - and the science professional development session should address all three of these areas. The book provides helpful forms (pp. 128-130) for gathering written feedback. Other forms of evaluation include interviews, participants' learning logs, participants' reflective journals, pretests, and posttests. The information gathered should guide improvements in the format, content, and organization of future sessions.

Level Three: Organization Support and Change

Organizational factors at the district/school levels can "make or break" the professional development experiences. The values, beliefs, and norms of the organization (culture) need to be recognized when changes advocated in the professional development goals are pursued. Therefore, we should look at variables related to support for change. Among these variables are organization policies, resources, collegial support, the principal's support, administrators' leadership, provision of time, and recognition of success. Tools that may be useful in collecting data to measure the effect of these variables include direct observations, focus groups, questionnaires, structured interviews, and reflective journals. Results of this evaluation should inform all aspects of organizational support.

Level Four: Participants' Use of New Knowledge and Skills

Did participants put into practice the knowledge and skills acquired in the professional development program? At what level of use did the implementation occur? Are the practices observed or reported really new or different from participants' prior behaviors? Gathering data to answer these questions may involve direct observation; interviews with participants, their supervisors, and their students; focus groups; reflective journals; and participants' portfolios. Analysis of the data provides evidence on current levels of goal attainment and can help restructure future activities.

Level Five: Student Learning Outcomes

The standards movement and the focus on accountability have put emphasis on students' learning outcomes as a consequence of teachers' professional development experiences. As with teachers' learning outcomes, three domains (cognitive, affective, and psychomotor) should be addressed when measuring students' learning outcomes. Standardized performance assessments, teacher-developed classroom assessments, students' portfolios, questionnaires, and school records are vehicles for data collection. Qualitative measures are useful for measuring affective learning outcomes. Guskey cautions that multiple measures of student learning should be used in professional development evaluations.

Guskey closes his book with suggestions for presenting evaluation results, including a helpful framework on p. 259.

Bibliography

*Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Horizon Research, Inc. (1997). 1997 local systemic change: Core evaluation data collection manual. Chapel Hill, NC: Author.

*Loucks-Horsley, S., Hewson, P. W., Love, N., & Stiles, K. E. (1998). Designing professional development for teachers of science and mathematics. Thousand Oaks, CA: Corwin Press.

National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.

Riggs, I. M., & Enochs, L. G. (1990). Toward the development of an elementary teacher's science teaching efficacy belief instrument. Science Education, 74, 625-637.

Shrigley, R. L., & Johnson, T. M. (1974). The attitude of in-service elementary teachers towards science. School Science and Mathematics, 74, 437-446.