Evaluation Essentials is an indispensable introduction to program evaluation. Program descriptions drawn from a variety of sectors, including public policy, public health, non-profit management, social work, arts management, education, international assistance, and labor, illustrate the book's step-by-step approach to the process and methods of program evaluation. Ideal for students as well as new evaluators, Evaluation Essentials provides a comprehensive foundation in the core concepts, theories, and methods of program evaluation.
Beth Osborne Daponte, Ph.D., is a senior research scholar at the Institution for Social and Policy Studies and lecturer in the School of Management at Yale University. Currently, she is also working with a large community foundation, helping it address its evaluation challenges at both the organizational and programmatic levels.
Figures and Tables. Preface. Acknowledgments. The Author.

ONE: INTRODUCTION. Learning Objectives. The Evaluation Framework. Summary. Key Terms. Discussion Questions.

TWO: DESCRIBING THE PROGRAM. Learning Objectives. Motivations for Describing the Program. Common Mistakes Evaluators Make When Describing the Program. Conducting the Initial Informal Interviews. Pitfalls in Describing Programs. The Program Is Alive, and So Is Its Description. Program Theory. The Program Logic Model. Challenges of Programs with Multiple Sites. Program Implementation Model. Program Theory and Program Logic Model Examples. Summary. Key Terms. Discussion Questions.

THREE: LAYING THE EVALUATION GROUNDWORK. Learning Objectives. Evaluation Approaches. Framing Evaluation Questions. Insincere Reasons for Evaluation. Who Will Do the Evaluation? External Evaluators. Internal Evaluators. Confidentiality and Ownership of Evaluation. Ethics. Building a Knowledge Base from Evaluations. High Stakes Testing. The Evaluation Report. Summary. Key Terms. Discussion Questions.

FOUR: CAUSATION. Learning Objectives. Necessary and Sufficient. Types of Effects. Lagged Effects. Permanency of Effects. Functional Form of Impact. Summary. Key Terms. Discussion Questions.

FIVE: THE PRISMS OF VALIDITY. Learning Objectives. Statistical Conclusion Validity. Small Sample Sizes. Measurement Error. Unclear Questions. Unreliable Treatment Implementation. Fishing. Internal Validity. Threat of History. Threat of Maturation. Selection. Mortality. Testing. Statistical Regression. Instrumentation. Diffusion of Treatments. Compensatory Equalization of Treatments. Compensatory Rivalry and Resentful Demoralization. Construct Validity. Mono-Operation Bias. Mono-Method Bias. External Validity. Summary. Key Terms. Discussion Questions.

SIX: ATTRIBUTING OUTCOMES TO THE PROGRAM: QUASI-EXPERIMENTAL DESIGN. Learning Objectives. Quasi-Experimental Notation. Frequently Used Designs That Do Not Show Causation. One-Group Posttest-Only. Posttest-Only with Nonequivalent Groups. Participants Pretest-Posttest. Designs That Generally Permit Causal Inferences. Untreated Control Group Design with Pretest and Posttest. Delayed Treatment Control Group. Different Samples Design. Nonequivalent Observations Drawn from One Group. Nonequivalent Groups Using Switched Measures. Cohort Designs. Time Series Designs. Archival Data. Summary. Key Terms. Discussion Questions.

SEVEN: COLLECTING DATA. Learning Objectives. Informal Interviews. Focus Groups. Survey Design. Sampling. Ways to Collect Survey Data. Anonymity and Confidentiality. Summary. Key Terms. Discussion Questions.

EIGHT: CONCLUSIONS. Learning Objectives. Using Evaluation Tools to Develop Grant Proposals. Hiring an Evaluation Consultant. Summary. Key Terms. Discussion Questions.

Appendix A: American Community Survey. Glossary. References. Index.