Handbook of Practical Program Evaluation

Evaluability assessment

Any policy, program, function, project, or activity with a definite set of goals must be evaluated if it is to be implemented successfully. Regular monitoring and evaluation serve this purpose. Evaluability assessment helps detect and correct problems, and its key steps require the full involvement of intended users; clarity about the program as intended by policymakers, managers, and stakeholders; realistic, plausible, and measurable program goals; facilitation of agreed changes; exploration of several evaluation options; and agreement on evaluation priorities and program information needs.

Initially, evaluators should document the current picture of performance: the goals being pursued, the time frame for achieving them, the problems constraining performance, the time required to solve those problems, the information needed to assess performance, and what the stakeholders want. Logic models and performance indicators may then be developed. Following general agreement, the existing program design may be modified to improve outcomes, or alternative designs may be considered.
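To make the idea of a logic model concrete, the following minimal sketch (the stages are standard; the example program content and indicator names are hypothetical, not from the Handbook) represents the causal chain an evaluation would test, with a measurable indicator attached to each stage:

```python
# A minimal, hypothetical logic model: each stage leads to the next,
# with an indicator attached so performance can later be measured.
logic_model = [
    {"stage": "inputs",     "example": "funding, staff",       "indicator": "budget spent"},
    {"stage": "activities", "example": "training sessions",    "indicator": "sessions held"},
    {"stage": "outputs",    "example": "participants trained", "indicator": "completion count"},
    {"stage": "outcomes",   "example": "improved employment",  "indicator": "job placement rate"},
]

def describe(model):
    """Render the causal chain the evaluation will test."""
    return " -> ".join(item["stage"] for item in model)

print(describe(logic_model))  # inputs -> activities -> outputs -> outcomes
```

Laying the model out this way makes it easy to see where an indicator is missing or where a claimed link between activities and outcomes is implausible, which is exactly what evaluability assessment probes.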

An evaluability assessment of the family preservation programs revealed that the program would never achieve its stated goals; the issues undermining its success were unearthed, along with recommendations to change the design. In the Tennessee Prenatal Program, by contrast, the evaluability assessment led to an interim evaluation showing that the goal of reducing the incidence of low birth weight had been achieved, which proved useful in the budget process. Evaluability assessment is also required for strategic planning.

Implementation evaluation

Management reforms and public demand have promoted implementation evaluation in recent times as a way to achieve measurable results and change operations. Catering to customer needs, delivering exemplary programs, continuously improving operations, applying evidence-based practices, and increasing accountability all help close the explanatory gap between planning and implementation. Implementation evaluation indicates which plans worked, and which did not, in producing the anticipated outcomes. Black-box implementation evaluation measures only before the program starts and once more when outcomes are available.

Under the black-box paradigm, little is understood about how the program was delivered or how its delivery might have been improved. The transparent-box paradigm, in contrast, studies program delivery carefully, assessing implementation frequently and linking theory, activities, outputs, and outcomes. It also evaluates the environmental and organizational factors influencing the process. The explanatory gap seen in the black-box paradigm is absent here.

The practical methods are formative evaluation (during the development phase), summative evaluation (after completion), process evaluation (whether services delivered match those planned), descriptive evaluation (details of the program), performance monitoring (using feedback and performance indicators), and implementation analysis (at the end of the program). Core questions and methods for implementation evaluation are available for all four stages of evaluation: assessing the need for and feasibility of the program, planning and designing it, delivering it, and improving it.

Each stage has implementation evaluation methods of its own. Great emphasis is placed on front-end analysis, frequent feedback of results, and follow-through activities. If managers and staff participate actively, findings are more likely to be accepted and used. The final decision on how to modify programs rests with managers. Implementation evaluation increases accountability, provides evidence of ongoing activities, and helps senior managers and policymakers create new designs and policy directions in their efforts to improve programs and operations.

Performance monitoring

Performance measures are quantitative indicators used to track the performance of agencies and organizations, focusing on programs, employee development, or control of administrative overhead. Monitoring can be applied to a small institution or to statewide services such as transport or health. The objective is to inform decision makers, strengthen performance, and provide accountability to stakeholders such as higher-level managers, executive agencies, governing bodies, funding agencies, accrediting organizations, clients, and customers.

The types of measures are resources, outputs, productivity, efficiency, service quality, outcomes, cost-effectiveness, and customer satisfaction. Monitoring helps enhance program management, strategic planning and management, budgeting and financial management, performance management, productivity and quality improvement, contract management, and external benchmarking. The balanced scorecard has been developed as an indicator of performance in the private and government sectors across financial performance, customers, internal business processes, and innovation and learning (Kaplan and Norton, 1992).

Appropriate data sources, operationalized measures, data processing support, and quality assurance procedures are needed for monitoring performance. Performance data are inherently comparative: over time, against targets, among different units, or against external benchmarks, giving rise to time-series or seasonal databases. Comparison helps guard against unreliable and invalid measures. Misinterpretation of data can be misleading and can cause failures or unrealistic expectations, especially in financial management. If the data are not responsive to stakeholders' concerns, the monitoring has failed. Monitoring is a commonsense approach to results-oriented management.
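As an illustration of the comparative measure types listed above, the following sketch (hypothetical program, figures, and target, not taken from the Handbook) derives productivity, efficiency, and performance against a target from raw monitoring data for two years:

```python
# Hypothetical monitoring data for one program over two years.
records = {
    2021: {"clients_served": 1800, "staff_fte": 12, "cost": 540_000},
    2022: {"clients_served": 2100, "staff_fte": 12, "cost": 588_000},
}
target_clients = 2000  # hypothetical annual target set by management

def indicators(r):
    """Derive standard performance measures from raw monitoring data."""
    return {
        "productivity": r["clients_served"] / r["staff_fte"],   # outputs per FTE
        "unit_cost": r["cost"] / r["clients_served"],           # efficiency
        "pct_of_target": 100 * r["clients_served"] / target_clients,
    }

for year, rec in sorted(records.items()):
    ind = indicators(rec)
    print(f"{year}: {ind['productivity']:.0f} clients/FTE, "
          f"${ind['unit_cost']:.2f}/client, {ind['pct_of_target']:.0f}% of target")
```

Because each figure is reported relative to a prior year or a target rather than in isolation, the comparison itself exposes implausible values, which is the sense in which comparative reporting guards against unreliable measures.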

References

Kaplan, R. S., and Norton, D. P. (1992). "The balanced scorecard: Measures that drive performance." Harvard Business Review, pp. 71-79.

Wholey, J. S., Hatry, H. P., and Newcomer, K. E. (Eds.). (2004). Handbook of practical program evaluation (2nd ed.). San Francisco: Jossey-Bass.
