Q&A 10


 

(Book) Management of Human Service Programs

Judith A. Lewis

 

 

CHAPTER 10       EVALUATING HUMAN SERVICE PROGRAMS

In the context of current public policy debates about the value of human service programs and resource challenges in the human services field, program evaluation can be expected to become increasingly relevant (Carman & Fredericks, 2008). Ultimately, evaluation is needed to let us know whether services have taken place as expected and whether they have accomplished what they were meant to accomplish. This kind of information can provide the basis for making sensible decisions concerning current or projected programs.

Program evaluation can be defined as:

The systematic collection of information about the activities, characteristics, and results of programs to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming, and/or increase understanding. (Patton, 2008, p. 39)

Patton, who is particularly interested in the utilization of evaluation findings, adds that “utilization-focused program evaluation (italics in original) is evaluation done for and with specific intended primary users for specific, intended uses” (2008, p. 39).

Human service evaluation, if it is to be of value, must be seen as an integral part of the management cycle and must be closely connected to ongoing management processes and daily activities. Its results must be disseminated to and understood by the people most concerned with program functioning, including community members, funding sources, and service providers, as well as administrators. It can be practical only if individuals who influence service planning and delivery see it as useful.

PURPOSES OF EVALUATION

Evaluators use research techniques, applying them to the needs and questions of specific agencies and stakeholders. Evaluation can be used to aid in administrative decision making, improve currently operating programs, provide for accountability, build increased support for effective programs, and add to the knowledge base of the human services.

ADMINISTRATIVE DECISION MAKING

Evaluation can provide information about activities being carried out by the agency as well as data describing the effects of these activities on clients. Information about current activities can help decision makers deal with immediate issues concerning resource allocations, staffing patterns, and provision of services to individual clients or target populations. At the same time, data concerning the outcomes of services can lead the way toward more rational decisions about continuation or expansion of effective programs and modification or elimination of less effective ones. Decisions concerning the development of new programs or the selection of alternate forms of service can also be made, not necessarily on the basis of evaluation alone but with evaluative data making a significant contribution.

IMPROVEMENT OF CURRENT PROGRAMS

An evaluation can be used to compare programs with the standards and criteria developed during the planning and program design stages. Evaluation can serve as a tool to improve program quality if it provides data that help contrast current operations or conditions with objectives and evidence-based standards. Activities performed as part of an agency’s operations can be compared or contrasted with standardized norms, such as professional or legal mandates, or with the agency’s own plans, policies, and guidelines. Evaluation of service outcomes means that results can be compared with identified community needs, leading to an assessment of the program’s adequacy. Data collection technologies such as employee attitude surveys, management audits, quality audits, cultural competency assessments, and ethics audits can provide information that is very useful in improving program or agency operations. With systematically collected data on hand, agency personnel can make improvements either in the nature of the services or in the ways they are delivered. Although evaluation does not necessarily identify the direction an agency should take, its systematic application does point out discrepancies between current and planned situations. Without it, quality cannot be improved.

ACCOUNTABILITY

Most human service programs are required to submit yearly evaluation or monitoring reports for funding sources or public agencies, and many specially funded projects are required to spend set percentages of their budgets on evaluation. Agencies are accountable not only to funding organizations but also to the clients and communities they serve and even society as a whole. Since the 1990s an “accountability movement” in government and not-for-profit organizations has focused increased attention on adherence to laws and regulations and responsible stewardship of finances as well as effective and ethical implementation of programs. The latter concern, especially regarding the accomplishment of desired program outcomes, is addressed through evaluation.

According to Schorr (1997), in the past “outcomes accountability” and evaluation were separate activities, the former the province of administrators and auditors and the latter of social scientists. Now, however, “the accountability world is moving from monitoring processes to monitoring results. The evaluation world is being demystified, its techniques becoming more collaborative, its applicability broadened, and its data no longer closely held as if by a hostile foreign power” (p. 138). Dissemination of evaluation reports describing the agency’s activities and their effects can help reinforce program accountability. People concerned with agency performance can gain knowledge about the results of services, and this information undoubtedly increases community members’ influence on policies and programs.

BUILDING INCREASED SUPPORT

Evaluation can also enhance an agency’s position by providing the means for demonstrating—even publicizing—an agency’s effectiveness. A realistic fear is that evaluation results may show that a program has had no effect or even negative effects, an issue addressed later in this chapter. This is a legitimate concern, but a responsible agency administrator will welcome evaluation results that could help improve programs, as well as positive results that could showcase the accomplishments of programs.

Evaluation provides information that helps the agency gain political support and community involvement. Evaluative data and analyses can enhance the agency’s well-being if they are disseminated to potential supporters and funding sources as well as to agency staff.

ACQUIRING KNOWLEDGE REGARDING SERVICE METHODS

Much of what is termed “program evaluation” has historically consisted of routine monitoring of agency activities. This approach is still common, but fortunately increasing attention is being paid to the assessment of program outcomes. Additionally, program evaluation methods can be used to develop knowledge about the relationships between interventions and desired outcomes. New knowledge regarding program effectiveness has historically been associated with experimental research, typically randomized controlled trials (briefly discussed later), which test first efficacy (in a controlled setting such as a laboratory) and then effectiveness (in a program setting).

Controlled experiments help determine whether clearly defined program technologies can lead to measurable client changes. Although such research-oriented studies rarely take place in small agencies with limited resources, they do play a major part in establishing the effectiveness of innovative approaches. Program designers need to be able to make judgments concerning the effects of specific services. Knowledge concerning such cause-and-effect relationships can be gained through reviewing research completed in other settings, carrying out ongoing internal evaluations, and utilizing the services of researchers to implement special studies of program innovations.

More feasible than controlled experiments, given limited resources for large-scale experimental designs, are recent developments in evidence-based practice and best practices benchmarking. Evidence-based practice and best practices benchmarking were discussed in Chapter 3 as important aspects of program design. They are mentioned here to emphasize the importance of explicit documentation of program operations to enable organizational learning and knowledge development.

PRODUCERS AND CONSUMERS OF EVALUATIONS

An elegantly designed evaluative research study is of little use if the people who have a stake in an agency’s efforts do not recognize it as important. Evaluation efforts involve a number of groups, referred to here as stakeholders, including not only professional evaluators but also funding sources, administrators and policy makers, service providers, clients or consumers, and community members. These groups serve as both producers and consumers of evaluations, and they can have tremendous influence on the process if they see themselves as owning it. Patton’s (2008) utilization-focused evaluation principles and methods have been shown to be very useful in designing and implementing evaluations so that the findings are actually used.

Historically, the various actors in the evaluation process—professional evaluators, funding sources, policy makers, administrators, and service providers—had separate roles and did not often collaborate, or even communicate, with each other. These role distinctions are blurring, however, as practitioners realize they need knowledge about their programs to make improvements and as policy makers and others realize that the complexity of the evaluation process requires broad-based involvement of all key stakeholders. Two evaluation approaches to be discussed later have addressed this issue: empowerment evaluation and participatory evaluation.

PROFESSIONAL EVALUATORS

A sizable proportion of the evaluation that takes place in human service organizations is performed by professional evaluators, researchers who use their skills either within the evaluation and research departments of large agencies or as external consultants offering specialized assistance. Whether evaluators are employed by the organization or contracted as consultants, they are expected to bring objectivity to the evaluation process. Their presence brings to the evaluation task a degree of rigor and technical excellence that could not be achieved by less research-oriented human service providers.

At the same time, evaluators need to fully engage in dialogue with the client agency and any other stakeholders to ensure that their approach is appropriate for the particular setting. At its worst, an evaluation can focus on the wrong variables, use inappropriate methods or measures, draw incorrect conclusions, or simply be irrelevant to ongoing agency work. Evaluators may produce reports that, although accurate, are too esoteric to be readily understood or used by the people who decide among programs or allocate resources. Evaluators who are overly detached from agency decision making often fail to affect services.

Another negative aspect of the use of external consultants as evaluators is agency workers’ tendency to place evaluative responsibility totally in the consultants’ hands. Evaluation can work effectively only if attention is paid to ongoing collaboration with and involvement of agency staff. If no one but the expert evaluator takes responsibility for assessment of progress toward goals, workers will see evaluation as unfamiliar, threatening, and potentially unpleasant.

Effective evaluators use their technical expertise not to impose evaluation on unwilling audiences but to work closely with others in developing feasible designs. Thomas (2010) suggests that the best approach is collaboration between outside evaluators and agency staff. The agency and the evaluator should have a clear agreement regarding the goals and methods of the evaluation and the specific roles to be played by the consultants and staff. If consultants work with internal evaluation committees, they can help administrators, service providers, and consumers clarify their goals, expectations, and questions so that the evaluation will meet identified needs. The external evaluator’s objectivity and internal agency workers’ active involvement bring the best of both worlds to the evaluation process.

FUNDING SOURCES

Funding sources, particularly organizations providing grants or contracts to human service agencies, can have a positive effect on evaluation. Human service agencies are often required to evaluate projects as part of their accountability to funding sources. Grant applications are expected to include discussions of evaluation designs, and these sections are carefully scrutinized before funding decisions are made.

Funding sources could have even more positive effects if attention were focused more on evaluation content than simply on form. Funders should not assume that the dollar amount spent on evaluation consultants necessarily reflects the quality of the research, nor should they accept simple process monitoring as sufficient. Rather, funding sources should press for more rigorous evaluation of program effectiveness, for both direct consumers and communities.

POLICY MAKERS AND ADMINISTRATORS

Policy makers and administrators are among the primary users of evaluation because they make decisions concerning the fates of human service programs. Decision makers need evaluation data to inform them of alternatives, just as evaluators need decision makers to make their work meaningful.

Agency managers, as well as board members, can make evaluation work more effectively for them if they try to identify the real information needs of their agencies. Evaluations do not have to be fishing expeditions. They can be clear-cut attempts to answer the real questions decision makers pose. If administrators and objective evaluators work together to formulate research questions, the resulting answers can prove both readable and helpful.

HUMAN SERVICE PROVIDERS

A football coach in the 1960s (probably Darrell Royal of the University of Texas) said that his teams rarely passed the ball because “when you pass, three things can happen, and two of them are bad.” In a similar way, there can be three outcomes of an evaluation, and two of them would be seen by staff as “bad”: the evaluation could show that the program made no difference, made things worse for clients, or made desired improvements for clients. It is understandable that staff may feel threatened at the prospect of an evaluation. This is even more likely because providers of services have often been left out of the evaluation process. Involving staff and other stakeholders in the design and implementation of the evaluation can mitigate such concerns and will probably also result in a better evaluation process through the use of the program knowledge of these stakeholders.

Staff members may also feel victimized by evaluation. They are typically asked to keep accurate records and fill out numerous forms, but they are not involved in deciding what kinds of data are really needed. They are asked to cooperate with consultants making one-time visits to their agencies, but they are not told exactly what these consultants are evaluating. They are asked to make sudden, special efforts to pull together information for evaluators to use, but they are not encouraged to assess their progress toward goal attainment on a regular basis. Many human service workers feel that evaluation is a negative aspect of agency operations, serving only to point out shortcomings in their work, and they tend to provide information in such a way that their own programs are protected.

Human service providers could play a much more active and useful role in evaluation if they were involved in the design and implementation of the evaluation, using consultants primarily as technical assistants. Service providers are familiar with changing consumer needs, the relative effectiveness of varying approaches, and the agency itself. Through their involvement with an evaluation committee, they can ensure that the real goals of their programs, the objectives being evaluated, and the work actually being done are all properly addressed. As agencies move increasingly toward becoming learning organizations, as discussed in Chapter 9, staff are more likely to appreciate the value of evaluation in improving their operations and showing the outside world the value of their programs.

CONSUMERS AND COMMUNITY MEMBERS

Consumers and other community members need to be involved in planning and evaluating, from initial goal setting through developing evaluation designs and assessments of program effectiveness. Consumers are in a good position to be aware of the strengths and weaknesses of service delivery systems and the degree to which observed community needs are being met. Current principles of empowerment of staff, clients, and community members in the human services (Hardina et al., 2006) support involvement of these stakeholders in the process.

Regardless of the form their participation takes, citizens have a major role to play in deciding how, why, and for whom human services should be provided. Human service agencies are accountable to the communities they serve. Agency managers have a responsibility to ensure that their programs work to accomplish goals that both staff and consumers understand and value.

THE SCOPE OF HUMAN SERVICE EVALUATION

Human service evaluation can take many forms. The approach used in any one setting is likely to be a function of several variables, including (a) the resources and expertise available for use in evaluations, (b) the purposes for which evaluation results will be used, and (c) the orientations and philosophies guiding agency decision makers. Program evaluations may be categorized in two ways. An evaluation may be categorized based on its purpose: a summative evaluation looks at a program’s accomplishments or results, typically at or near program completion; a formative evaluation occurs during program operation and is intended to provide feedback and information that staff can use immediately to make program changes and improvements. Evaluations may also be categorized as process evaluations or outcome evaluations. Using the systems approach to program design from Chapter 3, process evaluations focus on activities or outputs: the types and numbers of services that are provided. Outcome evaluations look at intermediate or final outcomes: how client conditions, skills, or knowledge have changed as a result of the program.

Human service programs vary tremendously in their approaches to evaluation, running the gamut from simple program monitoring to controlled experiments studying client outcomes. Regardless of their use of resources, depth, or concern for objectivity, however, evaluation efforts need to be reasonably comprehensive if they are to serve any of their stated purposes. Evaluation should provide, at a minimum, basic information concerning program processes and outcomes. Multiple data collection methods and measures (discussed later) will be needed in nearly any substantive evaluation.

TYPES OF EVALUATIONS

Essentially, program evaluation has four basic objectives:

  1. To provide information about the achievement of the program goals and objectives (outcome evaluation)
  2. To provide descriptive information about the type and quantity of program activities or inputs (process evaluation)
  3. To provide information that will be useful in improving a program while it is in operation (a formative evaluation)
  4. To provide information about program outcomes relative to program costs (cost effectiveness), costs per output (unit costs, or efficiency), or financial benefits (cost benefit)

We will first review these four types of evaluation and then address evaluation methods, followed by a discussion of a process for conducting an evaluation.

OUTCOME EVALUATION

There are three general types of outcomes: individual, or client-focused, outcomes; program and system-level outcomes; and broader family or community outcomes (W. K. Kellogg Foundation, 2004). Individual client outcomes are the most common focus of a program and its evaluation. An individual client outcome such as having a former foster youth obtain independent living and a job adequate for self-support could also be part of a program with a system outcome of improving the quality of life for former foster youth. More broadly, family outcomes might include increased parent-child-school interactions or keeping children safe from abuse. A community outcome might be increased civic engagement in a low-income community.

Ultimately, then, at the program level the basic question underlying outcome evaluation must be, “To what degree have clients or the community changed as a result of the program’s interventions?”

Client change can be evaluated in terms of level of functioning before and after receipt of services. Whether services are designed to affect clients’ adjustment, skills, knowledge, or behaviors, some type of assessment tool must be used to determine whether change in the desired direction has taken place. Outcome evaluation requires the routine use of measures such as gauges of behavior change and standardized or specially designed instruments. If a program has been well designed (Chapter 3) and has a complete information system (Chapter 9), it will address all of the elements listed except a plan for use of results. This final point will be covered later, when a program evaluation process is presented.

PROCESS EVALUATION

Process evaluations can assess the extent to which a program is implemented as designed and provide a means for determining whether members of target populations were reached in the numbers projected and whether specified services were provided in the amounts required at the quality level expected. As in the case of an outcome evaluation, the program’s logic model and objectives provide a valuable foundation for the process evaluation.

A specific type of process evaluation is a formative evaluation. As noted, formative evaluations occur during program implementation, whereas summative evaluations are done at the end of a program or a program cycle. A formative evaluation is intended to “adjust and enhance interventions … [and] serve more to guide and direct programs—particularly new programs” (Royse, Thyer, & Padgett, 2010, p. 112).

Using qualitative methods such as interviews, a formative evaluation can also assess how the program implementation process is proceeding, suggesting possible changes in implementation. This type of process evaluation provides funders, the agency board, and any other stakeholders information on how the program is doing with reference to previously identified objectives and standards and also helps agency administrators make adjustments in either the means or the targets of service delivery. Feedback mechanisms must be built into the service delivery system to keep managers informed regarding whether the program is on course, both fiscally and quantitatively.

Process evaluations are usually ongoing; that is, they require the continual retrieval of program data. A process that funding organizations use to receive regular reports on program implementation is known as monitoring. Program monitoring, according to Rossi et al. (2004), is:

the systematic documentation of aspects of program performance that are indicative of whether the program is functioning as intended or according to some appropriate standard. Monitoring generally involves program performance in the domain of program process, program outcomes, or both. (p. 64)

Program goals and objectives are used as the standards against which the evaluation is conducted. If, for example, Meals on Wheels states in its annual plan of operations that it will deliver 1 meal daily to each of 100 clients per program year, or an annual total of 36,500 meals, then it would be expected that approximately 3,042 meals will be provided per month. A process evaluation would entail the assessment of monthly efforts to provide the prorated number of meals, including whether they were provided to eligible clients (that is, members of the target population). This type of evaluation would also examine how the agency’s human resources were used to provide the services.
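A minimal sketch of this kind of prorated monitoring appears below, written in Python. The annual target and monthly proration come from the Meals on Wheels example; the monthly delivery counts are hypothetical.

```python
# Prorated process monitoring sketch using the Meals on Wheels example.
# The annual target comes from the text; monthly delivery counts are hypothetical.

ANNUAL_TARGET = 100 * 365              # 1 meal daily to each of 100 clients = 36,500 meals
MONTHLY_TARGET = ANNUAL_TARGET / 12    # approximately 3,042 meals per month

# Hypothetical counts of meals actually delivered, by month
meals_delivered = {"January": 3100, "February": 2780, "March": 3055}

for month, actual in meals_delivered.items():
    variance = actual - MONTHLY_TARGET
    pct_of_target = 100 * actual / MONTHLY_TARGET
    print(f"{month}: {actual} meals delivered "
          f"({pct_of_target:.1f}% of target, variance {variance:+.0f})")
```

A simple report like this, produced each month from the agency's information system, gives managers the kind of feedback described above on whether service delivery is on course.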

A monitoring process typically includes a representative of the funding organization who is assigned to track implementation of the funded program as well as involvement from designated program staff, usually the program manager and a fiscal officer.

A final type of process evaluation is known as quality assurance (Royse et al., 2010, pp. 132–134). This answers the question “Are minimum and accepted standards of care being routinely and systematically provided to patients and clients?” (Patton, 2008, p. 304). This technique is most commonly associated with the assessment of medical or clinical records and other aspects of the operation of a program or facility that needs or desires accreditation. Governmental organizations such as Medicare and accrediting organizations such as the Joint Commission on Accreditation of Healthcare Organizations and the Council on Accreditation of Family and Children Services issue standards.

EFFICIENCY AND EFFECTIVENESS

The data gathered through outcome and process evaluations are sometimes used to measure efficiency and effectiveness. Efficiency is a measure of costs per output, often framed as unit cost. For example, a program that can deliver more hot meals to home-bound seniors for the same cost is seen as more efficient than a program with higher costs for providing the same number of meals. Effectiveness, on the other hand, measures cost per outcome, often described as cost effectiveness. Here, the measure is the cost per successful service outcome, such as gaining employment for an at-risk teenager. A program that arranges jobs of a defined quality for a certain number of youth for a certain cost per job is more cost effective than a similar program that gets jobs for fewer youth at the same cost or has higher costs to acquire jobs for the same number of youth.

The simplest efficiency evaluation involves the determination of unit cost. This figure is obtained by dividing the dollars allocated (input) for a service by the number of service outputs. For example, an agency receiving $150,000 per project period to provide counseling services to 150 delinquent children per year could project a unit cost of $1,000 if the unit of service were defined as each unduplicated client (child) served. By itself, the cost per unit of $1,000 is meaningless without accompanying process and outcome evaluations and without a comparison to at least one other, similar program whose services have also undergone process, outcome, and efficiency evaluations. In this example, if the outcome is preventing recidivism for at least one year after the completion of the program, the cost of the program can be divided by the number of successful outcomes to determine cost effectiveness on that measure. This becomes more complicated if a program has more than one service and more than one outcome. Ideally, a program will have one overriding outcome and only one major service component, making this analysis manageable.
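The arithmetic distinction between unit cost (efficiency) and cost per successful outcome (cost effectiveness) can be shown in a few lines. The Python sketch below uses the $150,000 counseling program from the example; the number of youth with no recidivism is a hypothetical figure.

```python
# Unit cost (efficiency) versus cost per successful outcome (cost effectiveness).
# The budget and client count come from the counseling example in the text;
# the number of successful outcomes is hypothetical.

program_cost = 150_000        # dollars allocated for the project period (input)
clients_served = 150          # unduplicated children served (output)
successful_outcomes = 100     # hypothetical: youth with no recidivism for one year

unit_cost = program_cost / clients_served               # $1,000 per client served
cost_per_outcome = program_cost / successful_outcomes   # $1,500 per successful outcome

print(f"Unit cost (efficiency):  ${unit_cost:,.0f} per client served")
print(f"Cost effectiveness:      ${cost_per_outcome:,.0f} per successful outcome")
```

Comparing these two figures across similar programs supports the kind of judgment described above: a program with a lower cost per successful outcome is the more cost-effective one, even if the two programs have identical unit costs.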

Royse et al. (2010, pp. 258–260) have listed the steps of a cost-effectiveness study. The first three steps should already have been done as part of good program design and implementation. Defining the program model and outcome indicators is the first step. The second step involves developing hypotheses or study questions. For example, a simple question would be, what were the program costs compared to the program results? The third step is computing costs, mostly accomplished through the development of the program budget. This step can be complicated if one program has multiple groups of clients and service packages, but eventually it should be possible to allocate all program costs (staff salaries and benefits, facilities, other non-personnel costs) so that they may be related to program outcomes. The fourth step, collecting outcome data, should already be occurring through the program’s information system. Step five involves computing program outcomes, which would generally be the number of clients for whom there were successful outcomes (for example, no recidivism or rehospitalization, acquisition of self-sustaining employment or independent living status). Next, computing the cost-effectiveness ratio is done by dividing program cost by the number of successful outcomes. The final step, conducting a sensitivity analysis, involves looking at the assumptions about the relationships among program interventions, costs, and effects. For example, if some clients do not attend all assigned sessions, outcomes would not be expected to be as favorable as for clients who attend all sessions.
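To make the final two steps concrete, the sketch below computes the cost-effectiveness ratio under a few assumed levels of success, a very simple form of the sensitivity analysis Royse et al. describe; the scenarios and all figures are hypothetical.

```python
# Computing the cost-effectiveness ratio and a simple sensitivity analysis:
# recompute the ratio under different assumptions about how many clients achieve
# the outcome, for example because some do not attend all assigned sessions.
# All figures are hypothetical.

program_cost = 150_000

success_scenarios = {
    "all clients attend all sessions": 110,
    "typical attendance": 100,
    "low attendance": 80,
}

for assumption, successes in success_scenarios.items():
    ratio = program_cost / successes
    print(f"{assumption}: {successes} successful outcomes -> "
          f"${ratio:,.0f} per successful outcome")
```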

Less common and beyond the scope of the discussion here is cost-benefit analysis (Levin, 2005; Royse et al., 2010, pp. 262–265). This goes beyond cost effectiveness by attributing a financial value to the outcome, which is then viewed as a benefit to society.

A final aspect of effectiveness takes a much broader perspective. Although it is beyond the scope of this book, which focuses on programs, it should be noted that many human service programs are funded and implemented to have a broader effect on social conditions such as ending chronic homelessness or improving community well-being. National evaluations in areas such as welfare reform to increase self-sufficiency of poor families sometimes focus on this level. Such evaluations look at outcomes such as rates of homelessness, but another way to examine results at this level is to assess adequacy of services. For example, if a metropolitan area has 1,000 foster youth who emancipate each year by turning 18 and there are only programs to fund services for 250 youth, this becomes a social policy issue in terms of the adequacy of support to fully address an identified problem.

EVALUABILITY ASSESSMENT

Before reviewing the actual design and implementation of an evaluation, evaluability assessment will be presented here as a unique type of evaluation.

If a program has been thoroughly and thoughtfully designed and implemented, including the use of evidence-based practices, logic models, well-written goals and objectives, and a complete management information system, a program evaluation can be relatively easy. Although the human services field has made tremendous progress in recent decades regarding the design and implementation of programs, there are still many cases in which a program that has been in operation for some time is not configured in a way that makes evaluation easy. For this reason, a preliminary step in the program evaluation process may be to do an evaluability assessment: “a systematic process for describing the structure of a program and for analyzing the plausibility and feasibility of achieving objectives; their suitability for in-depth evaluation; and their acceptance to program managers, policy-makers, and program operators” (Smith, 2005, p. 136).

When evaluability assessment emerged in the 1970s, the purpose was “to assess the extent to which measurable objectives exist, whether these objectives are shared by key stakeholders, whether there is a reasonable program structure and sufficient resources to obtain the objectives, and whether program managers will use findings from evaluations of the program” (Trevisan, 2007, p. 290). Trevisan found common recommendations that pointed to weaknesses in program design and implementation in his review of the literature on evaluability assessment. These recommendations included “revised goals and objectives, the development of a mission statement, alteration of program components, and increased stakeholder understanding and awareness of the program” (2007, p. 295).

According to Chambers, Wedel, and Rodwell (1992), an evaluability assessment should begin by assessing the purpose and rationale of the evaluation. All key stakeholders (for example, agency staff, funders, representatives of policy makers, and community representatives) need to agree regarding expectations, questions, and goals for the evaluation. This typically happens through interviews by the evaluator with the stakeholders. Furthermore, the program should be assessed to see if it has a clear logic model, well-written goals and objectives, and data that can show the extent to which the program was implemented as designed. These elements can be assessed by reviewing documents such as relevant proposals or contracts and agency plans and information systems, and by meeting with stakeholders. If misunderstandings, disagreements, or a lack of clarity arises, these need to be addressed before the evaluation proceeds.

After staff members review the evaluability assessment findings, the model may be amended to ensure that it reflects the reality of program operations. It is also possible that, if discrepancies in program implementation are discovered, staff may change actual program processes to better reflect the model as designed. If a program has been well designed using a valid theoretical model, is appropriately staffed, and has a complete information system, it is likely to be evaluable without further modification. If not, appropriate program and information systems modifications can be made. At the conclusion of a thoughtful evaluability assessment, there should be a well-conceptualized and well-operationalized program that will be relatively easy to evaluate, in terms of both its processes and its outcomes.

ALTERNATIVE WAYS TO FOCUS EVALUATIONS

Before leaving this discussion of evaluation types, we should note some important concerns regarding an overemphasis on evaluability and goals in traditional terms. According to Schorr (1997), some funders may consider programs as evaluable only if they are

standardized and uniform, … sufficiently circumscribed that their activities can be studied and their effects discerned in isolation from other attempts to intervene and from changes in community circumstances, … [and] sufficiently susceptible to outside direction so that a central authority is able to design and prescribe how participants are recruited and selected. (pp. 142–143)

Patton (2008, p. 273) frames this issue in terms of “problems with goal-based evaluation.” These concerns can be mitigated by augmenting traditional evaluation methods with qualitative models such as case studies and methods including interviewing of clients or other key informants, participant observation, and analysis of documents that are more attentive to program complexities and unique characteristics.

Patton (2008, pp. 304–305) offers an extensive menu based on “focus or type” of evaluation that lists the types just discussed and several newer approaches. Two of these will be mentioned here as ways of implementing traditional methods that are augmented by newer ideas about the philosophy of an evaluation. These are necessarily brief summaries; managers wanting to consider using these approaches should seek more detailed information on them.

PARTICIPATORY EVALUATION

Participatory evaluation “is generally used to describe situations where stakeholders are involved in evaluation decision making as well as share joint responsibility for the evaluation report with an external evaluator” (Turnbull, 1999, p. 131). A variation of this approach is practical participatory evaluation (Smits & Champagne, 2008).

Patton (2008, p. 175) has summarized principles of participatory evaluation:

  • The evaluation process involves participants learning evaluation logic and skills …
  • Participants in the process own the evaluation. They make the major focus and design decisions…. Participation is real, not token.
  • Participants focus the evaluation on processes and outcomes they consider important and to which they are committed.
  • Participants work together as a group, and the evaluation facilitator supports group cohesion and collective inquiry.
  • All aspects of the evaluation, including the data, are understandable and meaningful to participants….
  • Internal, self-accountability is highly valued….
  • The evaluator is a facilitator, collaborator, and learning resource; participants are decision makers and evaluators.
  • Status differences between the evaluation facilitator and participants are minimized.

As this list suggests, participatory evaluations can only be viable in a receptive context, including evaluators being committed to a participatory process (Whitmore, 1998).

EMPOWERMENT EVALUATION

Empowerment evaluation (Fetterman & Wandersman, 2005; Fetterman, 2002) includes ten principles: improvement, community ownership, inclusion, democratic participation, social justice, community knowledge, evidence-based strategies, capacity building, organizational learning, and accountability. According to Patton (2008), “empowerment evaluation is most appropriate where the goals of the program include helping participants become more self-sufficient and personally effective” (p. 179).

EVALUATION DESIGN AND IMPLEMENTATION

Now that purposes and types of evaluations have been reviewed, we will present a process that can be used to design and implement the evaluation. This will be presented in linear steps, but the process is in fact more fluid: the sequence of activities may vary, activities will happen simultaneously, and steps may be repeated based on emerging developments. Ideally, much of this would be done at the program design stage, anticipating evaluation needs that will emerge later.

EVALUATION TEAM FORMATION

An evaluation depends on the active involvement, or at least support, of all workers in the program. Staff should not think of evaluation as a separate function that is performed by experts and unrelated to the work of the agency’s programs. For a full, formal evaluation of a program, an evaluation team to design and oversee the process will help ensure that all in the program are aware of the evaluation and are committed to its successful implementation. Agency managers who are in charge of the evaluation should identify stakeholders inside and outside the agency who will bring relevant knowledge, expertise, and support to the process. Representatives from service delivery staff, supervisors, and administrative staff will ensure that internal program concerns are addressed. It may also be useful to invite outside stakeholders such as clients, community members, and, if appropriate, representatives of the funding agency for the program. If there will be outside evaluators designated by agency administration or the funding source of the program, they should be actively involved in working with the team.

Roles should be determined for each member to address tasks such as identifying data sources, gathering data, compiling and analyzing data, report preparation, and coordination functions such as scheduling meetings and managing the timeline for the evaluation. If a professional evaluator has not been hired, staff members may consider whether they will need outside expertise such as a professional evaluation consultant or a qualified faculty member from a local university. Dowell, Haley, and Doino-Ingersoll (2006) have developed criteria for assessing evaluation consultants. In a large-scale evaluation, the formation of a community advisory board to review the process and provide input may be useful.

This would also be a good time to consider the use of participatory processes such as participatory or empowerment evaluation.

Staff should assess the resource needs for the evaluation, including staff time and any extra funds, such as for an evaluation consultant or purchase of measurement instruments, and ensure that these resources are made available.

ASSESS READINESS

There are two key aspects to being ready for an evaluation. Readiness in terms of program operations means having a clear logic model and goals and objectives, and an information system that gathers data to track implementation. The evaluability assessment discussed earlier is a way to see if the program is easily evaluable. If this assessment shows limitations, it will be necessary to go through the program design, budgeting, and information systems processes to create conditions for a good evaluation.

The second aspect of readiness involves staff. As noted earlier, staff, especially service delivery staff, may be wary of an evaluation, especially if it is coming from an outside source such as a funding organization. Managers have an essential role in creating an organizational culture that is supportive of organizational learning (discussed in Chapter 11) and enables staff to feel comfortable about proceeding with an evaluation. Managers may need to spend extra time here meeting with staff to discuss the purpose of the evaluation, how it will be conducted, and how the results will be used. Including broad representation of staff on an evaluation team can be a great help in addressing staff concerns.

DETERMINE EVALUATION QUESTIONS AND THE FOCUS OF THE EVALUATION

At this stage, certain things should be presented for discussion and finalization: the objectives of the evaluation; any expectations from stakeholders, including, for example, requirements of a funding organization; and study hypotheses. It will also be important to clarify how the findings of the study will be used.

IDENTIFY ANY ADDITIONAL DATA NEEDED

As noted earlier, if a program has been well designed and well implemented, all or nearly all the data needed for the evaluation may already exist in the agency’s information system. Data not yet in the system may include follow-up data or feedback from clients or other stakeholders. If an evaluation was planned for when the program was designed, any pretest data will have been anticipated. For example, standardized instruments may be used at intake, with a posttest done at program completion to assess changes in variables under consideration such as depression or other psychological or behavioral characteristics.
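As an illustration of how pretest and posttest scores from such an instrument might be summarized, the Python sketch below computes change scores for a small set of clients; the instrument, the assumption that lower scores indicate improvement, and all values are hypothetical.

```python
# Summarizing pre/post change on a standardized instrument administered at
# intake and at program completion. Client IDs and scores are hypothetical,
# as is the assumption that lower scores indicate improvement.

pre_scores = {"client_01": 24, "client_02": 31, "client_03": 18}
post_scores = {"client_01": 15, "client_02": 22, "client_03": 19}

changes = {cid: post_scores[cid] - pre_scores[cid] for cid in pre_scores}
improved = [cid for cid, change in changes.items() if change < 0]

average_change = sum(changes.values()) / len(changes)
print(f"Average change in score: {average_change:+.1f}")
print(f"Clients whose scores improved: {len(improved)} of {len(changes)}")
```

In practice, summaries like this would be generated from the agency's information system and interpreted alongside the process and qualitative data discussed earlier.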

Sources of data should be considered here as well. The information system, ideally including data entered into computerized software that can compile individual client data into reports, and client case records will be key sources.

DETERMINE THE EVALUATION DESIGN AND METHODS

Several decisions to be made here go back to the earlier discussion of evaluation types. In most cases, both process and outcome methods would be used, with the process component focusing on the nature of program implementation and services delivered, and the outcome component assessing impacts. Depending on the evaluation purposes and questions, monitoring or quality assurance techniques may be appropriate. Formative and summative options could be considered. Often a formative evaluation takes place during the project and the summative evaluation occurs at the end. If there are evaluation questions related to cost effectiveness, efficiency, or cost benefit, these methods would be needed.

In terms of actual data collection methods, distinctions are made between quantitative methods and qualitative methods. Recently, a combination of the two, known as mixed methods, has become very common. Historically, quantitative methods have been seen, at least by many researchers, as “better” than qualitative methods. Patton describes this as the “paradigm war” between “quants” and “quals” (2008, p. 420). In recent years, there has been growing agreement that it is not a matter of quantitative versus qualitative but rather a matter of choosing the best method or combination of methods to answer the evaluation questions. For many, including those using evidence-based practice, the “gold standard” in quantitative evaluation has been the randomized controlled trial, ideally replicated in different populations. Such designs are not common in regular agency operations for reasons including cost (e.g., for follow-up contacts) and ethical issues such as concerns about denying treatment to some subjects.

Summary

Evaluations may have several purposes, from aiding in decision making and improving programs to building support and demonstrating accountability. Evaluations may look at processes, outcomes, or efficiency, with increasing interest in outcome evaluation. Ultimately, the best evaluation uses methods that are appropriate to specific research questions and program conditions. Such an evaluation is likely to address processes and outcomes, quantitative and qualitative aspects, and dynamics of program uniqueness and generalizability. The actual utilization of evaluation findings for program enhancement and the involvement of all significant actors in the evaluation process are other key considerations that human service managers must address in the quest for organizational excellence.

Our discussion of evaluation takes us full circle. In Chapter 2 we discussed the social problems in our environment that human services are intended to address, and then discussed agency planning and program design to respond to these needs. The design of the overall organization, effective human resources and supervision practices, and financial and information systems to monitor progress and accomplishments were presented as important success factors for goal attainment. Finally, evaluation is used to assess the extent of program accomplishments and to identify opportunities for continuing improvement.