An Evaluative Approach for All Initiatives

By Brother Dallas Wilson

What is an Upfront Evaluation?

An Upfront Evaluation is a continuous process of defining, documenting, and measuring program planning, action and results from the beginning of the process to its completion.

What are the benefits of this type of Evaluation?

An Upfront Evaluation:

  • increases the likelihood of achieving your goals by ensuring they are defined and measured
  • supplies you with tools for acquiring funding and sustained support
  • enables you to spell out your values
  • shows whether or not your activities are reaching and benefiting the intended group(s)
  • indicates which components of your programs are effective and which areas need further improvement
  • provides a record of your efforts and lets you inform yourself and others about what was and was not successful
  • enables you to judge whether your initiative and programs are working as planned (are you impacting the target group or community?)

The purpose of an Upfront Evaluation is not to judge individual performance but to highlight program successes and discover areas that need improvement.

  • The Upfront Evaluation begins with the planning stage and continues throughout the initiative.
  • The Upfront Evaluation verifies that you are doing what you intended to do.
  • The Upfront Evaluation is about choosing your strategies based on your desired outcomes and goals.
  • The Upfront Evaluation quantifies the results of the programs undertaken.
  • The Upfront Evaluation documents what you have done.

Five Steps to Upfront Evaluation

  1. Articulate the problem clearly

  2. Identify what you want to accomplish

  3. Design an action plan

  4. Assess whether you followed your action plan

  5. Measure the changes that occurred as a result of your actions

    • Visioning: Describe your vision for what marriages and children will be like in the future.
    • Undesirable Situations: Identify the circumstances that are blocking your vision and the opportunities to change them.
    • Existing Resources: Identify resources that already exist in your community
    • Goals: Define what you want to change to achieve your vision. Express this change as if it has already happened.
    • Target: Define whom you want to be affected by this change
    • Outcome Statements: State how you will know when the desired change has occurred.

Section 1

Articulate (Develop a Public Persona)

The Articulation Section is the explanation of the summary: a brief overview of your entire grant proposal (300 words at most). It should contain concise statements that introduce the reader to what you will be proposing.

The following are the components of this section:

  • Identification of the Problem
  • How are you going to approach the Problem?
  • Offer a solution
  • Outline the solution (Goals)

Section 2

Identify What the Process Will Accomplish (make a distinction between activities)

Visioning (Your Dream) – the sharing of ideas about what participants want the outcome of the process to look like.

The purpose of visioning is to formulate a positive, focused future, rather than a negative one. Visioning is a way for people to communicate what they value and to brainstorm new possibilities for their community. An effective vision, stated in simple terms, should reflect the collective values of the community and be inclusive of its diverse population; denote high standards of excellence and achievement within the community; encourage the commitment of community members; and, most importantly, be written in the present tense and address a horizon of 5 to 10 years.

Vision statements often address several different areas of life such as emotional, financial, and social well-being. To clarify the areas of life that you wish to address through your program, it is important to identify the themes within your vision statement.

Undesirable Situations (unfavorable conditions) – identify the circumstances that are blocking your vision

The undesirable circumstances you identify will help you determine the focus of your intervention efforts. You must be clear about what circumstances are blocking your vision before you can know what you want to change and how to plan improvement.

Within every undesirable situation, there is an opportunity that exists.

Existing Resources (accessible assets in the immediate community and elsewhere) – focusing on the resources and assets that can be utilized in effecting positive change.

Identifying existing resources is important because:

It assists you in carrying out your plans and helps you avoid duplication of effort.

What do existing resources look like? (don’t focus on the negatives at all)

Organizations that have similar missions and purposes; similar outreach activities; possible funding sources; the Internet; media exposure and skills within your target community.

Goals (a broad description of what the program intends to accomplish) – goals are statements that describe the changes you want to make to strengthen the institution of marriage. Goals are what must be accomplished for your vision to be achieved.

Goals define what you want to accomplish to achieve your vision. Goals must be realistic and attainable, clearly stated, and directly related to the theme of your vision. Lastly, goals must describe the desired future condition. Goals may raise awareness; increase knowledge; enhance, create, or protect resources; create new community structures or processes; change attitudes about specific situations; or change the behavior of an individual, family, group, or environment.

Target Population (the beneficiaries of your accomplishments) – The specific group of people or persons your program intends to directly affect.

Primary Target Population: represents those people whom your program will attempt to affect directly.

Secondary Target Population: represents those who are indirectly affected by the program through their contact with the primary target population.

How do you select your target group? Consider location and access to the population, the needs of that population, and how that population impacts program design, implementation, and evaluation.

Outcome Statements (were we successful, and if so, how much?) – A specific definition of what the future will look like after the change has occurred. Outcome statements describe specific, measurable results that let you know whether you are realistically achieving your goals.

Section 3

Design Your Action Plan

When writing an action plan to achieve a particular goal or outcome, the following steps can help.

  • Clarify your goal. Can you picture the expected outcome? How will you know when you have reached your destination? What makes your goal measurable? What constraints do you have, such as limits on time, money, or other resources?

  • Write a list of actions. Write down all the actions you may need to take to achieve your goal. At this step, focus on generating as many different options and ideas as possible. Take a sheet of paper and write down ideas as they come to mind, without judging or analyzing them.

  • Analyze, prioritize, and prune. Look at your list of actions. What are the necessary and effective steps to achieve your goal? Mark them. Then ask which action items can be dropped from the plan without significant consequences for the outcome, and cross them out.

  • Organize your list into a plan. Decide on the order of your action steps. Start by looking at your marked key actions. For each action, what other steps should be completed before that action? Rearrange your actions and ideas into a sequence of ordered action steps. Finally, look at your plan once again. Are there any ways to simplify it even more?

Monitor the execution of your plan and review the plan regularly. How much have you progressed towards your goal by now? What new information have you gained? Use this information to further adjust and optimize your plan.
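As an illustration of these steps, here is a minimal sketch in Python; the action items, "necessary" flags, and ordering below are hypothetical examples, not part of the original method. It brainstorms actions, prunes the ones that are not necessary, and sequences what remains.

    # Minimal action-plan sketch; action names and ordering are hypothetical.
    brainstormed = [
        {"action": "Recruit volunteer mentors",      "necessary": True,  "order": 2},
        {"action": "Design workshop curriculum",     "necessary": True,  "order": 1},
        {"action": "Print commemorative T-shirts",   "necessary": False, "order": None},
        {"action": "Schedule first workshop series", "necessary": True,  "order": 3},
    ]

    # Analyze, prioritize, and prune: keep only the necessary actions.
    plan = [a for a in brainstormed if a["necessary"]]

    # Organize the list into a plan: sequence the remaining action steps.
    plan.sort(key=lambda a: a["order"])

    for step, a in enumerate(plan, start=1):
        print(f"Step {step}: {a['action']}")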

What is planning and why you need to plan

Planning is one of the most important project management and time management techniques. Planning is preparing a sequence of action steps to achieve a specific goal. If you do it effectively, you can greatly reduce the time and effort needed to achieve the goal.

A plan is like a map. When following a plan, you can always see how much you have progressed towards your project goal and how far you are from your destination. Knowing where you are is essential for making good decisions on where to go or what to do next.

Another reason you need planning is the 80/20 Rule. It is well established that for unstructured activities, 80 percent of the effort yields less than 20 percent of the valuable outcome. You either spend much time deciding what to do next, or you take many unnecessary, unfocused, and inefficient steps.

Planning is also crucial for matching your time, money, and other resources to the needs of each action step. With careful planning, you can often see whether you are likely to face a problem at some point. It is much easier to adjust your plan to avoid or soften a coming crisis than to deal with the crisis when it arrives unexpectedly.

Setting goals and objectives

In many situations, people use the words goals and objectives interchangeably. Yet, in the context of goal setting, the difference between goals and objectives has an important practical meaning.

After you set your important goals, you move on to setting objectives. Objectives are also goals, but they sit lower in the hierarchy. They are sub-goals whose only purpose is to serve your goals.

To achieve your goals, which conditions should you provide, which resources should you collect, which skills should you develop, what knowledge should you acquire? Is there anything significant you should achieve before you can reach your goals? Formulate the answers to these questions as your objectives, in writing.

Note that objectives are also more than just activities. They still contain some challenges in them. Activities are things that you just do.

So, while a particular goal is important to you on its own, objectives and activities are important too, but not on their own. If an objective or activity does not help you achieve your goals, change or replace it so that it does.

To achieve success, you need both persistence and flexibility. When you face difficulties and unexpected problems, use all your persistence and determination to stick to your goals. But always stay flexible with your objectives and activities. If the way you do things now does not work, try another way. Keep trying until you find the one that works.

Don't change the ends, change the means. And never forget the difference between ends and means, between goals and objectives.

Section 4

Assess Whether You Followed Your Action Plan

Definition of assess (v.t.):

  1. to estimate officially the value of (property, income, etc.) as a basis for taxation.
  2. to fix or determine the amount of (damages, a tax, a fine, etc.): The hurricane damage was assessed at six million dollars.
  3. to impose a tax or other charge on.
  4. to estimate or judge the value, character, etc., of; evaluate: to assess one's efforts.

Decision-making skills and techniques

We use our decision-making skills to solve problems by selecting one course of action from several possible alternatives. Decision-making skills are also a key component of time management skills.

Decision-making can be hard. Almost any decision involves some conflicts or dissatisfaction. The difficult part is to pick one solution where the positive outcome can outweigh possible losses. Avoiding decisions often seems easier. Yet, making your own decisions and accepting the consequences is the only way to stay in control of your time, your success, and your life. If you want to learn more about how to make a decision, here are some decision-making tips to get you started.

A significant part of decision-making skill lies in knowing and practicing good decision-making techniques. One of the most practical decision-making techniques can be summarized in these simple steps:

  1. Identify the purpose of your decision. What exactly is the problem to be solved? Why should it be solved?
  2. Gather information. What factors does the problem involve?
  3. Identify the principles to judge the alternatives. What standards and judgment criteria should the solution meet?
  4. Brainstorm and list different possible choices. Generate ideas for possible solutions.
  5. Evaluate each choice in terms of its consequences. Use your standards and judgment criteria to determine the cons and pros of each alternative.
  6. Determine the best alternative. This is much easier after you go through the above preparation steps.
  7. Put the decision into action. Transform your decision into a specific plan of action steps. Execute your plan.
  8. Evaluate the outcome of your decision and action steps. What lessons can be learned? This is an important step for the further development of your decision-making skills and judgment.

Final remark: In everyday life we often have to make decisions fast, without enough time to systematically go through the above action and thinking steps. In such situations, the most effective decision-making strategy is to keep an eye on your goals and then let your intuition suggest the right choice.
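To make step 5 above (evaluating each choice against your standards and judgment criteria) concrete, here is a minimal sketch in Python. The alternatives, criteria, weights, and ratings are hypothetical placeholders; substitute your own.

    # Hypothetical weighted decision matrix; weights and ratings are illustrative only.
    criteria_weights = {"cost": 0.3, "reach": 0.4, "ease": 0.3}

    # Each alternative is rated 1-5 on every criterion (higher is better).
    alternatives = {
        "peer mentoring program":    {"cost": 4, "reach": 3, "ease": 5},
        "community workshop series": {"cost": 3, "reach": 5, "ease": 3},
        "printed resource guide":    {"cost": 5, "reach": 2, "ease": 4},
    }

    def weighted_score(ratings):
        """Sum of rating x weight across all criteria."""
        return sum(ratings[c] * w for c, w in criteria_weights.items())

    scores = {name: weighted_score(r) for name, r in alternatives.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")
    print("Best alternative:", max(scores, key=scores.get))

The highest-scoring alternative then becomes the decision you carry into step 7, the plan of action steps.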

Section 5

A clear picture at the outset, appropriate and well-timed monitoring, and relevant, accurate information gathering will yield an accurate outcome.

Step 1: Defining the Purpose of the Evaluation

One common misconception about evaluation is that it is something done after a project has been implemented, by an outside group who then judges whether or not the project was effective. While this scenario is often true in practice, it is not the only or necessarily the right way to conduct an evaluation. A good evaluation plan should be developed before a project is implemented, should be designed in conjunction with the project investigators, and should ensure that the evaluation serves two broad purposes. First, evaluation activities should provide information to guide the redesign and improvement of the intervention. Second, evaluation activities should provide information that can be used by both the project investigators and other interested parties to decide whether or not they should implement the intervention on a wider scale.

These two purposes correspond with two broad types of evaluation: formative and summative. The goal of formative evaluation is to improve an intervention or project. The goal of summative evaluation is to judge the effectiveness, efficiency, or cost of an intervention.

The purpose of formative evaluation is to provide information to the project team so that their intervention can be modified and improved. It focuses on whether the intervention is being carried out as planned. Formative evaluation activities can include materials and software development and beta testing, focus groups to assess students' attitudes and responses to aspects of intervention design and materials, and experimental studies to determine the effect of specific design characteristics on students' mastery and retention of concepts and skills. While some of these activities also yield data related to intervention effectiveness, their primary goal is to provide information for intervention improvement.

The purpose of summative evaluation is to produce information that can be used to make decisions about the overall success of the intervention. There are three specific and sequential types of summative evaluation questions that should be addressed for any intervention:

  • Intervention Efficacy - Efficacy evaluation asks the question: "Under research (ideal) conditions, can the intervention lead to the desired outcomes?" Efficacy questions assess whether an intervention is associated with improvements in students' performance when implemented in small groups, by teachers who receive special instruction, and with motivational support provided for student participation.

  • Intervention Effectiveness - Effectiveness evaluation asks the question: "When implemented on a wider scale, under conditions similar to those that occur in regular teaching, does the intervention continue to lead to desired outcomes?" Effectiveness questions usually assess whether the intervention continues to be associated with improvements in students' performance when carried out under normal classroom conditions, by teachers who have not received special instruction, and without additional motivational support for participation.

  • Intervention Costs - Both developmental and recurrent costs associated with the intervention must be assessed. Issues here relate to the time, support, and effort required to implement the intervention both by individual faculty members and by departments. Generally, investigators should try to determine how much investment would be needed for another program to implement the intervention.

The use of a staggered approach to the summative evaluation should allow one to identify and address operational difficulties in the use of the intervention. Too often, summative evaluations simply measure efficacy. If an intervention is to go beyond being a simple "pilot project," the investigators must also evaluate intervention effectiveness and cost.

Action Steps for Investigators:

  • Decide on the purpose(s) of the project evaluation.
  • Determine what primary questions need to be answered.
  • Determine in what order these questions should be answered.
  • Decide who will be the primary audience(s) for the evaluation results.
  • Determine how the results will be used.

Step 2: Clarify Project Objectives

A prerequisite for evaluation is the development of a project plan with measurable objectives that are logically related to one another and the goals and interventions defined in the project proposal. All objectives should specify what is to be done, by when. There are three types of objectives: impact, outcome, and process. Impact objectives should focus on changes in the long-term performance of students that are expected to result from project activities and should correspond to the priority goal of the project as stated in the project proposal. Outcome objectives should focus on changes in knowledge, attitudes, behaviors, or availability of educational programs or supports that result from project activities, and should be directly related to the intervention's target population. Process objectives specify the actions needed for project implementation and should correspond to the various activities (development of written or computer software materials, peer education sessions, placements in internships, training of educators, etc.) necessary to achieve the intended outcomes and impact.

Action Steps for Investigators:

  • Review and, if necessary, revise existing project objectives.
  • Ensure that appropriate impact, outcome, and process objectives have been specified.

Step 3: Create a Model of Change

A model of change clarifies underlying assumptions about how the proposed intervention will lead to the expected outcomes and goals of the intervention. While this concept sounds like a simple one, it is often the weakest element of an evaluation plan. The development of a clear and correct model of change is the most critical step in the development of a sound evaluation plan.

What is a model of change? A model of change refers to the specific set of relationships that one believes connects the intervention to the achievement of the impact objectives of the project. The model should specify how the proposed interventions will lead to these goals.

Consider, for example, a project that replaces didactic chemistry lectures with a multimedia tutoring system. A simple model of change for this project might begin with the assumption that multimedia methods are more effective for presenting knowledge than didactic lectures. Because multimedia methods are more effective, students will learn more, retain more, and, therefore, will have a higher probability of passing the course.

If this model reflects the assumptions underlying the proposed intervention and how it leads to the achievement of the project goals, investigators should try to assess each of the proposed links in the model of change. For example, do the students who use the multimedia system learn more than those taught by the traditional lecture system? Does the use of multimedia result in a higher percentage of students passing the course? Does it result in students liking chemistry more?

It could be that the intervention does increase learning (let's say that the students develop better conceptual knowledge of chemistry due to the use of interactive simulations, as reflected in their laboratory worksheets) but this knowledge may not lead to a higher percentage of students passing the course. This "failure to pass" could occur because the course grade is based on a curve or because the exams do not tap this increased conceptual understanding. Alternatively, students could learn more, perform better in the course, but still choose to drop out of chemistry because -- even when passing the course -- they do not like chemistry more. Or it could be that they like chemistry so much after participating in the multimedia intervention that they decide to take more chemistry.

The important point here is that the set of relationships theorized to exist between the intervention and the goals of the project must be clearly defined. To the extent possible, each of the defined relationships should then be measured as part of the evaluation plan, allowing you to determine why and how the project either succeeded in reaching its goals or failed to do so. The more specific you are in developing your model of change, the more useful the information generated by the evaluation will be.

Of course, few projects have sufficient resources to assess all assumptions. They must choose which of the relationships in their model to test. These choices should be based on:

  1. What can be measured well, given available resources
  2. Where problems can be anticipated
  3. Where investigators have control and can improve the intervention or project based on the results

Action Steps for Investigators:

Develop a model of change for the project, making it as specific and complete as possible. Review each of the assumptions (links) in the model. Using the criteria presented above, identify the priority assumptions to be addressed through the project evaluation.

Step 4: Select Criteria and Indicators

Once measurable objectives and priority assumptions have been defined, investigators can make plans for evaluation based on specific criteria and indicators. Criteria are technical standards that can be used as the basis for making judgments about the quality of a curriculum, intervention, or another project component. For example, criteria for a curriculum might include whether it has measurable learning objectives or the quality of support and training provided to educators in the use of participatory learning methods.

Indicators are quantified measurements that can be repeated over time to track progress toward the achievement of objectives. Most indicators are expressed as rates or proportions and include a numeric numerator and denominator. Selection of indicators should be based on their:

  • Validity - the extent to which the indicator is a true and accurate measure of the phenomenon under study
  • Reliability - the extent to which indicator measurements are consistent and dependable across applications or over time
  • Sensitivity - the likelihood of change within a reasonable period and as a result of successful project implementation without undue influence of extra-project factors
  • Utility - the ability to produce data that can be easily interpreted
  • Usefulness - specifically in guiding project change

In addition, only those indicators that can be measured with available project resources should be selected.
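For illustration, the short Python sketch below computes an indicator expressed as a proportion and tracks it across measurement periods; the specific indicator (program completion) and the numbers are hypothetical.

    # Hypothetical indicator: proportion of enrolled participants who complete the program.
    def indicator(numerator, denominator):
        """Return a proportion; the denominator must be greater than zero."""
        if denominator == 0:
            raise ValueError("denominator must be greater than zero")
        return numerator / denominator

    # (completers, enrollees) per measurement period -- illustrative numbers only.
    periods = {"Fall": (42, 60), "Spring": (51, 64)}

    for period, (completers, enrollees) in periods.items():
        rate = indicator(completers, enrollees)
        print(f"{period}: {rate:.0%} completion ({completers}/{enrollees})")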

Step 5: Identify Data Sources and Define How Often Indicators Will Be Measured

Once criteria and indicators have been defined, investigators must identify the best sources of data and determine how often these variables will be measured. Reports and records collected routinely by project or institutional personnel, such as class attendance reports, graduation records, SAT scores, or student performance on examinations, can be important sources of evaluation data if they are of sufficient accuracy. Where such data do not exist or are not accurate, special studies or audits may be necessary. Investigators should also explore whether data collected for other purposes or projects may be available and appropriate for use in evaluating activities. For example, student course evaluations conducted for other educational purposes may provide an opportunity to obtain data specific to project activities.

Investigators must also define how often indicators will be measured. Considerations include:

  • What resources are needed to collect data for the indicator (e.g., data from student examinations can be collected more frequently than data from a special assessment of student skills)

  • When indicator data will be needed to guide project decision making (e.g., data should be collected, analyzed, and prepared for review before rather than after a project review exercise)

  • When meaningful changes in indicator levels can be expected given project activities

Step 6: Design Evaluation Research

The key to a good evaluation plan is the design of the study or studies to answer the evaluation questions. There are many possible research designs and plans. Your objective should be to maximize the reliability and validity of your evaluation results.

Reliability refers to the consistency or dependability of the data. The idea is simple: if the same test, questionnaire, or evaluation procedure is used a second time, or by a different research team, would it obtain the same results? If so, the test is reliable. In any evaluation or research design, the data collected are useful only if the measures used are reliable.

Validity refers to the extent to which the questions or procedures measure what they claim to measure. Another way to say this is that valid data are not only reliable but are also true and accurate. Measures used to collect data about a variable in your evaluation study must be both reliable and valid if the overall evaluation is to produce useful data.
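One common way to gauge reliability is test-retest correlation: give the same measure twice and check how closely the scores agree. Below is a minimal Python sketch with hypothetical scores; values of r near 1.0 suggest a reliable measure.

    import math

    def pearson_r(x, y):
        """Pearson correlation between two equal-length lists of scores."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
        sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
        return cov / (sd_x * sd_y)

    # Hypothetical scores from the same test given to the same students twice.
    first_administration  = [72, 85, 64, 90, 78, 69]
    second_administration = [75, 83, 66, 92, 74, 71]

    r = pearson_r(first_administration, second_administration)
    print(f"Test-retest reliability: r = {r:.2f}")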

Investigators should select a research design that controls as many threats to validity as possible. Of course, few studies can control completely for all threats, and investigators are often constrained by cost, availability of subjects, or other factors that preclude the optimal study design. However, the key is to systematically assess possible designs based on the various threats to validity, and select the design that is most valid given other constraints. Below we will give a brief overview of three of the major threats to validity in evaluation research designs, followed by an overview of qualitative and quantitative research methods.

Common Threats to Validity

Selection: A common threat to validity occurs when the people selected for the experimental group are different from those in the comparison group. For example, suppose you want to determine if tutorial sessions will improve course performance. In seeking to answer this question, you ask for volunteers from the class to participate in the tutorial sessions and then compare their performance in the course to the students who did not volunteer. The question, however, is whether the two groups of students are alike in all characteristics except for participation in the tutorial session. Perhaps better students (or more motivated students) volunteer for the extra work. Any differences in course performance may be due simply to the selection bias introduced through asking students to volunteer rather than randomly assigning students to the tutorial group. Investigators need to ensure that the students in all the groups being compared on a course or test performance are equal in all the characteristics that may affect performance (e.g., knowledge, skills, motivation). If this is not possible, some differences may be able to be addressed through statistical analysis.

Mortality refers to the differential loss of students from the intervention group as compared to the usual-treatment group, resulting in differences between the students in the groups at the time of testing. For example, one could assign students to one of two groups: one group spends an extra hour each week solving problems while the other has small, one-hour discussion groups weekly. It could be that more students drop out of the problem group than the discussion group, especially those with less motivation. If this occurs, one could end up with differences between the students in the two groups that could be the source of any differences in performance.

Hawthorne Effect, while not normally described as a threat to validity, is one issue that evaluators of educational interventions must consider. The Hawthorne Effect can best be explained by relating it to the concept of placebo effects. It has been shown that when people believe they are being given effective treatment, whether for a psychological or physical illness, they tend to improve even if the treatment is simply a sugar pill. People begin to feel or perform better because of increased motivation or self-confidence. The Hawthorne Effect is similar. It states that when one introduces a new method of performing a task and participants know that it is part of an effort to improve performance, there is a temporary gain in performance, even if the new method is no better (or even worse) than the old way of doing things. The explanation for this phenomenon is that when people are told a new system will improve their performance and when they know they are being watched or evaluated, they tend to increase their effort and motivation, which results in better performance. However, this increase in performance is only temporary. The Hawthorne Effect can seriously affect the validity of evaluation results, particularly if you are evaluating a new educational intervention. Sound evaluation plans include study designs that control for these threats to validity. In the following section, we will provide an overview of various research designs.

Research Designs

Qualitative Research: Some evaluation questions address issues that are not easily quantified. In particular, investigators may be interested in faculty or student attitudes about an intervention or approach, their ideas about how it could be improved, or their explanations of why they performed in a particular way. Qualitative research can help investigators understand these issues.

Qualitative research must be undertaken with the same level of methodological rigor as quantitative research. Indeed, for investigators without previous experience, we recommend that they identify an experienced qualitative researcher to provide technical assistance.

Qualitative methods that may be particularly useful include the following:

  • Qualitative interviews are a method in which the interviewer poses a series of open-ended questions to the respondent, asks follow-up questions to clarify their answers, and carefully records what is said. This method has been referred to as a "conversation with a purpose." In qualitative interviews, the interviewers must be carefully trained to establish rapport with the respondent, keep the interview focused on the specific topics of interest, and maintain accurate and complete records.
  • Focus groups are meetings of 6-10 people in which participants are carefully guided through a predefined set of issues and encouraged to discuss and present alternative points of view. This method is particularly useful in obtaining information about the acceptability or usefulness of specific educational techniques. Focus groups should be conducted in a relaxed atmosphere, among groups of participants who are comfortable sharing their opinions. Often one investigator leads the group, while another records the interactions (either by hand or on audiotape). Conducting a successful focus group is difficult and takes considerable training.
  • Systematic observation can be useful if investigators have questions about how materials or methods are used. Observations can be completely unstructured, or can be guided by the use of a checklist, with or without a verbal protocol in which subjects are asked to "think out loud." Results of systematic observation are sometimes useful as the basis for individual or focus group interviews.

Quantitative Research: There are three broad classes of quantitative research designs: non-experimental designs, experimental designs, and quasi-experimental designs. In describing these designs, we use the notation developed by Campbell and Stanley (1963), in which O represents an observation or measurement, X represents the intervention, and R indicates random assignment of subjects to groups.

Non-experimental designs are generally used only when one is trying to collect descriptive data. These types of studies are characterized by the absence of a control or comparison group. There are two commonly used non-experimental designs in evaluation research: (1) the posttest-only design and (2) the pretest-posttest design.

There are several key points to note about both of these non-experimental designs. First, while both can be used for descriptive purposes, neither can be used to claim that the intervention is better than any other intervention. The pretest-posttest design does allow one to judge the amount of gain made by the treatment group, but you cannot attribute this change to your intervention. It could be that time or other events that occurred during the intervening period caused the gains between the first and second tests. Because of these problems, non-experimental designs are designs of last resort.
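A brief Python sketch (with hypothetical scores) of what the pretest-posttest design yields: an average gain can be computed for the treated group, but, as noted above, the design gives no basis for attributing that gain to the intervention.

    # Hypothetical pretest and posttest scores for a single group (no comparison group).
    pretest  = [55, 62, 48, 70, 66]
    posttest = [68, 71, 60, 78, 72]

    gains = [post - pre for pre, post in zip(pretest, posttest)]
    mean_gain = sum(gains) / len(gains)

    # The mean gain describes improvement, but without a comparison group it cannot
    # be attributed to the intervention rather than time or other intervening events.
    print(f"Mean gain: {mean_gain:.1f} points")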

Quasi-Experimental Designs: Quasi-experimental designs are studies that follow the basic structure of a true experiment, but without controlling for differences in subject selection. That is, the subjects are not randomly assigned to conditions. Two classic quasi-experimental designs will be discussed: time-series design and nonequivalent control group design.

Time-series designs are similar to non-experimental pretest-posttest designs, with the added advantage of repeated measurements before and after the intervention. The primary advantage of this type of design is that it gives trend information. One can compare the changes between O3 and O4 to all other pairs of observations. If the intervention is the cause of the change (not the time or changes in the subject's performance due to aging or learning in other courses) the changes between O3 and O4 should be greater than those between any other pair of observations.
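To illustrate the time-series logic, the Python sketch below uses hypothetical observations O1 through O6, with the intervention occurring between O3 and O4, and compares the O3-to-O4 change against every other consecutive change.

    # Hypothetical repeated measurements O1..O6; the intervention falls between O3 and O4.
    observations = [61, 63, 62, 74, 75, 76]

    changes = [observations[i + 1] - observations[i] for i in range(len(observations) - 1)]
    intervention_change = changes[2]            # O4 - O3
    other_changes = changes[:2] + changes[3:]

    print("O3 -> O4 change:", intervention_change)
    print("Other consecutive changes:", other_changes)
    # A change between O3 and O4 that clearly exceeds the others supports
    # (though does not prove) an intervention effect.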

The nonequivalent control group design has the advantage of providing a direct comparison group. It controls for changes that may be due to time or other causes but does not control for subject differences. However, if the two groups are equivalent on the pretest scores, the threat to the validity of the study due to differences in subjects is somewhat reduced.
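As a concrete companion to the nonequivalent control group design, the Python sketch below (hypothetical scores) first checks whether the two groups look roughly equivalent on the pretest and then contrasts their mean gains.

    # Hypothetical scores for a nonequivalent control group design.
    groups = {
        "intervention": {"pre": [58, 61, 65, 70], "post": [72, 74, 75, 83]},
        "comparison":   {"pre": [57, 62, 66, 69], "post": [63, 66, 70, 74]},
    }

    def mean(values):
        return sum(values) / len(values)

    # Check rough pretest equivalence of the two groups.
    for name, s in groups.items():
        print(f"{name}: pretest mean = {mean(s['pre']):.1f}")

    # A larger gain in the intervention group is consistent with an effect,
    # though differences between subjects are not fully controlled.
    gain = {name: mean(s["post"]) - mean(s["pre"]) for name, s in groups.items()}
    print(f"Gain difference (intervention - comparison): {gain['intervention'] - gain['comparison']:.1f}")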

Experimental Designs: the key distinction that separates experimental designs from non- or quasi-experimental designs is the random assignment of subjects into the intervention groups. Random assignment helps ensure that subjects in the groups will be equal before the intervention is introduced. This leveling helps eliminate bias due to subject selection. We will briefly describe two of the more common experimental designs: the pretest-posttest control group design and the multiple intervention design.

The pretest-posttest control group design has several advantages over the designs presented earlier. First, it provides for random assignment of students into groups, helping eliminate the threat of selection bias. Second, it provides a clear comparison group and uses a pre- and post-test design, allowing one to measure not only differential gains between groups but also absolute gains in skills and knowledge. The only weakness in this design is that it does not control for the Hawthorne Effect.

The multiple intervention design has the advantage of controlling for threats to validity due to selection and the Hawthorne Effect. In addition, if interventions are based on a theoretical understanding of how the intervention produces change, isolating individual or groups of causal variables, it can be used to identify the specific causes of any changes in learning due to the intervention.

In the multiple intervention design, the intervention groups can be systematically designed to vary in how much of the total intervention the students in each group receive. For example, if one is interested in determining the effectiveness of a multimedia tutoring system in teaching chemistry, there may be many aspects of the system that one believes will aid learning (e.g., additional simulations, structured drills). One could design the study so that one group receives the simulations only, one group the structured drill only, one group both structured drill and simulations, and a fourth group extra chemistry problems to work. By comparing the four groups on how much chemistry was learned (e.g., exam and course grades), one could determine the relative effectiveness of drill alone, the simulations alone, the combined effect, and, with the problem-set group, the effect of additional time spent studying without the multimedia system. With random assignment of subjects to the groups, one controls for selection bias, most other threats to validity, and the Hawthorne Effect.

This use of a multiple intervention group design provides the best test of the effectiveness of the proposed intervention, yielding data on both process and outcome variables. It does so by isolating the effects of specific variables in the overall intervention. This type of design can be combined with a pretest-posttest design, yielding even more data regarding the initial equivalence of groups. The use of multiple intervention groups allows one to test the independent effects of variables in a complex intervention and provides an easy way to control for the Hawthorne Effect and time-on-task that other designs do not. This method is the superior study design for the evaluation of most projects, although it is often difficult for investigators to implement.
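Below is a minimal Python sketch of the random assignment behind the multiple intervention design described above; the roster, condition labels, and group sizes are hypothetical. Once exam results are available, each condition's mean score can be compared.

    import random

    # Hypothetical roster; in practice this would be the actual class list.
    students = [f"student_{i:02d}" for i in range(1, 21)]

    conditions = ["simulations only", "structured drill only",
                  "drill + simulations", "extra problem sets"]

    random.shuffle(students)                     # random assignment controls selection bias
    assignment = {c: students[i::len(conditions)] for i, c in enumerate(conditions)}

    for condition, members in assignment.items():
        print(f"{condition}: {len(members)} students assigned")

    def group_mean(scores):
        """Average exam score for one condition, computed once results are collected."""
        return sum(scores) / len(scores)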

Action Steps for Investigators:

  • Design evaluation research studies for key questions.
  • Carry out studies and report the results.
  • Where appropriate, use the results to improve project interventions or operations.

Step 7: Monitor and Evaluate

Once project investigators have developed an evaluation plan, the next challenge is to carry it out successfully. This is harder than it may seem. All too often, evaluation is forgotten amid the day-to-day pressures of project implementation and becomes important only when reports are due or publications are being prepared. Under these conditions, the essential formative role of evaluation as a means of improving project interventions and operations is lost.

Strategies that can help ensure that evaluation activities are an integral part of the project include:

  • Establish a routine information system for the project, including inputs (time, resources), outputs (activities completed, student contact hours), and outcomes (student course grades, interim results of evaluation activities). Once the system is established, a member of the project staff should be responsible for keeping it up to date.
  • Include evaluation activities in the project budget. Too often, the costs associated with carrying out project evaluation are underestimated or even omitted in the original project budget. Once you have developed an evaluation plan, estimate the costs of implementing it and include these in your budget.
  • Hold regularly scheduled monitoring and evaluation meetings for project staff. All those who work on a project should be familiar with the project objectives and how they will be evaluated. Each individual should bear responsibility for documenting progress toward process objectives, and for reviewing the evaluation results as the basis for project improvement.
  • Encourage review and revision of the evaluation plan. Although the establishment of project objectives and associated indicators is a fundamental part of project planning, they may change as project interventions develop and are refined. Do not hesitate to revise aspects of the evaluation plan -- to strengthen the research designs, select alternative indicators if the original ones are not sufficiently sensitive to project achievements, or incorporate the results of formative research.

Action Step for Investigators:

  • Implement your plan for project evaluation. Ensure that you have a working management information system for the project as well as sufficient money, time, and people to carry out planned evaluation activities and that regular meetings are held to review and use the evaluation results.

Step 8: Use and Report Evaluation Results

All evaluation is wasted unless the results are used to improve project operations or interventions. This essential step, however, is frequently overlooked. All project reports should include not only evaluation results and reports of progress, but also detailed explanations of how those results were used to reinforce, refine, or modify project activities.

Evaluation activities should be fully incorporated into the project management process (design, implement, evaluate, redesign...). Too often, insufficient time and resources are available for the redesign stage. You should schedule project activities to allow time for reviewing evaluation results and modifying project design after results become available and before the next iteration of the intervention begins.

The purpose of educational science is to improve educational practice. Therefore, you must use evaluation results not only to inform your project and primary audiences but also disseminate them to a wider audience. Dissemination can and should include the publication of your evaluation results in peer-reviewed journals or presentations at professional conferences. In addition, several less formal avenues can be used to share preliminary results and experiences.

Action Steps for Investigators:

  • Ensure that your project timeline includes adequate time for the interim analysis and review of evaluation results and modification of project interventions.
  • Publish and present your evaluation findings in appropriate journals and at relevant conferences.

Conclusions

Evaluation is essential to improve the quality and effectiveness of projects designed to improve communities plagued by violence. The first step in the development of appropriate evaluation activities is to incorporate an evaluation strategy into the project planning process.

So where do you start? Most currently funded projects do not have the personnel or financial resources to design and implement comprehensive evaluations. A practical approach to this dilemma is to proceed incrementally, beginning with what is possible now and gradually increasing evaluation activities as the project develops. Projects should strive to evaluate a few components well, rather than many poorly or not at all.

Investigators may want to focus their short-term evaluation efforts on the most important process and outcome objectives of their projects. From an evaluation perspective, a focus on implementation and immediate outcomes is advantageous because relatively inexpensive and straightforward methods for valid assessment of student performance exist and have been used successfully to evaluate other educational interventions.

A limited set of priority indicators useful to project managers should be identified in an overall evaluation plan. The plan should specify the data sources and how often indicators will be measured. Priority indicators will vary from project to project, based on their goals and specific objectives. Project directors should systematically select the indicators appropriate for their project as a part of the planning process. In addition, State, Regional, or National managers may have uniform indicators to be collected by all projects; you should discuss the selection of indicators with your Project Officer to ensure that any key indicators are adequately covered by your evaluation plan.

For evaluation to lead to improvements in educational programs, it must be clearly defined as a part of the project activities. Investigators can increase the yield from their project evaluation activities by working collaboratively with other disciplines and with the national staff of the project to define appropriate guidelines, evaluation questions, and methods. A coordinated approach will conserve resources and allow comparisons among various approaches.