How to Write Evaluation Reports: Purpose, Structure, Content, Challenges, Tips, and Examples

This article explores how to write effective evaluation reports, covering their purpose, structure, content, and common challenges. It provides tips for presenting evaluation findings effectively and using evaluation reports to improve programs and policies. Examples of well-written evaluation reports and templates are also included.

Table of Contents

  • What is an Evaluation Report?
  • What is the Purpose of an Evaluation Report?
  • Importance of Evaluation Reports in Program Management
  • Structure of an Evaluation Report
  • Best Practices for Writing an Evaluation Report
  • Common Challenges in Writing an Evaluation Report
  • Tips for Presenting Evaluation Findings Effectively
  • Using Evaluation Reports to Improve Programs and Policies
  • Example Evaluation Report Templates
  • Conclusion: Making Evaluation Reports Work for You

What is an Evaluation Report?

An evaluation report is a document that presents the findings, conclusions, and recommendations of an evaluation: a systematic and objective assessment of the performance, impact, and effectiveness of a program, project, policy, or intervention. The report typically includes a description of the evaluation’s purpose, scope, methodology, and data sources, as well as an analysis of the evaluation findings and conclusions, and specific recommendations for program or project improvement.

Evaluation reports can help to build capacity for monitoring and evaluation within organizations and communities, by promoting a culture of learning and continuous improvement. By providing a structured approach to evaluation and reporting, evaluation reports can help to ensure that evaluations are conducted consistently and rigorously, and that the results are communicated effectively to stakeholders.

Evaluation reports may be read by a wide variety of audiences, including staff of government agencies, donors and partner organizations, students, community organizations, and development professionals working on projects or programs comparable to the ones evaluated.

Related: Difference Between Evaluation Report and M&E Reports.

What is the Purpose of an Evaluation Report?

The purpose of an evaluation report is to provide stakeholders with a comprehensive and objective assessment of a program or project’s performance, achievements, and challenges. The report serves as a tool for decision-making, as it provides evidence-based information on the program or project’s strengths and weaknesses, and recommendations for improvement.

The main objectives of an evaluation report are:

  • Accountability: To assess whether the program or project has met its objectives and delivered the intended results, and to hold stakeholders accountable for their actions and decisions.
  • Learning: To identify the key lessons learned from the program or project, including best practices, challenges, and opportunities for improvement, and to apply these lessons to future programs or projects.
  • Improvement: To provide recommendations for program or project improvement based on the evaluation findings and conclusions, and to support evidence-based decision-making.
  • Communication: To communicate the evaluation findings and conclusions to stakeholders, including program staff, funders, policymakers, and the general public, and to promote transparency and stakeholder engagement.

An evaluation report should be clear, concise, and well-organized, and should provide stakeholders with a balanced and objective assessment of the program or project’s performance. The report should also be timely, with recommendations that are actionable and relevant to the current context. Overall, the purpose of an evaluation report is to promote accountability, learning, and improvement in program and project design and implementation.

Importance of Evaluation Reports in Program Management

Evaluation reports play a critical role in program management by providing valuable information about program effectiveness and efficiency. They offer insights into the extent to which programs have achieved their objectives and identify areas for improvement.

Evaluation reports help program managers and stakeholders to make informed decisions about program design, implementation, and funding. They provide evidence-based information that can be used to improve program outcomes and address challenges.

Moreover, evaluation reports are essential in demonstrating program accountability and transparency to funders, policymakers, and other stakeholders. They serve as a record of program activities and outcomes, allowing stakeholders to assess the program’s impact and sustainability.

In short, evaluation reports are a vital tool for program managers and evaluators. They provide a comprehensive picture of program performance, including strengths, weaknesses, and areas for improvement. By utilizing evaluation reports, program managers can make informed decisions to improve program outcomes and ensure that their programs are effective, efficient, and sustainable over time.


Structure of an Evaluation Report

The structure of an evaluation report can vary depending on the requirements and preferences of the stakeholders, but typically it includes the following sections:

  • Executive Summary: A brief summary of the evaluation findings, conclusions, and recommendations.
  • Introduction: An overview of the evaluation context, scope, purpose, and methodology.
  • Background: A summary of the program or initiative being assessed, including its goals, activities, and intended audience(s).
  • Evaluation Questions: A list of the evaluation questions that guided the data collection and analysis.
  • Methodology: A description of the data collection methods used in the evaluation, including the sampling strategy, data sources, and data analysis techniques.
  • Findings: A presentation of the evaluation findings, organized according to the evaluation questions.
  • Conclusions: A summary of the main evaluation findings and conclusions, including an assessment of the program or project’s effectiveness, efficiency, and sustainability.
  • Recommendations: A list of specific recommendations for program or project improvements based on the evaluation findings and conclusions.
  • Lessons Learned: A discussion of the key lessons learned from the evaluation that could be applied to similar programs or projects in the future.
  • Limitations: A discussion of the limitations of the evaluation, including any challenges or constraints encountered during the data collection and analysis.
  • References: A list of references cited in the evaluation report.
  • Appendices: Additional information, such as detailed data tables, graphs, or maps, that support the evaluation findings and conclusions.

The structure of the evaluation report should be clear, logical, and easy to follow, with headings and subheadings used to organize the content and facilitate navigation.

In addition, visual aids such as graphs and charts can make the presentation of data more engaging and easier to understand.
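For instance, here is a minimal sketch (in Python with matplotlib, using invented indicator names and values purely for illustration) of how evaluation findings might be turned into a simple baseline-versus-endline bar chart:

```python
# Minimal sketch: visualizing hypothetical evaluation findings as a bar chart.
# The indicator names and values below are placeholders for illustration only.
import matplotlib.pyplot as plt
import numpy as np

indicators = ["School enrolment", "Attendance rate", "Completion rate"]
baseline = [62, 70, 55]   # hypothetical baseline values (%)
endline = [78, 84, 66]    # hypothetical endline values (%)

x = np.arange(len(indicators))
width = 0.35

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(x - width / 2, baseline, width, label="Baseline")
ax.bar(x + width / 2, endline, width, label="Endline")

ax.set_ylabel("Percent")
ax.set_title("Key indicators: baseline vs. endline (illustrative data)")
ax.set_xticks(x)
ax.set_xticklabels(indicators)
ax.legend()

fig.tight_layout()
plt.savefig("findings_chart.png", dpi=150)
```

A chart like this can sit in the findings section, with the underlying figures reported in a data table in the appendices.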

Best Practices for Writing an Evaluation Report

Writing an effective evaluation report requires careful planning and attention to detail. Here are some best practices to consider when writing an evaluation report:

Begin by establishing the report’s purpose, objectives, and target audience. A clear understanding of these elements will help guide the report’s structure and content.

Use clear and concise language throughout the report. Avoid jargon and technical terms that may be difficult for readers to understand.

Use evidence-based findings to support your conclusions and recommendations. Ensure that the findings are clearly presented using data tables, graphs, and charts.

Provide context for the evaluation by including a brief summary of the program being evaluated, its objectives, and intended impact. This will help readers understand the report’s purpose and the findings.

Include limitations and caveats in the report to provide a balanced assessment of the program’s effectiveness. Acknowledge any data limitations or other factors that may have influenced the evaluation’s results.

Organize the report in a logical manner, using headings and subheadings to break up the content. This will make the report easier to read and understand.

Ensure that the report is well-structured and easy to navigate. Use a clear and consistent formatting style throughout the report.

Finally, use the report to make actionable recommendations that will help improve program effectiveness and efficiency. Be specific about the steps that should be taken and the resources required to implement the recommendations.

By following these best practices, you can write an evaluation report that is clear, concise, and actionable, helping program managers and stakeholders to make informed decisions that improve program outcomes.


Common Challenges in Writing an Evaluation Report

Writing an evaluation report can be a challenging task, even for experienced evaluators. Here are some common challenges that evaluators may encounter when writing an evaluation report:

  • Data limitations: One of the biggest challenges in writing an evaluation report is dealing with data limitations. Evaluators may find that the data they collected is incomplete, inaccurate, or difficult to interpret, making it challenging to draw meaningful conclusions.
  • Stakeholder disagreements: Another common challenge is stakeholder disagreements over the evaluation’s findings and recommendations. Stakeholders may have different opinions about the program’s effectiveness or the best course of action to improve program outcomes.
  • Technical writing skills: Evaluators may struggle with technical writing skills, which are essential for presenting complex evaluation findings in a clear and concise manner. Writing skills are particularly important when presenting statistical data or other technical information.
  • Time constraints: Evaluators may face time constraints when writing evaluation reports, particularly if the report is needed quickly or the evaluation involved a large amount of data collection and analysis.
  • Communication barriers: Evaluators may encounter communication barriers when working with stakeholders who speak different languages or have different cultural backgrounds. Effective communication is essential for ensuring that the evaluation’s findings are understood and acted upon.

By being aware of these common challenges, evaluators can take steps to address them and produce evaluation reports that are clear, accurate, and actionable. This may involve developing data collection and analysis plans that account for potential data limitations, engaging stakeholders early in the evaluation process to build consensus, and investing time in developing technical writing skills.

Tips for Presenting Evaluation Findings Effectively

Presenting evaluation findings effectively is essential for ensuring that program managers and stakeholders understand the evaluation’s purpose, objectives, and conclusions. Here are some tips for presenting evaluation findings effectively:

  • Know your audience: Before presenting evaluation findings, ensure that you have a clear understanding of your audience’s background, interests, and expertise. This will help you tailor your presentation to their needs and interests.
  • Use visuals: Visual aids such as graphs, charts, and tables can help convey evaluation findings more effectively than written reports. Use visuals to highlight key data points and trends.
  • Be concise: Keep your presentation concise and to the point. Focus on the key findings and conclusions, and avoid getting bogged down in technical details.
  • Tell a story: Use the evaluation findings to tell a story about the program’s impact and effectiveness. This can help engage stakeholders and make the findings more memorable.
  • Provide context: Provide context for the evaluation findings by explaining the program’s objectives and intended impact. This will help stakeholders understand the significance of the findings.
  • Use plain language: Use plain language that is easily understandable by your target audience. Avoid jargon and technical terms that may confuse or alienate stakeholders.
  • Engage stakeholders: Engage stakeholders in the presentation by asking for their input and feedback. This can help build consensus and ensure that the evaluation findings are acted upon.

By following these tips, you can present evaluation findings in a way that engages stakeholders, highlights key findings, and ensures that the evaluation’s conclusions are acted upon to improve program outcomes.

Using Evaluation Reports to Improve Programs and Policies

Evaluation reports are crucial tools for program managers and policymakers to assess program effectiveness and make informed decisions about program design, implementation, and funding. By analyzing data collected during the evaluation process, evaluation reports provide evidence-based information that can be used to improve program outcomes and impact.

One of the primary ways that evaluation reports can be used to improve programs and policies is by identifying program strengths and weaknesses. By assessing program effectiveness and efficiency, evaluation reports can help identify areas where programs are succeeding and areas where improvements are needed. This information can inform program redesign and improvement efforts, leading to better program outcomes and impact.

Evaluation reports can also be used to make data-driven decisions about program design, implementation, and funding. By providing decision-makers with data-driven information, evaluation reports can help ensure that programs are designed and implemented in a way that maximizes their impact and effectiveness. This information can also be used to allocate resources more effectively, directing funding towards programs that are most effective and efficient.

Another way that evaluation reports can be used to improve programs and policies is by disseminating best practices in program design and implementation. By sharing information about what works and what doesn’t work, evaluation reports can help program managers and policymakers make informed decisions about program design and implementation, leading to better outcomes and impact.

Finally, evaluation reports can inform policy development and improvement efforts by providing evidence about the effectiveness and impact of existing policies. This information can be used to make data-driven decisions about policy development and improvement efforts, ensuring that policies are designed and implemented in a way that maximizes their impact and effectiveness.

In summary, evaluation reports are critical tools for improving programs and policies. By providing evidence-based information about program effectiveness and efficiency, evaluation reports can help program managers and policymakers make informed decisions, allocate resources more effectively, disseminate best practices, and inform policy development and improvement efforts.

Example Evaluation Report Templates

There are many different templates available for creating evaluation reports. Here are some examples of evaluation report templates that can be used as a starting point for creating your own report:

  • The National Science Foundation Evaluation Report Template – This template provides a structure for evaluating research projects funded by the National Science Foundation. It includes sections on project background, research questions, evaluation methodology, data analysis, and conclusions and recommendations.
  • The CDC Program Evaluation Template – This template, created by the Centers for Disease Control and Prevention, provides a framework for evaluating public health programs. It includes sections on program description, evaluation questions, data sources, data analysis, and conclusions and recommendations.
  • The World Bank Evaluation Report Template – This template, created by the World Bank, provides a structure for evaluating development projects. It includes sections on project background, evaluation methodology, data analysis, findings and conclusions, and recommendations.
  • The European Commission Evaluation Report Template – This template provides a structure for evaluating European Union projects and programs. It includes sections on project description, evaluation objectives, evaluation methodology, findings, conclusions, and recommendations.
  • The UNICEF Evaluation Report Template – This template provides a framework for evaluating UNICEF programs and projects. It includes sections on program description, evaluation questions, evaluation methodology, findings, conclusions, and recommendations.

These templates provide a structure for creating evaluation reports that are well-organized and easy to read. They can be customized to meet the specific needs of your program or project and help ensure that your evaluation report is comprehensive and includes all of the necessary components.

  • World Health Organization Reports
  • Checklist for Assessing USAID Evaluation Reports

Conclusion: Making Evaluation Reports Work for You

In conclusion, evaluation reports are essential tools for program managers and policymakers to assess program effectiveness and make informed decisions about program design, implementation, and funding. By analyzing data collected during the evaluation process, evaluation reports provide evidence-based information that can be used to improve program outcomes and impact.

To make evaluation reports work for you, it is important to plan ahead and establish clear objectives and target audiences. This will help guide the report’s structure and content and ensure that the report is tailored to the needs of its intended audience.

When writing an evaluation report, it is important to use clear and concise language, provide evidence-based findings, and offer actionable recommendations that can be used to improve program outcomes. Including context for the evaluation findings and acknowledging limitations and caveats will provide a balanced assessment of the program’s effectiveness and help build trust with stakeholders.

Presenting evaluation findings effectively requires knowing your audience, using visuals, being concise, telling a story, providing context, using plain language, and engaging stakeholders. By following these tips, you can present evaluation findings in a way that engages stakeholders, highlights key findings, and ensures that the evaluation’s conclusions are acted upon to improve program outcomes.

Finally, using evaluation reports to improve programs and policies requires identifying program strengths and weaknesses, making data-driven decisions, disseminating best practices, allocating resources effectively, and informing policy development and improvement efforts. By using evaluation reports in these ways, program managers and policymakers can ensure that their programs are effective, efficient, and sustainable over time.


The Federal Evaluation Toolkit

Evaluation 101

What is evaluation? How can it help me do my job better? Evaluation 101 provides resources to help you answer those questions and more. You will learn about program evaluation and why it is needed, along with some helpful frameworks that place evaluation in the broader evidence context. Other resources provide helpful overviews of specific types of evaluation you may encounter or be considering, including implementation, outcome, and impact evaluations, and rapid cycle approaches.

What is Evaluation?

Heard the term "evaluation" but are still not quite sure what it means? These resources help you answer the question "What is evaluation?" and learn more about how evaluation fits into a broader evidence-building framework.

What is Program Evaluation? A Beginner's Guide

Program evaluation uses systematic data collection to help us understand whether programs, policies, or organizations are effective. This guide explains how program evaluation can contribute to improving program services. It provides a high-level, easy-to-read overview of program evaluation from start (planning and evaluation design) to finish (dissemination), and includes links to additional resources.

Types of Evaluation

What's the difference between an impact evaluation and an implementation evaluation? What does each type of evaluation tell us? Use these resources to learn more about the different types of evaluation, what they are, how they are used, and what types of evaluation questions they answer.

Common Framework for Research and Evaluation

The Administration for Children & Families Common Framework for Research and Evaluation (OPRE Report #2016-14). Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/sites/default/files/documents/opre/acf_common_framework_for_research_and_evaluation_v02_a.pdf

Building evidence is not one-size-fits-all, and different questions require different methods and approaches. The Administration for Children & Families Common Framework for Research and Evaluation describes, in detail, six different types of research and evaluation approaches – foundational descriptive studies, exploratory descriptive studies, design and development studies, efficacy studies, effectiveness studies, and scale-up studies – and can help you understand which type of evaluation might be most useful for you and your information needs.

Formative Evaluation Toolkit

Formative evaluation toolkit: A step-by-step guide and resources for evaluating program implementation and early outcomes. Washington, DC: Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.

Formative evaluation can help determine whether an intervention or program is being implemented as intended and producing the expected outputs and short-term outcomes. This toolkit outlines the steps involved in conducting a formative evaluation and includes multiple planning tools, references, and a glossary. Check out the overview to learn more about how this resource can help you.

Introduction to Randomized Evaluations

Randomized evaluations, also known as randomized controlled trials (RCTs), are one of the most rigorous evaluation methods used to conduct impact evaluations to determine the extent to which your program, policy, or initiative caused the outcomes you see. They use random assignment of people/organizations/communities affected by the program or policy to rule out other factors that might have caused the changes your program or policy was designed to achieve. This in-depth resource introduces randomized evaluations in a non-technical way, provides examples of RCTs in practice, describes when RCTs might be the right approach, and offers a thorough FAQ about RCTs.
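To make the logic of random assignment concrete, here is a minimal, hypothetical sketch in Python (simulated data only, not drawn from the resource above): because treatment is assigned at random, the difference in average outcomes between the two groups estimates the program's impact.

```python
# Minimal sketch of the logic behind a randomized evaluation (RCT),
# using simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000                         # hypothetical number of participants

# Randomly assign half of the participants to the treatment group.
treated = rng.permutation(np.array([True] * (n // 2) + [False] * (n // 2)))

# Simulate an outcome: a baseline level plus a true effect of +5 for treated units.
baseline_outcome = rng.normal(loc=50, scale=10, size=n)
true_effect = 5.0
outcome = baseline_outcome + true_effect * treated

# Because assignment was random, the difference in group means estimates the impact.
estimated_effect = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated program impact: {estimated_effect:.2f} (true effect: {true_effect})")
```

In a real randomized evaluation the outcome data come from the program itself rather than a simulation, but the comparison of group averages works the same way.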

Rapid Cycle Evaluation at a Glance

Rapid Cycle Evaluation at a Glance (OPRE #2020-152). Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/opre/report/rapid-cycle-evaluation-glance

Rapid Cycle Evaluation (RCE) can be used to efficiently assess implementation and inform program improvement. This brief provides an introduction to RCE, describing what it is, how it compares to other methods, when and how to use it, and includes more in-depth resources. Use this brief to help you figure out whether RCE makes sense for your program.


Academic Evaluations

In our daily lives, we are continually evaluating objects, people, and ideas in our immediate environments. We pass judgments in conversation, while reading, while shopping, while eating, and while watching television or movies, often being unaware that we are doing so. Evaluation is an equally fundamental writing process, and writing assignments frequently ask us to make and defend value judgments.

Evaluation is an important step in almost any writing process, since we are constantly making value judgments as we write. When we write an "academic evaluation," however, this type of value judgment is the focus of our writing.

A Definition of Evaluation

Kate Kiefer, English Professor: Like most specific assignments that teachers give, writing evaluations mirrors what happens so often in our day-to-day lives. Every day we decide whether the temperature is cold enough to need a light or heavy jacket; whether we're willing to spend money on a good book or a good movie; whether the prices at the grocery store tell us to keep shopping at the same place or somewhere else for a better value. Academic tasks rely on evaluation just as often. Is a source reliable? Does an argument convince? Is the article worth reading? So writing evaluation helps students make this often unconscious daily task more overt and prepares them to examine ideas, facts, arguments, and so on more critically.

To evaluate is to assess or appraise. Evaluation is the process of examining a subject and rating it based on its important features. We determine how much or how little we value something, arriving at our judgment on the basis of criteria that we can define.

We evaluate when we write primarily because it is almost impossible to avoid doing so. If right now you were asked to write for five minutes on any subject and were asked to keep your writing completely value-free, you would probably find such an assignment difficult. Readers come to evaluative writing in part because they seek the opinions of other people for one reason or another.

Uses for Evaluation

Consider a time recently when you decided to watch a movie. There were at least two kinds of evaluation available to you through the media: the rating system and critical reviews.

Newspapers and magazines, radio and TV programs all provide critical evaluations for their readers and viewers. Many movie-goers consult more than one media reviewer to adjust for bias. Most movie-goers also consider the rating system, especially if they are deciding to take children to a movie. In addition, most people will also ask for recommendations from friends who have already seen the movie.

Whether professional or personal, judgments like these are based on the process of evaluation. The terminology associated with the elements of this process--criteria, evidence, and judgment--might seem alien to you, but you have undoubtedly used these elements almost every time you have expressed an opinion on something.

Types of Written Evaluation

Quite a few of the assignments writers are given at the university and in the workplace involve the process of evaluation.

One type of written evaluation that most people are familiar with is the review. Reviewers will attend performances, events, or places (like restaurants, movies, or concerts), basing their evaluations on their observations. Reviewers typically use a particular set of criteria they establish for themselves, and their reviews most often appear in newspapers and magazines.

Critical Writing

Reviews are a type of critical writing, but there are other types of critical writing which focus on objects (like works of art or literature) rather than on events and performances. Literary criticism, for instance, is a way of establishing the worth or literary merit of a text on the basis of certain established criteria. When we write about literary texts, we do so using one of many critical "lenses," viewing the text as it addresses matters like form, culture, historical context, gender, and class (to name a few). Deciding whether a text is "good" or "bad" is a matter of establishing which "lens" you are viewing that text through, and using the appropriate set of criteria to do so. For example, we might say that a poem by an obscure Nineteenth Century African American poet is not "good" or "useful" in terms of formal characteristics like rhyme, meter, or diction, but we might judge that same text as "good" or "useful" in terms of the way it addresses cultural and political issues historically.

Response Essays

One very common type of academic writing is the response essay. In many different disciplines, we are asked to respond to something that we read or observe. Some types of response, like the interpretive response, simply ask us to explain a text. However, there are other types of response (like agree/disagree and analytical response) which demand that we make some sort of judgment based on careful consideration of the text, object, or event in question.

Problem Solving Essays

In writing assignments which focus on issues, policies, or phenomena, we are often asked to propose possible solutions for identifiable problems. This type of essay requires evaluation on two levels. First of all, it demands that we use evaluation in order to determine that there is a legitimate problem. And secondly, it demands that we take more than one policy or solution into consideration to determine which will be the most feasible, viable, or effective one, given that problem.

Arguing Essays

Written argument is a type of evaluative writing, particularly when it focuses on a claim of value (like "The death penalty is cruel and ineffective") or policy claim (like "Oakland's Ebonics program is an effective way of addressing standard English deficiencies among African American students in public schools"). In written argument, we advance a claim like one of the above, then support this claim with solid reasons and evidence.

Process Analysis

In scientific or investigative writing, in which experiments are conducted and processes or phenomena are observed or studied, evaluation plays a part in the writer's discussion of findings. Often, these findings need to be both interpreted and analyzed by way of criteria established by the writer.

Source Evaluation

Although not a form of written evaluation in and of itself, source evaluation is a process that is involved in many other types of academic writing, like argument, investigative and scientific writing, and research papers. When we conduct research, we quickly learn that not every source is a good source and that we need to be selective about the quality of the evidence we transplant into our own writing.

Relevance to the Topic

When you conduct research, you naturally look for sources that are relevant to your topic. However, writers also often fall prey to the tendency to accept sources that are just relevant enough. For example, if you were writing an essay on Internet censorship, you might find that your research yielded quite a few sources on music censorship, art censorship, or censorship in general. Though these sources could possibly be marginally useful in an essay on Internet censorship, you will probably want to find more directly relevant sources to serve a more central role in your essay.

Perspective on the Topic

Another point to consider is that even though you want sources relevant to your topic, you might not necessarily want an exclusive collection of sources which agree with your own perspective on that topic. For example, if you are writing an essay on Internet censorship from an anti-censorship perspective, you will want to include in your research sources which also address the pro-censorship side. In this way, your essay will be able to fully address perspectives other than (and sometimes in opposition to) your own.

Credibility

One of the questions you want to ask yourself when you consider using a source is "How credible will my audience consider this source to be?" You will want to ask this question not only of the source itself (the book, journal, magazine, newspaper, home page, etc.) but also of the author. To use an extreme example, for most academic writing assignments you would probably want to steer clear of using a source like the National Enquirer or like your eight year old brother, even though we could imagine certain writing situations in which such sources would be entirely appropriate. The key to determining the credibility of a source/author is to decide not only whether you think the source is reliable, but also whether your audience will find it so, given the purpose of your writing.

Currency of Publication

Unless you are doing research with an historical emphasis, you will generally want to choose sources which have been published recently. Sometimes research and statistics maintain their authority for a very long time, but the more common trend in most fields is that the more recent a study is, the more comprehensive and accurate it is.

Accessibility

When sorting through research, it is best to select sources that are readable and accessible both for you and for your intended audience. If a piece of writing is laden with incomprehensible jargon and incoherent structure or style, you will want to think twice about directing it toward an audience unfamiliar with that type of jargon, structure, or style. In short, it is a good rule of thumb to avoid using any source which you yourself do not understand and are not able to interpret for your audience.

Quality of Writing

When choosing sources, consider the quality of writing in the texts themselves. It is possible to paraphrase from sources that are sloppily written, but quoting from such a source would serve only to diminish your own credibility in the eyes of your audience.

Understanding of Biases

Few sources are truly objective or unbiased. Trying to eliminate bias from your sources will be nearly impossible, but all writers can try to understand and recognize the biases of their sources. For instance, if you were doing a comparative study of 1/2-ton pickup trucks on the market, you might consult the Ford home page. However, you would also need to be aware that this source would have some very definite biases. Likewise, it would not be unreasonable to use an article from Catholic World in an anti-abortion argument, but you would want to understand how your audience would be likely to view that source. Although there is no fail-proof way to determine the bias of a particular journal or newspaper, you can normally sleuth this out by looking at the language in the article itself or in the surrounding articles.

Use of Research

In evaluating a source, you will need to examine the sources that it in turn uses. Looking at the research used by the author of your source, what biases can you recognize? What are the quantity and quality of evidence and statistics included? How reliable and readable do the excerpts cited seem to be?

Considering Purpose and Audience

We typically think of "values" as being personal matters. But in our writing, as in other areas of our lives, values often become matters of public and political concern. Therefore, it is important when we evaluate to consider why we are making judgments on a subject (purpose) and who we hope to affect with our judgments (audience).

Purposes of Evaluation

Your purpose in written evaluation is not only to express your opinion or judgment about a subject, but also to convince, persuade, or otherwise influence an audience by way of that judgment. In this way, evaluation is a type of argument, in which you as a writer are attempting consciously to have an effect on your readers' ways of thinking or acting. If, for example, you are writing an evaluation in which you make a judgment that Mountain Bike A is a better buy than Mountain Bike B, you are doing more than expressing your approval of the merits of Bike A; you are attempting to convince your audience that Bike A is the better buy and, ultimately, to persuade them to buy Bike A rather than Bike B.

Effects of Audience

Kate Kiefer, English Professor: When we evaluate for ourselves, we don't usually take the time to articulate criteria and detail evidence. Our thought processes work fast enough that we often seem to make split-second decisions. Even when we spend time thinking over a decision--like which expensive toy (car, stereo, skis) to buy--we don't often lay out the criteria explicitly. We can't take that shortcut when we write to other folks, though. If we want readers to accept our judgment, then we need to be clear about the criteria we use and the evidence that helps us determine value for each criterion. After all, why should I agree with you to eat at the Outback Steak House if you care only about cost but I care about taste and safe food handling? To write an effective evaluation, you need to figure out what your readers care about and then match your criteria to their concerns. Similarly, you can overwhelm readers with too much detail when they don't have the background knowledge to care about that level of detail. Or you can ignore the expertise of your readers (at your peril) and not give enough detail. Then, as a writer, you come across as condescending, or worse. So targeting an audience is really key to successful evaluation.

In written evaluation, it is important to keep in mind not only your own system of value, but also that of your audience. Writers do not evaluate in a vacuum. Giving some thought to the audience you are attempting to influence will help you to determine what criteria are important to them and what evidence they will require in order to be convinced or persuaded by your evaluative argument. In order to evaluate effectively, it is important that you consider what motivates and concerns your audience.

Criteria and Audience Considerations

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant to an audience comprised mainly of senior citizens from the midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument. Your next step in suiting your criteria to your audience is to determine how you will explain and/or defend not only your judgments, but the criteria supporting them as well. For example, if you are arguing that a Mexican restaurant is excellent because, among other reasons, the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Evidence and Audience Considerations

The amount and type of evidence you use to support your judgments will depend largely on the demands of your audience. Common sense tells us that the more oppositional an audience is, the more evidence will be needed to convince them of the validity of a judgment. For instance, if you were writing a favorable review of La Cocina on the basis of their fiery green chile, you might not need to use a great deal of evidence for an audience of people who like spicy food but have not tried any of the Mexican restaurants in town. However, if you are addressing an audience who is deeply devoted to the green chile at Manuel's, you will need to provide a fair amount of solid evidence in order to persuade them to try another restaurant.

Parts of an Evaluation

When we evaluate, we make an overall value claim about a subject, using criteria to make judgments based on evidence. Often, we also make use of comparison and contrast as strategies for determining the relative worth of the subject we are considering. This section examines these parts of an evaluation and shows how each functions in a successful evaluation.

Overall Claim

An overall claim or judgment is an evaluator's final decision about worth. When we evaluate, we make a general statement about the worth of objects, goods, services, or solutions to problems.

An overall claim or judgment in an evaluation can be as simple as "See this movie!" or "Brand X is a better buy than the name brand." It can also be complex, particularly when the evaluator recognizes certain conditions that affect the judgment: If citizens of our community want to improve air and water quality and are willing to forego 300 additional jobs, then we should not approve the new plant Acme is hoping to build here.

Qualifications

An overall claim or judgment usually requires qualification so that it seems balanced. If judgments are weighted too much to one side, they will sometimes mar the credibility of your argument. If your overall judgment is wholly positive, your evaluation will wind up sounding like propaganda or advertisement. If it is wholly negative, you might present yourself as overly critical, unfair, or undiplomatic. An example of a qualified claim or judgment might be the following: Although La Cocina is not without its faults, it is the best Mexican restaurant in town. Qualifications are almost always positive additions to evaluative arguments, but writers must learn not to overuse them. If you make too many qualifications, your audience will be unable to determine your final position on your subject, and you will appear to be "waffling."

Example Text

Creating more parking lots is a possible solution to the horrendous traffic congestion in Taiwan's major cities. When a new building permit is issued, each building must include a certain number of spaces for parking. However, new construction takes time, and results will be seen only as new buildings are erected. This solution alone is inadequate for most of Taiwan's problem areas, which need a solution whose results will be noticed immediately.

Comment: Notice how this sentence at the end of the paragraph seems to be a formal "thesis" or "claim" which might drive the rest of the essay. Based on this claim, we would assume that the remainder of the essay will deal with the reasons why the proposed policy alone is "inadequate," and will address other possible solutions.

Supporting Judgments

In academic evaluations, the overall claim or judgment is backed up by smaller, more detailed judgments about aspects of a subject being evaluated. Supporting judgments function in the same way that "reasons" function in most arguments. They provide structure and justification for a more general claim. For example, if your overall claim or judgment in your evaluation is

"Although La Cocina is not without its faults, it is the best Mexican restaurant in town,"

one supporting judgment might be

"La Cocina's green chile is superb."

This judgment would be based on criteria you have established, and it would be supported by evidence.

Providing more parking spaces near buildings is not the only act necessary to solve Taiwan's parking problems. A combination of more parking spaces, increased fines, and lowered traffic volume may be necessary to eliminate the nightmare of driving in the cities. In fact, until laws are enforced and fines increased, no number of new parking spaces will impact the congestion seen in downtown areas.

Comment: There are arguably three supporting judgments being made here, as three possible solutions are being suggested to rectify this problem of parking in Taiwan. If we were reading these supporting judgments at the beginning of an essay, we would expect the essay to discuss them in depth, pointing out evidence that these proposed solutions would be effective.

Criteria

When we write evaluations, we consciously adopt certain standards of measurement, or criteria.

Criteria can be concrete standards, like size or speed, or can be abstract, like practicality. When we write evaluations in an academic context, we typically avoid using criteria that are wholly personal, and rely instead on those that are less "subjective" and more likely to be shared by the majority of the audience we are addressing. Choosing appropriate criteria often involves careful consideration of audience demands, values, and concerns.

As an evaluator, you will sometimes discover that you will need to explain and/or defend not only your judgments, but also the criteria informing those judgments. For example, if you are arguing that a Mexican restaurant is excellent because (among other reasons) the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Types of Criteria

If you are evaluating a concrete canoe for an engineering class, you will use concrete criteria such as float time, cost of materials, hydrodynamic design, and so on. If you are evaluating the suitability of a textbook for a history class, you will probably rely on more abstract criteria such as readability, length, and controversial vs. mainstream interpretation of history.

In evaluation, we often rely on concrete, measurable standards according to which subjects (usually objects) may be evaluated. For example, cars may be evaluated according to the criteria of size, speed, or cost.

Many academic evaluations, however, don't focus on objects that we can measure in terms of size, speed, or cost. Rather, they look at somewhat more abstract concepts (problems and solutions often), which we might measure in terms of "effectiveness," "feasibility," or other abstract criteria. When writing this kind of evaluation, it is vital to be as clear as possible when articulating, defining, and using your criteria, since not all readers are likely to understand and agree with these criteria as readily as they would understand and agree with concrete criteria.

Related Information: Abstract Criteria

Abstract criteria are not easily measurable, and they are usually less self-evident, more in need of definition, than concrete criteria. Even though criteria may be abstract, they should not be imprecise. Always state your criteria as clearly and precisely as possible. "Feasibility" is one example of an abstract criterion that a writer might use to evaluate a solution to a problem. Feasibility is the degree of likelihood of success of something like a plan of action or a solution to a problem. "Capability of being implemented" is a way to look at feasibility in terms of solutions to problems. The relative ease with which a solution would be adopted is sometimes a way to look at feasibility. The following example states directly the criteria it is using:

Fire prevention should be the major consideration of a family building a home. By using concrete, the risk of fire is significantly decreased. But that is not all that concrete provides. It is affordable, suitable for all climates, and helps reduce deforestation. Since all of these factors are important, concrete should be demanded more than it is, and it should certainly be used more than wood for homebuilding.

Related Information: Concrete Criteria

Concrete criteria are measurable standards which most people are likely to understand and (usually) to agree with. For example, a person might make use of criteria like "size," "speed," and "cost" when buying a car.

If size is your main criterion, then something with a larger size will receive a more favorable evaluation.

Perhaps the only quality that you desire in a car is low initial cost. You don't need to take into account anything else. In this case, you can pass judgment on the three cars in the local used car lot on the basis of cost alone.

Because the Nissan has the lowest initial price, it receives the most favorable judgment. The evidence is found on the price tag. Each car is compared by way of a single criterion: cost.

Using Clear and Well-defined Criteria

When we evaluate informally (passing judgments during the course of conversation, for instance), we typically assume that our criteria are self-evident and require no explanation. However, in written evaluation, it is often necessary that we clarify and define our criteria in order to make a persuasive evaluative argument.

Criteria That Are Too Vague or Personal

Although we frequently find ourselves needing to use abstract criteria like "feasibility" or "effectiveness," we also must avoid using criteria that are overly vague or personal and difficult to support with evidence. As evaluators, we must steer clear of criteria that are matters of taste, belief, or personal preference. For example, the "best" lamp might simply be the one that you think looks prettiest in your home. If you depend on a criterion like "pretty in my home," and neglect to use more common, shared criteria like "brightness," "cost," and "weight," you are probably relying on a criterion that is too specific to your own personal preferences. To make "pretty in my home" an effective criterion, you would need to explain what "pretty in my home" means and how it might relate to other people's value systems. (For example: "Lamp A is attractive because it is an unoffensive style and color that would be appropriate for many people's decorating tastes.")

Using Criteria Based on the Appropriate "Class" of Subjects

When you make judgments, it is important that you use criteria that are appropriate to the type of object, person, policy, etc. that you are examining. If you are evaluating Steven Spielberg's film Schindler's List, for instance, it is unfair to criticize it because it isn't a knee-slapper. Because "Schindler's List" is a drama and not a comedy, using the criterion of "humor" is inappropriate.

Weighing Criteria

Once you have established criteria for your evaluation of a subject, it is necessary to decide which of these criteria are most important. For example, if you are evaluating a Mexican restaurant and you have arrived at several criteria (variety of items on the menu, spiciness of the food, size of the portions, decor, and service), you need to decide which of these criteria are most critical to your evaluation. If the size of the portions is good, but the service is bad, can you give the restaurant a good rating? What about if the decor is attractive, but the food is bland? Once you have placed your criteria in a hierarchy of importance, it is much easier to make decisions like these.
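One simple way to picture this hierarchy is as a set of weights. The short Python sketch below (with invented criteria, weights, and scores, purely for illustration) shows how an evaluator's weighting of criteria shapes the overall judgment:

```python
# Minimal sketch: combining criteria into an overall judgment using weights.
# The criteria, weights, and scores below are invented for illustration only.
weights = {"food quality": 0.4, "service": 0.3, "decor": 0.1, "portion size": 0.2}

restaurants = {
    "La Cocina": {"food quality": 9, "service": 6, "decor": 7, "portion size": 8},
    "Manuel's":  {"food quality": 7, "service": 8, "decor": 8, "portion size": 7},
}

for name, scores in restaurants.items():
    overall = sum(weights[criterion] * scores[criterion] for criterion in weights)
    print(f"{name}: weighted score = {overall:.1f}")
```

In an essay, of course, the weighting is argued in prose rather than computed, but assigning rough weights can help you decide which judgments deserve the most space and evidence.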

When we evaluate, we must consider the audience we hope to influence with our judgments. This is particularly true when we decide which criteria are informing (and should inform) these judgments.

After establishing some criteria for your evaluation, it is important to ask yourself whether or not your audience is likely to accept those criteria. It is crucial that they do accept the criteria if, in turn, you expect them to accept the supporting judgments and overall claim or judgment built on them.

Related Information: Explaining and Defending Criteria

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant for an audience comprised mainly of senior citizens from the Midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument.

Related Information: Understanding Audience Criteria

How Background Experience Influences Criteria

Laura Thomas, Composition Lecturer: Your background experience influences the criteria that you use in evaluation. If you know a lot about something, you will have a good idea of what criteria should govern your judgments. On the other hand, it's hard if you don't know enough about what you're judging. Sometimes you have to research first in order to come up with useful criteria. For example, I recently went shopping for a new pair of skis for the first time in fifteen years. When I began shopping, I realized that I didn't even know what questions to ask anymore. The last time I had bought skis, you judged them according to whether they had a foam core or a wood core. But I had no idea what the important considerations were anymore.

Evidence

Evidence consists of the specifics you use to reach your conclusion or judgment. For example, if you judge that "La Cocina's green chile is superb" on the basis of the criterion, "Good green chile is so fiery that you can barely eat it," you might offer evidence like the following:

"I drank an entire pitcher of water on my own during the course of the meal."
"Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Related Information: Example Text

In the following paragraph, note how the reference to the New York Times backs up the evidence offered in the previous sentence:

Since killer whales have small lymphatic systems, they catch infections more easily when held captive (Obee 23). The orca from the movie "Free Willy," Keiko, developed a skin disorder because the water he was living in was not cold enough. This infection was a result of the combination of tank conditions and the animal's immune system, according to a New York Times article.

Types of Evidence

Evidence for academic evaluations is usually of two types: concrete detail and analytic detail. Analytic detail comes from critical thinking about abstract elements of the thing being evaluated. It will also include quotations from experts. Concrete detail comes from sense perceptions and measurements--facts about color, speed, size, texture, smell, taste, and so on. Concrete details are more likely to support concrete criteria (as opposed to abstract criteria) used in judging objects. Analytic detail will more often support abstract criteria (as opposed to concrete criteria), like the criterion "feasibility," discussed in the section on criteria. Analytic detail also appears most often in academic evaluations of solutions to problems, although such solutions can also sometimes be evaluated according to concrete criteria.

What Kinds of Evidence Work

Good evidence ranges from personal experience to interviews with experts to published sources. The kind of evidence that works best for you will depend on your audience and often on the writing assignment you have been given.

Evidence and the Writing Assignment

When you choose evidence to support the judgments you are making in an evaluation, it will be important to consider what type of evaluation you are being asked to do. If, for instance, you are being asked to review a play you have attended, your evidence will most likely consist primarily of your own observations. However, if your assignment asks you to compare and contrast two potential national health care policies (toward deciding which is the better one), your evidence will need to be more statistical, more dependent on reputable sources, and more directed toward possible effects or outcomes of your judgment.

Comparison and Contrast

Comparison and contrast is the process of positioning an item or concept being evaluated among other like items or concepts. We are all familiar with this technique as it's used in the marketing of products: soft drink "taste tests," comparisons of laundry detergent effectiveness, and the like. It is a way of determining the value of something in relation to comparable things. For example, if you have made the judgment that "La Cocina's green chile is superb" and you have offered evidence of the spiciness and the flavor of the chile, you might also use comparison by giving your audience a scale on which to base judgment: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park."

In the following example, the writer compares concrete (which is made from limestone) with wood to show that concrete is a better building material. Although this comparison could be developed much more, it still begins to point out the relative merits of concrete:

Concrete is a feasible substitute for wood as a building material. Concrete comes from a rock called limestone. Limestone is found all over the United States. By using limestone instead of wood, the dependence on dwindling forest reserves would decrease. There are more sedimentary rocks than there are forests left in this country, and they are more evenly distributed. For this reason, it is quite possible to switch from wood to concrete as the primary building material for residential construction.

Determining Relative Worth

Comparing and contrasting rarely means placing the item or concept being evaluated in relation to another item or concept that is obviously grossly inferior. For instance, if you are attempting to demonstrate the value of a Cannondale mountain bike, it would be foolish to compare it with a Huffy. However, it would be useful to compare it with a Klein, arguably a similar bicycle. In this type of maneuver, you are not comparing good with bad; rather, you are deciding which bike is better and which bike is worse. In order to determine relative worth in this way, you will need to be very careful in defining the criteria you are using to make the comparison.

Using Comparison and Contrast Effectively

In order to make comparison and contrast function well in evaluation, it is necessary to be attentive to two things: 1) keeping the focus on the item or concept under consideration, and 2) supporting comparative judgments with evidence. When using comparison and contrast, writers must remember that they are using comparable items or concepts only as a way of demonstrating the worth of the main item or concept under consideration. It is easy to lose focus when using this technique, because of the temptation to evaluate two (or more) items or concepts rather than just the one under consideration.

It is equally important to remember that judgments made on the basis of comparison and contrast need to be supported with evidence. It is not enough to assert that "La Cocina's chile is even more fiery and flavorful than Manuel's." It will be necessary to support this judgment with evidence, showing in what ways La Cocina's chile is more flavorful: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

The Process of Writing an Evaluation

A variety of writing assignments call for evaluation. Bearing in mind the various approaches that might be demanded by those particular assignments, this section offers some general strategies for formulating a written evaluation.

Choosing a Topic for Evaluation

Sometimes your topic for evaluation will be dictated by the writing assignment you have been given. Other times, though, you will be required to choose your own topic. Common sense tells you that it is best to choose something about which you already have a base knowledge. For instance, if you are a skier, you might want to evaluate a particular model of skis. In addition, it is best to choose something that is tangible, observable, and/or researchable. For example, if you chose a topic like "methods of sustainable management of forests," you would know that there would be research to support your evaluation. Likewise, if you chose to evaluate a film like Pulp Fiction , you could rent the video and watch it several times in order to get the evidence you needed. However, you would have fewer options if you were to choose an abstract concept like "loyalty" or "faith." When evaluating, it is usually best to steer clear of abstractions like these as much as possible.

Brainstorming Possible Judgments

Once you have chosen a topic, you might begin your evaluation by thinking about what you already know about the topic. In doing this, you will be coming up with possible judgments to include in your evaluation. Begin with a tentative overall judgment or claim. Then decide what supporting judgments you might make to back that claim. Keep in mind that your judgments will likely change as you collect evidence for your evaluation.

Determining a Tentative Overall Judgment

Start by making an overall judgment on the topic in question, based on what you already know. For instance, if you were writing an evaluation of sustainable management practices in forestry, your tentative overall judgment might be: "Sustainable management is a viable way of dealing with deforestation in old growth forests."

Brainstorming Possible Supporting Judgments

With a tentative overall judgment in mind, you can begin to brainstorm judgments (or reasons) that could support your overall judgment by asking the question, "Why?" For example, asking "Why?" of the tentative overall judgment "Sustainable management is a viable way of dealing with deforestation in old growth forests" might yield the following supporting judgments:

  • Sustainable management allows for continued support of the logging industry.
  • It eliminates much unnecessary waste.
  • It is much better for the environment than unrestricted, traditional forestry methods.
  • It is less expensive than these traditional methods.

Anticipating Changes to Your Judgments After Collecting Evidence

When brainstorming possible judgments this early in the writing process, it is necessary to keep an open mind as you enter into the stage in which you collect evidence. Once you have done observations, analysis, or research, you might find that you are unable to advance your tentative overall judgment. Or you might find that some of the supporting judgments you came up with are not true or are not supportable. Your findings might also point you toward other judgments you can make in addition to the ones you are already making.

Defining Criteria

To prepare to organize and write your evaluation, it is important to clearly define the criteria you are using to make your judgments. These criteria govern the direction of the evaluation and provide structure and justification for the judgments you make.

Looking at the Criteria Informing Your Judgments (Working Backwards)

We often work backwards from the judgments we make, discovering what criteria we are using on the basis of what our judgments look like. For instance, our tentative judgments about sustainable management practices are as follows:

  • Sustainable management allows for continued support of the logging industry.
  • It eliminates much unnecessary waste.
  • It is much better for the environment than unrestricted, traditional forestry methods.
  • It is less expensive than these traditional methods.
If we were to analyze these judgments, asking ourselves why we made them, we would see that we used the following criteria: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost.

Thinking of Additional Criteria

Once you have identified the criteria informing your initial judgments, you will want to determine what other criteria should be included in your evaluation. For example, in addition to the criteria you've already come up with (wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost), you might include the criterion of preservation of the old growth forests.

Comparing Your Criteria with Those of Your Audience

In deciding which criteria are most important to include in your evaluation, it is necessary to consider the criteria your audience is likely to find important. Let's say we are directing our evaluation of sustainable management methods toward an audience of loggers. If we look at our list of criteria--wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests--we might decide that wellbeing of the logging industry and cost are the criteria most important to loggers. At this point, we would also want to identify additional criteria the audience might expect us to address: perhaps feasibility, labor requirements, and efficiency.

Deciding Which Criteria Are Most Important

Once you have developed a long list of possible criteria for judging your subject (in this case, sustainable management methods), you will need to narrow the list, since it is impractical and ineffective to use all possible criteria in your essay. To decide which criteria to address, determine which are least dispensable, both to you and to your audience. Your own criteria were: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests. Those you anticipated for your audience were: feasibility, labor requirements, and efficiency. In the written evaluation, you might choose to address those criteria most important to your audience, with a couple of your own included. For example, your list of indispensable criteria might look like this: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests.

Criteria and Assumptions

Stephen Reid, English Professor: Warrants (to use a term from argumentation) come on the scene when we ask why a given criterion should be used or should be acceptable in evaluating the particular text, product, or performance in question. When we ask WHY a particular criterion should be important (let's say, strong performance in an automobile engine, quickly moving plot in a murder mystery, outgoing personality in a teacher), we are getting at the assumptions (i.e., the warrant) behind why the data is relevant to the claim of value we are about to make. Strong performance in an automobile engine might be a positive criterion in an urban, industrialized environment, where traveling at highway speeds on American interstates is important. But we might disagree about whether strong performance (accompanied by lower mileage) might be important in a rural European environment where gas costs are several dollars a litre. Similarly, an outgoing personality for a teacher might be an important standard of judgment or criterion in a teacher-centered classroom, but we could imagine another kind of decentered class where interpersonal skills are more important than teacher personality. By QUESTIONING the validity and appropriateness of a given criterion in a particular situation, we are probing for the ASSUMPTIONS or WARRANTS we are making in using that criterion in that particular situation. Thus, criteria are important, but it is often equally important for writers to discuss the assumptions that they are making in choosing the major criteria in their evaluations.

Collecting Evidence

Once you have established the central criteria you will use in your evaluation, you will investigate your subject in terms of these criteria. In order to investigate the subject of sustainable management methods, you would more than likely have to research whether these methods stand up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. However, library research is only one of the techniques evaluators use. Depending on the type of evaluation being made, the evaluator might use such methods as observation, field research, and analysis.

Thinking About What You Already Know

The best place to start looking for evidence is with the knowledge you already possess. To do this, you might try brainstorming, clustering, or freewriting ideas.

Library Research

When you are evaluating policies, issues, or products, you will usually need to conduct library research to find the evidence your evaluation requires. It is always a good idea to check journals, databases, and bibliographies relevant to your subject when you begin research. It is also helpful to speak with a reference librarian about how to get started.

Observation

When you are asked to evaluate a performance, event, place, object, or person, one of the best methods available is simple observation. What makes observation not so simple is the need to focus on criteria you have developed ahead of time. If, for instance, you are reviewing a student production of Hamlet , you will want to review your list of criteria (perhaps quality of acting, costumes, faithfulness to the text, set design, lighting, and length of time before intermission) before attending the play. During or after the play, you will want to take as many notes as possible, keeping these criteria in mind.

Field Research

To expand your evaluation beyond your personal perspective or the perspective of your sources, you might conduct your own field research. Typical field research techniques include interviewing, taking a survey, administering a questionnaire, and conducting an experiment. These methods can help you support your judgment and can sometimes help you determine whether or not your judgment is valid.

Analysis

When you are asked to evaluate a text, analysis is often the technique you will use in collecting evidence. If you are analyzing an argument, you might use the Toulmin Method. Other texts might not require such a structured analysis but might be better addressed by more general critical reading strategies.

Applying Criteria

After developing a list of indispensable criteria, you will need to "test" the subject according to these criteria. At this point, it will probably be necessary to collect evidence (through research, analysis, or observation) to determine, for example, whether sustainable management methods would hold up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. One way of recording the results of this "test" is by putting your notes in a three-column log.
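
If you prefer to keep this log electronically, the brief sketch below shows one possible way to do it. This is a hypothetical illustration, not part of the original guide: the `LogEntry` structure and the sample entries (drawn loosely from the sustainable-forestry example above) are assumptions for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    criterion: str  # the standard of value being applied
    evidence: str   # the observation, research finding, or analysis collected
    judgment: str   # the judgment that the evidence supports (or "pending")

# Hypothetical entries based on the sustainable-forestry example above
log = [
    LogEntry(
        criterion="Wellbeing of the logging industry",
        evidence="Industry reports on harvest volumes under selective-cutting plans",
        judgment="Sustainable management allows continued support of the logging industry",
    ),
    LogEntry(
        criterion="Cost",
        evidence="Comparative cost figures gathered through library research",
        judgment="Pending until evidence is collected",
    ),
]

# Print the log as three columns for quick review while drafting
for entry in log:
    print(f"{entry.criterion:<40} | {entry.evidence:<65} | {entry.judgment}")
```

Whatever form the log takes, the point is the same: each criterion sits next to the evidence collected for it and the judgment that evidence supports, so gaps in the evidence become visible before drafting begins.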

Organizing the Evaluation

One of the best ways to organize your information in preparation for writing is to construct an informal outline of sorts. Outlines might be arranged according to criteria, comparison and contrast, chronological order, or causal analysis. They also might follow what Robert K. Miller and Suzanne S. Webb refer to in their book Motives for Writing (2nd ed.) as "the pattern of classical oration for evaluations" (286). In addition to deciding on a general structure for your evaluation, it will be necessary to determine the most appropriate placement for your overall claim or judgment.

Placement of the Overall Claim or Judgment

Writers can state their final position at the beginning or the end of an essay. The same is true of the overall claim or judgment in a written evaluation.

When you place your overall claim or judgment at the end of your written evaluation, you are able to build up to it and to demonstrate how your evaluative argument (evidence, explanation of criteria, etc.) has led to that judgment.

Writers of academic evaluations normally don't need to keep readers in suspense about their judgments. By stating the overall claim or judgment early in the paper, writers help readers both to see the structure of the essay and to accept the evidence as convincing proof of the judgment. (Writers of evaluations should remember, of course, that there is no rule against stating the overall claim or judgment at both the beginning and the end of the essay.)

Organization by Criteria

The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how a writer might arrange an evaluation according to criteria:

Introductory paragraphs: information about the restaurant (location, hours, prices), general description of Chinese restaurants today, and overall claim: The Hunan Dynasty is reliable, a good value, and versatile.
Criterion #1/Judgment: Good restaurants should have an attractive setting and atmosphere/Hunan Dynasty is attractive.
Criterion #2/Judgment: Good restaurants should give strong priority to service/Hunan Dynasty has, despite an occasional glitch, expert service.
Criterion #3/Judgment: Restaurants that serve modestly priced food should have quality main dishes/Main dishes at Hunan Dynasty are generally good but not often memorable. (Note: The most important criterion--the quality of the main dishes--is saved for last.)
Concluding paragraphs: Hunan Dynasty is a top-flight neighborhood restaurant (Reid 338).

Organization by Comparison and Contrast

Sometimes comparison and contrast is not merely a strategy used in part of an evaluation, but is the strategy governing the organization of the entire essay. The following are examples from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing two ways that a writer might organize an evaluation according to comparison and contrast.

Introductory paragraph(s)

Thesis [or overall claim/judgment]: Although several friends recommended the Yakitori, we preferred the Unicorn for its more authentic atmosphere, courteous service, and well-prepared food. [Notice that the criteria are stated in this thesis.]

Authentic atmosphere: Yakitori vs. Unicorn

Courteous service: Yakitori vs. Unicorn

Well-prepared food: Yakitori vs. Unicorn

Concluding paragraph(s) (Reid 339)

A second option follows a block pattern, treating each restaurant in turn:

The Yakitori: atmosphere, service, and food

The Unicorn: atmosphere, service, and food as compared to the Yakitori

Concluding paragraph(s) (Reid 339).

Organization by Chronological Order

Writers often follow chronological order when evaluating or reviewing events or performances. This method of organization allows the writer to evaluate portions of the event or performance in the order in which they occur.

Organization by Causal Analysis

When using analysis to evaluate places, objects, events, or policies, writers often focus on causes or effects. The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how one writer organizes an evaluation of a Goya painting by discussing its effects on the viewer.

Criterion #1/Judgment: The iconography, or use of symbols, contributes to the powerful effect of this picture on the viewer.

Evidence: The church as a symbol of hopefulness contrasts with the cruelty of the execution. The spire on the church emphasizes for the viewer how powerless the Church is to save the victims.

Criterion #2/Judgment: The use of light contributes to the powerful effect of the picture on the viewer.

Evidence: The light casts an intense glow on the scene, and its glaring, lurid, and artificial qualities create the same effect on the viewer that modern art sometimes does.

Criterion #3/Judgment: The composition or use of formal devices contributes to the powerful effect of the picture on the viewer.

Evidence: The diagonal lines scissor the picture into spaces that give the viewer a claustrophobic feeling. The corpse is foreshortened, so that it looks as though the dead man is bidding the viewer welcome (Reid 340).

Pattern of Classical Oration for Evaluations

Robert K. Miller and Suzanne S. Webb, in their book Motives for Writing (2nd ed.), discuss what they call "the pattern of classical oration for evaluations," which incorporates opposing evaluations as well as supporting reasons and judgments. This pattern is as follows:

  1. Present your subject. (This discussion includes any background information, description, acknowledgement of weaknesses, and so forth.)
  2. State your criteria. (If your criteria are controversial, be sure to justify them.)
  3. Make your judgment. (State it as clearly and emphatically as possible.)
  4. Give your reasons. (Be sure to present good evidence for each reason.)
  5. Refute opposing evaluations. (Let your reader know you have given thoughtful consideration to opposing views, since such views exist.)
  6. State your conclusion. (You may restate or summarize your judgment.) (Miller and Webb 286-7)

Example: Part of an Outline for an Evaluation

The following is a portion of an outline for an evaluation, organized by way of supporting judgments or reasons. Notice that this pattern would need to be repeated (using criteria other than the fieriness of the green chile) in order to constitute a complete evaluation proving that "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Evaluation of La Cocina, a Mexican Restaurant

Intro Paragraph Leading to Overall Judgment: "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Supporting Judgment: "La Cocina's green chile is superb."

Criterion used to make this judgment: "Good green chile is so fiery that you can barely eat it."

Evidence in support of this judgment: "I drank an entire pitcher of water on my own during the course of the meal" or "Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Supporting Judgment made by way of Comparison and Contrast: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park itself."

Evidence in support of this judgment: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato, and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

Writing the Draft

If you have an outline to follow, writing a draft of a written evaluation is simple. Stephen Reid, in his Prentice Hall Guide for College Writers , recommends that writers maintain focus on both the audience they are addressing and the central criteria they want to include. Such a focus will help writers remember what their audience expects and values and what is most important in constructing an effective and persuasive evaluation.

Guidelines for Revision

In his Prentice Hall Guide for College Writers , 4th ed., Stephen Reid offers some helpful tips for revising written evaluations. These guidelines are reproduced here and grouped as follows:

Examining Criteria

Criteria are standards of value. They contain categories and judgments, as in "good fuel economy," "good reliability," or "powerful use of light and shade in painting." Some categories, such as "price," have clearly implied judgments ("low price"), but make sure that your criteria refer implicitly or explicitly to a standard of value.

Examine your criteria from your audience's point of view. Which criteria are most important in evaluating your subject? Will your readers agree that the criteria you select are indeed the most important ones? Will changing the order in which you present your criteria make your evaluation more convincing? (Reid 342)

Balancing the Evaluation

Include both positive and negative evaluations of your subject. If all of your judgments are positive, your evaluation will sound like an advertisement. If all of your judgments are negative, your readers may think you are too critical (Reid 342).

Using Evidence

Be sure to include supporting evidence for each criterion. Without any data or support, your evaluation will be just an opinion that will not persuade your reader.

If you need additional evidence to persuade your readers, [go back to the "Collecting" stage of this process] (Reid 343).

Avoiding Overgeneralization

Avoid overgeneralizing your claims. If you are evaluating only three software programs, you cannot say that Lotus 1-2-3 is the best business program around. You can say only that it is the best among the group or the best in the particular class that you measured (Reid 343).

Making Appropriate Comparisons

Unless your goal is humor or irony, compare subjects that belong in the same class. Comparing a Yugo to a BMW is absurd because they are not similar cars in terms of cost, design, or purpose (Reid 343).

Checking for Accuracy

If you are citing other people's data or quoting sources, check to make sure your summaries and data are accurate (Reid 343).

Working on Transitions, Clarity, and Style

Signal the major divisions in your evaluation to your reader using clear transitions, key words, and paragraph hooks. At the beginning of new paragraphs or sections of your essay, let your reader know where you are going.

Revise sentences for directness and clarity.

Edit your evaluation for correct spelling, appropriate word choice, punctuation, usage, and grammar (Reid 343).

Nesbitt, Laurel, Kathy Northcut, & Kate Kiefer. (1997). Academic Evaluations. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=47

Reporting in Evaluation


The general purpose of reporting is to communicate information to interested audiences and to help them make use of information from the evaluation. Reporting is not a static, one-time event nor is it necessarily a product, such as a written report. Rather, reporting is an ongoing process that might include oral, visual, or written communication that commences before an evaluation begins and likely continues beyond its conclusion.


Brinkerhoff, R.O., Brethower, D.M., Hluchyj, T., Nowakowski, J.R. (1983). Reporting in Evaluation. In: Program Evaluation. Evaluation in Education and Human Services, vol 4. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-6757-4_5


Clinical Evaluation: What is a Clinical Evaluation Report (CER)?


Safety and efficacy are the key qualities of a medical device: they determine whether the device is suitable for its intended use or intended purpose. With the increased stringency of regulations such as the European Medical Device Regulation (MDR) and guidelines set by the U.S. Food and Drug Administration (FDA), the process of bringing a medical device to market has become more rigorous than ever.

A central activity by which medical device manufacturers demonstrate the safety and efficacy of their devices is the clinical evaluation, which is documented in the Clinical Evaluation Report (CER). This document therefore plays a critical role in gaining market access approval, and not only in Europe and the United States: most other countries have similar market access requirements or even mutual acceptance arrangements for MDR- or FDA-compliant medical devices.

Clinical Evaluation: What is it and who does it impact?

Clinical evaluation involves the systematic assessment of clinical data pertaining to a medical device to verify its safety and performance. This evaluation is not a one-time event but an ongoing process throughout the device's lifecycle. It encompasses the collection, appraisal, and analysis of clinical data from various sources, including clinical trials , post-market surveillance , and scientific literature.

Clinical evaluation impacts various stakeholders within the medical device industry.

Manufacturers must conduct thorough clinical evaluations to obtain and maintain regulatory approval for their products. For manufacturers, clinical evaluation is the core process for evaluating the results of product design and realization.

Regulatory bodies, such as the European Notified Bodies and the FDA, rely on robust clinical evaluation data to assess the safety and effectiveness of medical devices before granting market authorization. As authorities, they oversee manufacturers' activities to ensure that devices are safe and effective and that public health is protected.

Healthcare professionals can be confident that the devices they use perform according to their documentation and will be effective for their intended purpose when handled correctly.

Patients benefit from the assurance that medical devices have undergone rigorous clinical evaluation to ensure their safety and efficacy, so that their health problems can be addressed effectively.

What is a Clinical Evaluation Report (CER)?

A Clinical Evaluation Report (CER) is a comprehensive document that summarizes the results of the clinical evaluation process for a medical device. It provides a detailed analysis of the clinical data collected, along with an assessment of the device's safety, performance, and intended use. The CER serves as a critical tool for demonstrating compliance with regulatory requirements and supporting the marketing authorization of medical devices.

How to create a Clinical Evaluation Report?

Creating a Clinical Evaluation Report involves a systematic and well-defined process, which includes considering the role of the technological state-of-the-art and the interaction with product realization processes. Risk management is a key part of clinical evaluation and can be understood as the central process for feeding clinical evaluation information into the product lifecycle phases. Clinical evaluation can thus be seen as the final confirmation of risk management results and, with it, the demonstration of the product's safety and performance. Here are the key steps:

Data Collection: Gather relevant clinical data from various sources, including literature review, clinical trials, post-market surveillance, and public reporting sources. Ensure the data is comprehensive, up-to-date, and relevant to the device under evaluation. Take into account the technological state-of-the-art and how it is met, incorporated, or exceeded by the device under consideration, and the resulting impact on its safety and performance. Make references to the technological state-of-the-art and to equivalent devices for the intended purpose. If clinical data are not available in the literature, they need to be generated through clinical investigations and gathered from other sources of patient and user feedback, including public databases of feedback on medical devices.

Data Appraisal: Evaluate the quality and reliability of the collected clinical data. Assess factors such as study design, patient population, endpoints, and statistical analysis to determine the strength of the evidence supporting the device's safety and performance. When using clinical data from available device studies, take into account the equivalence of those devices and the technological state-of-the-art, and their impact on the interpretation of the data.

Risk Assessment: Conduct a thorough risk assessment to identify and mitigate potential hazards associated with the device's use. Evaluate factors such as device design, intended use, patient population, and clinical outcomes to assess the risk-benefit profile of the device. Consider the technological state-of-the-art and the extent to which it is reflected in the device's design when evaluating and managing risks.

Clinical Evaluation: Analyze the collected data and risk assessment results to assess the device's safety, performance, and clinical effectiveness. Consider factors such as clinical outcomes, adverse events, patient satisfaction, and comparative effectiveness to draw conclusions about the device's clinical performance. Refer to the technological state-of-the-art and the device design to benchmark against existing solutions on the market.

Interaction with Product Realization Processes: Ensure that the clinical evaluation process is integrated with the product realization processes of the medical device company. This includes considering the technological state-of-the-art during the design and development stages, as well as incorporating feedback from clinical evaluation into future iterations of the device. The flow of information between clinical evaluation and product realization processes should be seamless to ensure the device meets the necessary safety and performance requirements. The best means of integrating clinical evaluation information with product design and realization is risk management, together with a thorough comparison of the device design and outcomes with the technological state-of-the-art for the intended purpose.

Document Preparation: Compile the findings of the clinical evaluation into a comprehensive Clinical Evaluation Report. Ensure the report is well-organized, clearly written, and supported by relevant data and analysis. Include details on the device's intended use, indications for use, clinical study results, risk assessment findings, and conclusions. Depending on the device classification, the CER must be updated regularly with information from post-market surveillance. Post-market surveillance reports must also demonstrate how feedback from the market was incorporated, through risk management, into product design or realization changes, and that the resulting risks were evaluated accordingly with regard to the device's safety and performance.

Review and Approval: Review the completed Clinical Evaluation Report internally to ensure accuracy, consistency, and compliance with regulatory requirements. Obtain any necessary approvals from regulatory authorities or notified bodies before submitting the report as part of the device registration or marketing application. Consider the feedback received during the product realization processes and incorporate it into the final report, if applicable.

Steps to Prepare Your Medical Device Company

Preparing a medical device company for the creation of Clinical Evaluation Reports requires careful planning and execution. Here are some essential steps to consider:

Define the intended purpose: The intended purpose or intended use of the medical device is the basis for all regulatory evaluation steps. The sooner it is defined, the sooner the work for clinical evaluation (and beyond) can be aligned accordingly. Since the scope of applicable regulations depends on the classification of the device according to its intended purpose, the effort required for product and process documentation and for clinical evaluation varies strongly with the intended purpose of a medical device.

Establish a Regulatory Strategy: Develop a clear regulatory strategy that outlines the requirements for clinical evaluation and documentation based on the target markets and regulatory pathways for your medical devices. This strategy should include identifying and defining clinical requirements and documenting them in a way that allows them to be referenced in detail. The regulatory strategy is based on the technological state-of-the-art and the intended purpose, and it can include considering the equivalence of devices already on the market.

Build a Cross-functional Team: Assemble a multidisciplinary team with expertise in clinical research, regulatory affairs, quality management, and product development to oversee the clinical evaluation process and report preparation. This team should also be responsible for linking clinical risks to the corresponding clinical requirements.

Implement a digital Quality Management System (QMS): Implement a robust Quality Management System that encompasses procedures and processes for conducting clinical evaluations, documenting findings, and ensuring compliance with regulatory requirements. This system should facilitate continuous monitoring and updating of clinical requirements, evidence, and risk assessments throughout the product lifecycle. To ensure efficiency, information about regulatory requirements and the product lifecycle should be managed as discrete information items that maintain detailed traceability and allow automated document creation from item selections.

Stay Informed: Stay abreast of changes to regulatory requirements, guidelines, and best practices related to clinical evaluation and reporting. Attend industry conferences, workshops, and training sessions to stay up-to-date on the evolving regulatory landscape. This includes staying informed about the latest clinical data sources and mapping them to specific clinical requirements. Market feedback data for equivalent devices should also be referenced.

Invest in Training and Resources: Provide training and resources to your team members to enhance their understanding of clinical evaluation principles, methodologies, and regulatory requirements. Invest in tools and technologies that facilitate data collection, analysis, and reporting for clinical evaluations. This includes utilizing automated report generation to streamline the preparation of comprehensive Clinical Evaluation Reports (CERs).

Engage with Regulatory Authorities: Establish open communication channels with regulatory authorities and notified bodies to seek guidance, clarification, and feedback on clinical evaluation requirements and report submissions. Proactively address any questions or concerns raised by regulatory authorities to expedite the approval process. Clear traceability between clinical requirements, evidence, and risk assessments also makes it easier to demonstrate regulatory compliance.

By following these steps and using an item-based approach for process and product documentation / information management, medical device companies can navigate the regulatory landscape with confidence and bring safe and effective devices to market for the benefit of patients worldwide.
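
As a purely illustrative sketch of the item-based approach described above, the following shows one possible way to model clinical requirements, supporting evidence, and related risks as linked information items and to assemble a draft report section from them automatically. The item types and example entries are hypothetical assumptions, not a prescribed CER format or the API of any particular QMS tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    source: str      # e.g. a literature reference or clinical investigation ID
    summary: str

@dataclass
class Risk:
    risk_id: str
    description: str
    mitigation: str

@dataclass
class ClinicalRequirement:
    req_id: str
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    risks: List[Risk] = field(default_factory=list)

def render_cer_section(requirements: List[ClinicalRequirement]) -> str:
    """Assemble a draft report section from traceable information items."""
    lines = ["Clinical requirements, evidence and risk traceability", ""]
    for req in requirements:
        lines.append(f"{req.req_id}: {req.statement}")
        for ev in req.evidence:
            lines.append(f"  Evidence ({ev.source}): {ev.summary}")
        for risk in req.risks:
            lines.append(f"  Risk {risk.risk_id}: {risk.description} -- mitigated by: {risk.mitigation}")
        lines.append("")
    return "\n".join(lines)

# Hypothetical example item showing the traceability chain
requirement = ClinicalRequirement(
    req_id="CR-001",
    statement="The device maintains measurement accuracy within the stated tolerance.",
    evidence=[Evidence(source="Literature reference 12",
                       summary="Equivalent-device study reporting accuracy within tolerance.")],
    risks=[Risk(risk_id="R-07", description="Measurement drift over time",
                mitigation="Periodic calibration as described in the instructions for use")],
)

print(render_cer_section([requirement]))
```

Because each requirement keeps explicit references to its evidence and its risks, the same items can be reused when the report is updated with post-market surveillance information, rather than rewriting the document from scratch.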


Understanding REACH

REACH is a regulation of the European Union, adopted to improve the protection of human health and the environment from the risks that can be posed by chemicals, while enhancing the competitiveness of the EU chemicals industry. It also promotes alternative methods for the hazard assessment of substances in order to reduce the number of tests on animals. 

In principle, REACH applies to all chemical substances; not only those used in industrial processes but also in our day-to-day lives, for example in cleaning products, paints as well as in articles such as clothes, furniture and electrical appliances. Therefore, the regulation has an impact on most companies across the EU.

REACH places the burden of proof on companies. To comply with the regulation, companies must identify and manage the risks linked to the substances they manufacture and market in the EU. They have to demonstrate to ECHA how the substance can be safely used, and they must communicate the risk management measures to the users.

If the risks cannot be managed, authorities can restrict the use of substances in different ways. In the long run, the most hazardous substances should be substituted with less dangerous ones.

REACH stands for Registration, Evaluation, Authorisation and Restriction of Chemicals. It entered into force on 1 June 2007.

How does REACH work?

REACH establishes procedures for collecting and assessing information on the properties and hazards of substances.

Companies need to register their substances and to do this they need to work together with other companies who are registering the same substance.

ECHA receives and evaluates individual registrations for their compliance, and the EU Member States evaluate selected substances to clarify initial concerns for human health or for the environment. Authorities and ECHA's scientific committees assess whether the risks of substances can be managed.

Authorities can ban hazardous substances if their risks are unmanageable. They can also decide to restrict a use or make it subject to a prior authorisation.

REACH's effect on companies

REACH impacts on a wide range of companies across many sectors, even those who may not think of themselves as being involved with chemicals.

In general, under REACH you may have one of these roles:

Manufacturer: If you make chemicals, either to use yourself or to supply to other people (even if it is for export), then you will probably have some important responsibilities under REACH.

Importer: If you buy anything from outside the EU/EEA, you are likely to have some responsibilities under REACH. It may be individual chemicals, mixtures for onwards sale or finished products, like clothes, furniture or plastic goods.

Downstream users: Most companies use chemicals, sometimes even without realising it, so you need to check your obligations if you handle any chemicals in your industrial or professional activity. You might have some responsibilities under REACH.

Companies established outside the EU: If you are a company established outside the EU, you are not bound by the obligations of REACH, even if you export your products into the customs territory of the European Union. The responsibility for fulfilling the requirements of REACH, such as registration, lies with the importers established in the European Union, or with the only representative of a non-EU manufacturer established in the European Union.


CMS Innovation Center: Data & Reports

All CMS Innovation Center Models are rigorously evaluated, and the CMS Innovation Center makes some model data available to researchers and other model data available publicly. Learn how to access model data and evaluation reports.

Get Model Data

The CMS Innovation Center (CMMI) makes model data available to ensure transparency and support external research and learning.

  • Access requires registration with associated fees.
  • Additional models (such as Next Generation ACO, Comprehensive ESRD Care, Pioneer ACO, and Million Hearts) may also be available by searching ResDAC directly.
  • Public Use Files on models are available through data.cms.gov (search by model name or ‘innovation’).
  • Individual CMMI model pages may also contain data.

View Evaluation Reports

Find evaluation reports for CMMI models by browsing or searching the CMS Innovation Center website.

Get more information about how CMMI conducts model evaluations, including the difference between model participant financial results and model evaluation spending results.

Additional Information

  • HealthData.gov: A public resource designed to bring together high-value datasets, tools, and applications using data about health and health care to support your need for better knowledge and to help you to solve problems. These datasets and tools have been gathered from agencies across the Federal government with the goal of improving health for all Americans.
  • CMS Data Compendium: The CMS Office of Information Products and Data Analytics produces an annual CMS Data Compendium to provide key statistics about CMS programs and national health care expenditures. The CMS Data Compendium contains historic, current, and projected data on Medicare enrollment and Medicaid recipients, expenditures, and utilization. Data pertaining to budget, administrative and operating costs, individual income, financing, and health care providers and suppliers are also included. National health expenditure data not specific to the Medicare or Medicaid programs is also included, making the CMS Data Compendium one of the most comprehensive sources of information available on U.S. health care finance. This CMS report is published annually in electronic form and is available for each year from 2002 through present.


What is evaluation?

There are many different ways that people use the term 'evaluation'. 

At BetterEvaluation, when we talk about evaluation, we mean:

any systematic process to judge merit, worth or significance by combining evidence and values

That means we consider a broad range of activities to be evaluations, including some you might not have thought of as 'evaluations' before. We might even consider you to be an evaluator, even if you have never thought of yourself as an evaluator before!

Different labels for evaluation

When we talk about evaluation, we also include evaluation known by different labels:

  • Impact analysis
  • Social impact analysis
  • Appreciative inquiry
  • Cost-benefit assessment

Different types of evaluation

When we talk about evaluation we include many different types of evaluation - before, during and after implementation, such as:

  • Needs analysis — which analyses and prioritises needs to inform planning for an intervention
  • Ex-ante impact evaluation — which predicts the likely impacts of an intervention to inform resource allocation
  • Process evaluation — which examines the nature and quality of implementation of an intervention
  • Outcome and impact evaluation — which examines the results of an intervention
  • Sustained and emerging impacts evaluations — which examine the enduring impacts of an intervention sometime after it has ended
  • Value-for-money evaluations — which examine the relationship between the cost of an intervention and the value of its positive and negative impacts
  • Syntheses of multiple evaluations — which combine evidence from multiple evaluations

Monitoring and evaluation

When we talk about evaluation we include discrete evaluations and ongoing monitoring, including:

  • Performance indicators and metrics
  • Integrated monitoring and evaluation systems

Evaluations by different groups

When we talk about evaluation we include evaluations done by different groups, such as:

  • External evaluators
  • Internal staff
  • Communities
  • A hybrid team

Evaluation for different purposes

When we talk about evaluation we include evaluations that are intended to be used for different purposes:

  • Formatively, to make improvements
  • Summatively, to inform decisions about whether to start, continue, expand or stop an intervention.

Formative evaluation is not the same as process evaluation. Formative evaluation refers to the intended use of an evaluation (to make improvements); process evaluation refers to the focus of an evaluation (how the intervention is being implemented).

As you can see, our definition of evaluation is broad. The resources on BetterEvaluation are designed with this in mind, and we hope they will help you in a range of evaluative activities.

How is this different to what other people mean by 'evaluation'?

Not everyone defines evaluation in this way: people come from diverse professional and educational backgrounds, training, and organisational contexts. Be aware that people might define evaluation differently, and consider the implications of the labels and definitions that are used.

For example, some organisations use a definition of evaluation that focuses only on understanding whether or not an intervention has met its goals. However, this definition would not include a process evaluation, which might be used to check the quality of implementation and provide timely information to guide improvements. And it would not include a more comprehensive impact evaluation that considered unintended impacts (positive and negative) as well as intended impacts identified as goals.

Some organisations refer only to formal evaluations that are contracted out to external evaluators, which leaves out important methods for self-evaluation, peer evaluation and community-led evaluation.

The American Evaluation Association has published a brief (4-page) statement defining evaluation as "a systematic process to determine merit, worth, value or significance".



IMAGES

  1. FREE Evaluation Report Template

    definition of a evaluation report

  2. Components of An Evaluation Report

    definition of a evaluation report

  3. How to Write Evaluation Reports: Purpose, Structure, Content

    definition of a evaluation report

  4. FREE 14+ Sample Evaluation Reports in Google Docs

    definition of a evaluation report

  5. how to write evaluation report

    definition of a evaluation report

  6. Test Evaluation Report |Professionalqa.com

    definition of a evaluation report

VIDEO

  1. What is evaluation? |Principles of evaluation process| Tools and techniques of evaluation|

  2. Lecture:1{ Introduction Of Definite Integration}

  3. What is Evaluation in M&E

  4. Theorems and Operations on Limits

  5. Evaluation Meaning in Hindi

  6. Difference between Assessment and Evaluation

COMMENTS

  1. How to Write Evaluation Reports: Purpose, Structure, Content

    What is an Evaluation Report? An evaluation report is a document that presents the findings, conclusions, and recommendations of an evaluation, which is a systematic and objective assessment of the performance, impact, and effectiveness of a program, project, policy, or intervention. The report typically includes a description of the evaluation's purpose, scope, methodology, and data sources ...

  2. Writing an Evaluation Report

    The purpose of an evaluation report is to provide an assessment and thorough analysis of a product, service, program, or policy. This assessment should adhere to some defined criteria and ...

  3. PDF What is program evaluation?

    Definition of Evaluation 1. An evaluation is an assessment, conducted as systematically and impartially as possible, of an activity, project, programme, strategy, policy, topic, theme, sector, operational area or institutional performance. It analyses the level of achievement of both expected and ... Evaluation aims to understand why — and to ...

  4. What Is Evaluation?: Perspectives of How Evaluation Differs (or Not

    The definition problem in evaluation has been around for decades (as early as Carter, 1971), and multiple definitions of evaluation have been offered throughout the years (see Table 1 for some examples). One notable definition is provided by Scriven (1991) and later adopted by the American Evaluation Association (): "Evaluation is the systematic process to determine merit, worth, value, or ...

  5. Constructing an evaluation report

    Typical problems with findings. Typical problems with conclusions. Typical problems with recommendations. Choose the best approach for structuring the report. Other key sections of the report. Reader-friendly style. Table 1: Suggested outline for an evaluation report. Table 2: The quick reference guide for a reader-friendly technical style.

  6. Evaluation report writing

    BetterEvaluation is part of the Global Evaluation Initiative, a global network of organizations and experts supporting country governments to strengthen monitoring, evaluation, and the use of evidence in their countries. The GEI focuses support on efforts that are country-owned and aligned with local needs, goals and perspectives.

  7. Evaluation.gov

    Evaluation 101 provides resources to help you answer those questions and more. You will learn about program evaluation and why it is needed, along with some helpful frameworks that place evaluation in the broader evidence context. Other resources provide helpful overviews of specific types of evaluation you may encounter or be considering ...

  8. Final reports

    Your detailed evaluation report. This online guide to creating final evaluation reports provides a setp-by-step approach to developing a final report. Evaluation report layout checklist. This checklist from Stephanie Evergreen distills the best practices in graphic design and has been particularly created for use on evaluation reports.

  9. What Is Evaluation?: Perspectives of How Evaluation Differs (or Not

    Program evaluation is the process of systematically gathering empirical data and contextual information about an intervention program—specifically answers to what, who, how, whether, and why questions that will assist in assessing a program's planning, implementation, and/or effectiveness. Figure 1.

  10. Guide: Academic Evaluations

    A Definition of Evaluation. Kate Kiefer, English Professor Like most specific assignments that teachers give, writing evaluations mirrors what happens so often in our day-to-day lives. Every day we decide whether the temperature is cold enough to need a light or heavy jacket; whether we're willing to spend money on a good book or a good movie ...

  11. PDF WHAT IS EVALUATION?

    evaluation team should have a significant vested interest in whether the results are good or bad). This is not always a requirement (e.g., managers in all kinds of organizations frequently report on the performance of their own units, prod-ucts, and/or people), but this credibility or independence issue is definitely one

  12. Evaluation Practice Handbook

    3.8 Preparing the inception report 48 Chapter 4. Conducting the evaluation 50 4.1 Identifying information needs and data collection methods 50 4.2 Briefing and supporting the evaluation team 56 4.3 Ensuring quality 58 Chapter 5. Reporting 61 5.1 Preparing the draft evaluation report 61 5.2 The final evaluation report 63 Chapter 6.

  13. PDF Reporting in Evaluation

    EVALUATION REPORTS Reconsider audiences throughout the evaluation. Audiences for reports need to be reconsidered during and after an evaluation. Usually, an evaluation's original design is modified as evaluation work progresses and may require new audience considerations. Finally, when an evaluation -has

  14. Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings

    This guide is one in a series of Evaluation Basic guides for evaluating heart disease and stroke prevention activities. This guide focuses on ensuring evaluation use through evaluation reporting. Various aspects of evaluation reporting can affect how information is used. Programs should consider stakeholder needs, the evaluation purpose, and ...

  15. Evaluation

    Evaluation. In common usage, evaluation is a systematic determination and assessment of a subject's merit, worth and significance, using criteria governed by a set of standards. It can assist an organization, program, design, project or any other intervention or initiative to assess any aim, realizable concept/proposal, or any alternative, to ...

  16. What is a Board Evaluation Report? (Overview, Definition, and Examples)

    A board evaluation report is a simple document that communicates the performance and effectiveness of a company's board of directors. The report typically covers the board's composition, communication, decision-making, and overall effectiveness. It also may include recommendations for improvements when it comes to the board's performance.

  17. Reporting

    Reporting. The evaluation reports should include relevant and comprehensive information structured in a manner that facilitates its use but also provide transparency in terms of the methods used and the evidence obtained to substantiate the conclusions and recommendations. Evaluation, by definition, answers evaluative questions, that is ...

  18. PDF Types of Evaluation

    outputs. You may conduct process evaluation periodically throughout the life of your program and start by reviewing the activities and output components of the logic model (i.e., the left side). Results of a process evaluation will strengthen your ability to report on your program and use information to improve future activities.

  19. EVALUATION

    EVALUATION meaning: 1. the process of judging or calculating the quality, importance, amount, or value of something…. Learn more.

  20. Evaluation Definition & Meaning

    evaluation: [noun] the act or result of evaluating : determination of the value, nature, character, or quality of something or someone.

  21. Clinical Evaluation: What is a Clinical Evaluation Report (CER)?

    A Clinical Evaluation Report (CER) is a comprehensive document that summarizes the results of the clinical evaluation process for a medical device. It provides a detailed analysis of the clinical data collected, along with an assessment of the device's safety, performance, and intended use. The CER serves as a critical tool for demonstrating ...

  22. EVALUATION

    EVALUATION definition: 1. the process of judging or calculating the quality, importance, amount, or value of something…. Learn more.

  23. Understanding REACH

    Understanding REACH. REACH is a regulation of the European Union, adopted to improve the protection of human health and the environment from the risks that can be posed by chemicals, while enhancing the competitiveness of the EU chemicals industry. It also promotes alternative methods for the hazard assessment of substances in order to reduce ...

  24. Evaluations & Research Reports

    Data & Reports. All CMS Innovation Center Models are rigorously and independently evaluated. Best practices and lessons learned from evaluation reports are often used to inform the next iterations of model tests. Get more information about how CMMI conducts model evaluations, including the difference between model participant financial results ...

  25. What is evaluation?

    A brief (4-page) overview that presents a statement from the American Evaluation Association defining evaluation as "a systematic process to determine merit, worth, value or significance". The statement covers the following areas: There are many different ways that people use the term 'evaluation'. At BetterEvaluation, when we talk about ...

  26. NTRS

    Evaluation, Analysis, and Application of Internal Strain-Gage Balance Data Experimental processes, analytical methods, and numerical algorithms are described that may be used to predict the forces and moments of an internal strain-gage balance during a wind tunnel test. First, the control volume model of a strain-gage balance and the concepts of load state, load space, and output space are ...

  27. Water security

    Freshwater is the most important resource for humankind, cross-cutting all social, economic and environmental activities. It is a condition for all life on our planet, an enabling or limiting factor for any social and technological development, a possible source of welfare or misery, cooperation or conflict.