The Evidence Portal

Understanding the 'Strength of Evidence' rating

In the Evidence Portal, each program is given a ‘strength of evidence’ rating. This helps us understand the quality and volume of evidence that sits behind each program. The Evidence Portal uses the Evidence Rating Scale detailed in the Technical Specifications.

The Evidence Rating Scale

There are 7 different ratings a program can receive:

  • Well supported by research evidence
  • Supported research evidence
  • Promising research evidence
  • Mixed research evidence (with no adverse effects)
  • Mixed research evidence (with adverse effects)
  • Evidence fails to demonstrate effect
  • Evidence demonstrates adverse effects

The rating a program receives depends on the:

  • Type of study conducted (e.g. systematic review with meta-analysis, randomised controlled trial or quasi-experimental study)
  • Number of studies that have evaluated the program
  • Outcomes the study reports on, and whether the findings are significant or non-significant
  • Type of effect the study has on outcomes, i.e. whether they’re positive, negative or neutral.

The approach for rating evidence considered and adapted the methods of other publicly available evidence rating scales, including the Early Intervention Foundation Evidence Standards and the What Works Clearinghouse Procedures and Standards Handbook (Version 4.0) (United States Department of Education, 2017).

Rating outcomes and programs

To rate a program, we first rate each outcome domain the program reports on. We review the outcomes each study reports on for a program, identify the direction of effect, and then identify where the outcome sits on the Evidence Rating Scale. See Section 2.6.4 in the Technical Specifications.

Once we’ve rated the evidence for each outcome domain, we then give each program an overall evidence rating. This is done by looking at the outcomes the program contributes to and how many studies have evaluated the program to identify where the program sits on the Evidence Rating Scale. See Section 2.6.5 in the Technical Specifications.

Understanding the Evidence Rating Scale

What does ‘well supported by research evidence’ mean?

A program is given the rating ‘well supported by research evidence’ if a systematic review with a meta-analysis has been conducted on the program and the meta-analysis found that the program had a positive impact on client outcomes.

What does ‘supported research evidence’ mean?

A program is given the rating ‘supported research evidence’ if at least two evaluations of the same program have been conducted and those evaluations show the program had a positive impact on client outcomes.  A program will NOT be given this rating if any adverse (negative) outcomes are found.

What does ‘promising research evidence’ mean?

A program is given the rating ‘promising research evidence’ if at least one evaluation shows the program has a positive impact on client outcomes. A program will NOT be given this rating if any adverse (negative) outcomes are found.

What does ‘mixed research evidence’ mean?

Programs can have mixed evidence if at least one client outcome was positive, another was neutral and/or another was negative. 

Programs with at least one negative outcome and a neutral or positive outcome are rated as ‘mixed research evidence (with adverse effects)’. Caution should be used in implementing these programs. This is because the program could have a negative impact on a particular outcome for your clients. However, we can still use information about these programs to understand what doesn’t work. 

Programs with a combination of positive and neutral outcomes are rated as ‘mixed research evidence (with no adverse effects)’. Caution should also be used when implementing these programs. You should carefully review the client outcomes the program can have a positive impact on and the outcomes it is unlikely to achieve.

What does ‘evidence fails to demonstrate effect’ mean?

A program is given the rating ‘evidence fails to demonstrate effect’ if an evaluation shows that the program did not have a positive or negative effect on client outcomes. 

While the program may not be effective in the specific context it was evaluated in, information about that program could still be useful to help us understand what does and doesn’t work for our clients. If the evidence shows that a program has no benefit, then it is recommended to consider alternative programs or activities.

What does ‘evidence demonstrates adverse effects’ mean?

A program is given the rating ‘evidence demonstrates adverse effects’ if an evaluation shows that the program only had a negative impact on client outcomes. 

It is not recommended that these programs are implemented. However, they have been included on the Evidence Portal so we can understand what programs and activities don’t work. 

The Evidence Rating Scale

Each rating and the criteria a program must meet:
Well supported by research evidence
  • At least one high-quality* systematic review with meta-analyses based on randomised controlled trial (RCT) studies reports statistically significant positive effects for at least one outcome.
  • No studies show statistically significant adverse effects.
Supported research evidence 
  • At least two high-quality RCT/quasi-experimental design (QED) studies report statistically significant positive effects for at least one outcome, AND
  • Fewer RCT studies of similar size and quality show no observed effects than show statistically significant positive effects for the same outcome(s), AND
  • No RCT studies show statistically significant adverse effects.  
Promising research evidence
  • At least one high-quality RCT/QED study reports statistically significant positive effects for at least one outcome, AND
  • Fewer RCT/QED studies of similar size and quality show no observed effects than show statistically significant positive effects, AND
  • No RCT/QED studies show statistically significant adverse effects.
Mixed research evidence (with no adverse effects)
  • At least one high-quality RCT/QED study reports statistically significant positive effects for at least one outcome, AND
  • An equal number or more RCT/QED studies of similar size and quality show no observed effects than show statistically significant positive effects, AND
  • No RCT/QED studies show statistically significant adverse effects.
Mixed research evidence (with adverse effects)
  • At least one high-quality RCT/QED study reports statistically significant adverse effects for at least one outcome, AND
  • An equal number or more RCT/QED studies show no observed effects than show statistically significant adverse effects, AND/OR
  • At least one high-quality RCT/QED study shows statistically significant positive effects for at least one outcome.
Evidence fails to demonstrate effect
  • At least one high-quality systematic review with meta-analyses based on RCT/QED studies reports no observed effects for all reported outcomes, OR
  • At least one high-quality RCT study reports no observed effects for all reported outcomes, AND
  • Criteria are not met for mixed research evidence (with or without adverse effects).
Evidence demonstrates adverse effects
  • At least one high-quality systematic review with meta-analyses based on RCT/QED studies reports statistically significant adverse effects for at least one outcome, OR
  • At least one high-quality RCT/QED study reports statistically significant adverse effects for at least one outcome, AND
  • Fewer RCT/QED studies show no observed effects than show statistically significant adverse effects, AND
  • No RCT/QED studies show statistically significant positive effects.
*On this rating scale, high-quality indicates studies with low-to-moderate risk of bias. 
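The criteria above amount to a decision procedure that cascades from the strongest rating down. As a minimal sketch only (in Python, with hypothetical names like `rate_program` and simplified study records — the real criteria also compare study size and quality, which are omitted here), the cascade might look like:

```python
# Hypothetical sketch of the Evidence Rating Scale decision logic.
# Each study is a dict: "design" is "meta_analysis", "rct" or "qed";
# "high_quality" means low-to-moderate risk of bias; "effect" is
# "positive", "none" or "adverse". This simplifies the published
# criteria (e.g. the "similar size and quality" comparison is ignored).

def rate_program(studies):
    hq = [s for s in studies if s["high_quality"]]
    pos = [s for s in hq if s["effect"] == "positive"]
    null = [s for s in hq if s["effect"] == "none"]
    adverse = [s for s in hq if s["effect"] == "adverse"]

    any_adverse = len(adverse) > 0
    # A positive meta-analytic finding anchors the top rating.
    meta_pos = any(s["design"] == "meta_analysis" for s in pos)

    if meta_pos and not any_adverse:
        return "Well supported by research evidence"
    if len(pos) >= 2 and len(null) < len(pos) and not any_adverse:
        return "Supported research evidence"
    if len(pos) >= 1 and len(null) < len(pos) and not any_adverse:
        return "Promising research evidence"
    if len(pos) >= 1 and len(null) >= len(pos) and not any_adverse:
        return "Mixed research evidence (with no adverse effects)"
    if any_adverse and len(pos) >= 1:
        return "Mixed research evidence (with adverse effects)"
    if any_adverse:
        return "Evidence demonstrates adverse effects"
    return "Evidence fails to demonstrate effect"
```

For example, a program with a single high-quality RCT showing a positive effect and no adverse findings would fall through the first two branches and be rated ‘promising research evidence’.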
Last updated: 28 Mar 2022


