The Evidence Portal

Evidence Portal Technical Specifications

The Evidence Portal Technical Specifications (PDF, 2.2 MB) describe the method researchers follow to conduct an evidence review for the Evidence Portal. They provide guidance, explanations and examples to ensure the process is applied consistently.

The Technical Specifications ensure: 

  • research questions are clear, relevant and practicable 
  • search methods are systematic, transparent and replicable 
  • studies are assessed using a rigorous and consistent process 
  • identified programs are evaluated using the same evidence rating scale 
  • core components and flexible activities are identified using a consistent process 
  • summaries of programs and activities are clear, simple and easy to apply. 

The following principles guided the development of the Technical Specifications: 

  • Rigorous – informed by high-quality standards for the assessment of evidence, programs and practices.
  • Usable – details clear practices and activities that are easy to understand and implement.
  • Replicable – describes processes and procedures that external stakeholders can replicate.
  • Transparent – sets out explicit guidelines for data collection and decision making so any user of the Evidence Portal can understand how it was populated.

You can download a copy of the Technical Specifications here: Evidence Portal Technical Specifications (PDF, 2.2 MB).

The Technical Specifications (PDF, 2.2 MB) set stringent criteria for the types of studies that can be included in an evidence review. All evidence reviews only include the following study types to identify programs:
  • Systematic reviews (with or without meta-analyses): Systematic reviews summarise and synthesise the findings of multiple studies. Meta-analyses combine the results of many studies of the same program into a single evaluation.
  • Randomised controlled trials (RCTs): RCTs compare people receiving a service (treatment group) to people who do not receive a service (control group) to see if there is a significant difference in their outcomes. RCTs randomly assign participants to a treatment group or control group. This means they have greater control over factors that might influence a person’s outcomes. This is the best way to understand the impact of a program.
  • Quasi-experimental designs (QEDs): QEDs are similar to RCTs. They compare the outcomes of people who have received a service and people who haven’t. However, they don’t randomly allocate people to a treatment or control group.
  • Dismantling studies: These studies identify the various components of a program and test the effectiveness of each component on its own.

Programs relevant to the evidence review are identified in the current literature and evaluated for their effectiveness. 

Evidence-informed programs included in the Evidence Portal are those from studies that meet the above criteria and that were found to have a positive effect on at least one client outcome.

What are core components?

Core components are the elements of a program that are common across evidence-informed programs. 

What are flexible activities?

Flexible activities are examples from the literature of ways each core component can be implemented. 

The Evidence Rating Scale

The Technical Specifications include an evidence rating scale. This scale is used to rate the quality of research evidence for each program. 

The scale adapts methodologies from other publicly available evidence rating scales.

The evidence rating scale

Each rating is listed below, followed by the criteria a program's evidence must meet to receive it.
Well supported by research evidence
  • At least one high-quality* systematic review with meta-analyses based on RCT studies reports statistically significant positive effects for at least one outcome.
  • No studies show statistically significant adverse effects.
Supported by research evidence
  • At least two high-quality RCT/QED studies report statistically significant positive effects for at least one outcome, AND
  • Fewer RCT studies of similar size and quality show no observed effects than show statistically significant positive effects for the same outcome(s), AND
  • No RCT studies show statistically significant adverse effects.  
Promising research evidence
  • At least one high-quality RCT/QED study reports statistically significant positive effects for at least one outcome, AND
  • Fewer RCT/QED studies of similar size and quality show no observed effects than show statistically significant positive effects, AND
  • No RCT/QED studies show statistically significant adverse effects.
Mixed research evidence (with no adverse effects)
  • At least one high-quality RCT/QED study reports statistically significant positive effects for at least one outcome, AND
  • As many or more RCT/QED studies of similar size and quality show no observed effects as show statistically significant positive effects, AND
  • No RCT/QED studies show statistically significant adverse effects.
Mixed research evidence (with adverse effects)
  • At least one high-quality RCT/QED study reports statistically significant adverse effects for at least one outcome, AND
  • As many or more RCT/QED studies show no observed effects as show statistically significant adverse effects, AND/OR
  • At least one high-quality RCT/QED study shows statistically significant positive effects for at least one outcome.
Evidence fails to demonstrate effect
  • At least one high-quality systematic review with meta-analyses based on RCT/QED studies reports no observed effects for all reported outcomes, OR
  • At least one high-quality RCT study reports no observed effects for all reported outcomes, AND
  • Criteria are not met for mixed research evidence (with or without adverse effects).
Evidence demonstrates adverse effects
  • At least one high-quality systematic review with meta-analyses based on RCT/QED studies reports statistically significant adverse effects for at least one outcome, OR
  • At least one high-quality RCT/QED study reports statistically significant adverse effects for at least one outcome, AND
  • Fewer RCT/QED studies show no observed effects than show statistically significant adverse effects, AND/OR
  • No RCT/QED studies show statistically significant positive effects.
*On this rating scale, high-quality indicates studies with low-to-moderate risk of bias. 
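
Read as a decision procedure, the scale can be sketched in code. The Python sketch below is illustrative only: it is a simplified reading of the criteria above, the field names and the ordering of the checks are assumptions, and the Technical Specifications remain the authoritative rules.

```python
from dataclasses import dataclass

@dataclass
class EvidenceSummary:
    """Counts of high-quality studies (low-to-moderate risk of bias)
    for one program. Field names are illustrative, not the Portal's."""
    sr_meta_positive: int = 0  # RCT-based systematic reviews with meta-analysis reporting positive effects
    rct_qed_positive: int = 0  # RCT/QED studies reporting significant positive effects
    rct_qed_null: int = 0      # RCT/QED studies of similar size/quality reporting no observed effects
    rct_qed_adverse: int = 0   # RCT/QED studies reporting significant adverse effects
    rct_positive: int = 0      # subset of rct_qed_positive that are RCTs

def rate_evidence(s: EvidenceSummary) -> str:
    """Apply the tiers from strongest to weakest (simplified reading)."""
    if s.rct_qed_adverse:
        if s.rct_qed_positive:
            return "Mixed research evidence (with adverse effects)"
        return "Evidence demonstrates adverse effects"
    if s.sr_meta_positive:
        return "Well supported by research evidence"
    if s.rct_positive >= 2 and s.rct_qed_null < s.rct_qed_positive:
        return "Supported by research evidence"
    if s.rct_qed_positive >= 1 and s.rct_qed_null < s.rct_qed_positive:
        return "Promising research evidence"
    if s.rct_qed_positive >= 1:
        return "Mixed research evidence (with no adverse effects)"
    return "Evidence fails to demonstrate effect"

print(rate_evidence(EvidenceSummary(rct_qed_positive=2, rct_positive=2)))
# -> Supported by research evidence
```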

Evidence Review Process

The process to conduct an evidence review for the Evidence Portal is outlined below.

For more detailed information see the Evidence Portal Technical Specifications (PDF, 2.2 MB).

Step 1: Define research question and scope

Define the research question, key concepts and terms. Identify what will be in and out of scope and what databases will be searched.
Step 2: Search for evidence

Develop a search strategy to identify literature relevant to the research question. Three comprehensive and widely used databases must be searched. Additional databases can be used as needed.

Establish data management processes to record and manage literature searches and screening.
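
As a rough illustration of what a search strategy can look like in practice, the Python sketch below builds a Boolean query from groups of synonyms. The terms and grouping are hypothetical placeholders, not the Portal's actual strategy.

```python
# Each inner list holds synonyms for one key concept. Synonyms are
# OR'd together and the concept groups are AND'd -- a common pattern
# in systematic searching. The terms are placeholders, not the
# Portal's actual strategy.
concepts = [
    ["parenting program*", "family intervention*"],        # intervention terms
    ["child*", "adolescen*"],                              # population terms
    ["randomised controlled trial", "systematic review"],  # study design terms
]

query = " AND ".join(
    "(" + " OR ".join(f'"{term}"' for term in group) + ")"
    for group in concepts
)
print(query)  # paste into each database's advanced-search interface
```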

Step 3: Screen studies

Screen studies for scope and study design. This ensures studies are relevant to the research question and fit the selection criteria. Any studies that do not meet the criteria are excluded from the evidence review.
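
A minimal sketch of this screening logic in Python, assuming a hypothetical record format; the actual screening criteria are set out in the Technical Specifications.

```python
ELIGIBLE_DESIGNS = {
    "systematic review",
    "randomised controlled trial",
    "quasi-experimental design",
    "dismantling study",
}

def passes_screening(record: dict) -> bool:
    """Keep a record only if it is in scope and uses an eligible design.
    `record` is a hypothetical format, e.g. exported from a reference manager."""
    return record.get("in_scope", False) and record.get("design") in ELIGIBLE_DESIGNS

studies = [
    {"title": "Trial A", "design": "randomised controlled trial", "in_scope": True},
    {"title": "Survey B", "design": "cross-sectional survey", "in_scope": True},
]
included = [s for s in studies if passes_screening(s)]  # Survey B is excluded
```
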
Step 4: Assess for risk of bias

Assess each study for risk of bias. This is important to make sure the Evidence Portal only includes the highest quality evidence. Each study is checked for things like study design, follow-up rates and sampling.

Studies are categorised according to low, moderate or high risk of bias. Those with a high risk of bias are excluded from the evidence review.
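
To illustrate the categorisation, here is a small Python sketch that collapses per-domain appraisal ratings into a single risk-of-bias category using a hypothetical worst-rating-wins rule; the Technical Specifications define the actual appraisal tools and thresholds.

```python
def overall_risk(domain_ratings: dict) -> str:
    """Collapse per-domain appraisal ratings into one category.
    The worst domain rating wins -- a hypothetical rule for illustration."""
    order = ["low", "moderate", "high"]
    return max(domain_ratings.values(), key=order.index)

appraisal = {"study design": "low", "follow-up rates": "moderate", "sampling": "low"}
if overall_risk(appraisal) == "high":
    print("Exclude from the evidence review")
else:
    print("Retain:", overall_risk(appraisal), "risk of bias")
```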

Step 5: Extract data

Data is extracted from all the included studies using a data extraction template. The template includes relevant information about the study and the program, e.g. sample size and characteristics, program details, client outcomes and effectiveness.
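
As an illustration of what such a template can capture, the sketch below models one extraction row as a Python dataclass; the field names are assumptions based on the categories listed above, not the template's actual layout.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of a hypothetical data extraction template; the fields
    mirror the categories named above, not the template's exact layout."""
    study_id: str
    program_name: str
    sample_size: int
    sample_characteristics: str
    program_details: str
    client_outcomes: list = field(default_factory=list)
    effect_direction: str = ""  # e.g. "positive", "no effect", "negative"

row = ExtractionRecord(
    study_id="smith2020",
    program_name="Program A",
    sample_size=240,
    sample_characteristics="families with children aged 0-5",
    program_details="10-week home-visiting program",
    client_outcomes=["child safety", "parenting skills"],
    effect_direction="positive",
)
```
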
Step 6: Rate the evidence for programs

After the data extraction is complete, we identify and rate the evidence for each program.

Programs are identified from the final list of studies that met the inclusion criteria. The Evidence Rating Scale is used to rate the evidence for each program. First, the evidence is rated for each outcome domain. Then each program is given an overall evidence rating and an overall direction of effect (positive, mixed, no effect or negative).

This helps us understand how strong the evidence is for each program, and what type of effect the program had on client outcomes.

Summaries of each program are then written. Each summary clearly describes each program, the target group, the outcomes it contributes to, the strength of the evidence and any implementation considerations. Summaries of evidence-informed programs – that is, those that were found to have a positive effect on at least one client outcome – are included on the Evidence Portal.
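
A minimal sketch of how per-domain effect directions might be collapsed into an overall direction of effect; the aggregation rule here is a hypothetical illustration, not the procedure defined in the Technical Specifications.

```python
def overall_direction(domain_effects: dict) -> str:
    """Collapse per-domain effect directions into one overall label.
    A hypothetical aggregation rule for illustration only."""
    effects = set(domain_effects.values())
    if "positive" in effects and "negative" in effects:
        return "mixed"
    if effects == {"positive"}:
        return "positive"
    if "negative" in effects:
        return "negative"
    if "positive" in effects:
        return "mixed"  # positive in some domains, no effect in others
    return "no effect"

print(overall_direction({"safety": "positive", "wellbeing": "no effect"}))  # mixed
```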

Step 7: Identify core components and flexible activities

Core components and flexible activities are extracted from the evidence-informed programs identified. This involves conducting a content analysis of the program summaries to identify the types of activities in each program and the way those activities are implemented. The core components and flexible activities are tested with key stakeholders.

A summary of the core components is written to feature on the Evidence Portal. It describes what the core components are, relevant target groups and client outcomes. Summaries of each flexible activity are also written. These describe the activity, who it’s for and how it can be implemented.
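
To illustrate the content-analysis step, the sketch below counts activity types across hypothetical coded program summaries; activity types common to several programs are candidates for core components. The program names and activity labels are invented for illustration.

```python
from collections import Counter

# Hypothetical coded program summaries: each evidence-informed program
# is tagged with the activity types found in its summary.
coded_programs = {
    "Program A": {"home visiting", "parent coaching"},
    "Program B": {"parent coaching", "case planning"},
    "Program C": {"parent coaching", "home visiting"},
}

counts = Counter(a for acts in coded_programs.values() for a in acts)

# Activity types common across programs are candidate core components;
# the differing ways programs deliver them become flexible activities.
core_candidates = sorted(a for a, n in counts.items() if n >= 2)
print(core_candidates)  # ['home visiting', 'parent coaching']
```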

Step 8: Summarise evidence review findings

An Evidence to Action Note is written that clearly outlines the purpose of the evidence review, key findings and implications for policy and practice.

Expertise required to use the Technical Specifications

A broad range of technical skills, competencies and experience is required to use the Evidence Portal Technical Specifications (PDF, 2.2 MB).

A research librarian is needed to apply and modify search strategies as required. Technical staff are needed to screen studies, extract data and critically appraise studies. Subject-matter experts are needed to identify core components and flexible activities and to test these with relevant stakeholders. Additionally, project management support is required.

Last updated: 13 Jun 2022

