When to evaluate?

  • Identifying and prioritising if and when an evaluation is required
  • Different types of evaluation
  • Evaluation at different stages of the policy cycle

The appropriate timing and type of evaluation required to support continuous improvement, accountability and decision-making needs to be determined on a case-by-case basis to ensure the overall approach is fit for purpose.

It is not feasible, cost effective or appropriate to fully evaluate all government activities and programs. The cost of evaluation must be balanced against the risk of not evaluating, noting that sometimes performance monitoring by itself will be sufficient to meet the performance reporting requirements under the Public Governance, Performance and Accountability Act 2013.

In some cases, well-designed data collection and performance monitoring established during the design phase of a new or amended program or activity can help to refine it over time. In other cases, evaluation will need to be more comprehensive to assess whether an activity or program is appropriate, effective and/or efficient.

Taking a strategic, risk-based approach can help to:

  • identify and prioritise when an evaluation is required to complement or enhance routine performance monitoring and reporting
  • determine which type of evaluation is most appropriate
  • identify what key questions need to be addressed in an evaluation
  • incorporate the right level of evaluation planning at the initial program design stage to ensure there is an appropriate evidence base for future evaluations or reviews
  • collect the required data for monitoring and evaluation throughout program implementation and align this to existing data collections where possible.
     

Identifying and prioritising if and when an evaluation is required

What to evaluate, when to evaluate, and the type of evaluation needed in a particular circumstance depends on a range of factors. A strategic, risk-based approach can help to determine when and how evaluations are selected, prioritised and scaled based on the value, impact and risk profile of a particular government activity or program.

The topic, scope and size of each proposed evaluation will vary, so these issues will not apply in the same way or have the same relative importance in all evaluations.

Government or entity priority

Is the activity, program or evaluation topic a government initiative or directly connected to a government priority or an entity's objectives? Is there a Cabinet or ministerial directive to undertake a comprehensive evaluation?

Is there a Ministerial Statement of Expectations?

Stakeholder priority

Does the activity, program or evaluation topic relate to a priority issue for the sector or other key stakeholders?
Does the activity or program have an important relationship to other program areas?

Monitoring, review and stakeholder feedback

Have issues with the activity or program objectives, implementation or outcomes been identified through monitoring, review or stakeholder feedback? Does performance information suggest areas where improvements are required?

Has termination, expansion, extension or change in the activity or program been proposed?

High profile, sensitivity or cost

Does the activity, program or evaluation topic have a high profile, high sensitivity or a high cost?
Has it attracted significant public attention or criticism?

Evaluation commitment

Is there a commitment to the evaluation in a new policy proposal, an Australian Government budget process or a public statement?
Are there any sunsetting legislative instruments or Regulation Impact Statement (RIS) requirements that apply?

Previous evaluations

Has the activity, program or topic been evaluated previously?

This may help determine if a new evaluation is worthwhile, particularly if:

  • some time has elapsed since the previous evaluation
  • the previous evaluation pointed to a need for change
  • there has been a significant change in the program or activity.

Other relevant activity

Does the evaluation overlap with or duplicate relevant monitoring, review or audit activity?

Internal/external

Would the evaluation be most appropriately conducted internally, by external providers or a combination of both?

Timeframes

When should the evaluation be delivered to best inform decision making?

Resources and funding

What are the expected duration, costs and resource requirements?

Data and information sources

What relevant data and information sources are available?
What new sources may be required?

Impact

What are the expected impacts for an entity's activities, programs, policies, processes and stakeholders?

Risk – conducting evaluation

What are the potential risks associated with conducting the evaluation?

Risk – not conducting evaluation

What are the potential risks associated with not conducting the evaluation?

Other considerations

Are there other factors specific to the evaluation that need to be considered?
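
Taken together, these considerations can support a simple triage exercise. As a minimal sketch, assuming an entity chooses its own criteria and weights (the names, weights and ratings below are illustrative only, not prescribed by any Commonwealth framework), a weighted scoring approach might look like this:

```python
# Illustrative only: the criteria and weights below are hypothetical,
# not prescribed weightings; they could be agreed in a planning workshop.
CRITERIA_WEIGHTS = {
    "government_or_entity_priority": 3,
    "stakeholder_priority": 2,
    "issues_from_monitoring_or_feedback": 2,
    "profile_sensitivity_or_cost": 3,
    "evaluation_commitment": 3,
    "time_since_last_evaluation": 1,
    "risk_of_not_evaluating": 3,
}

def priority_score(ratings: dict[str, int]) -> int:
    """Combine per-criterion ratings into a weighted total.

    ratings maps a criterion name to a rating from 0 (not applicable)
    to 3 (strong driver for evaluation).
    """
    return sum(CRITERIA_WEIGHTS[name] * rating for name, rating in ratings.items())

# Example: a high-cost program with a budget commitment to evaluate.
example = {
    "government_or_entity_priority": 2,
    "profile_sensitivity_or_cost": 3,
    "evaluation_commitment": 3,
    "risk_of_not_evaluating": 2,
}
print(priority_score(example))  # 2*3 + 3*3 + 3*3 + 2*3 = 30
```

Programs with higher scores would be candidates for earlier or more comprehensive evaluation. The point is to make the prioritisation explicit and repeatable, not to replace judgement.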

Different types of evaluation

Different types of evaluations draw on a range of methods and tools to measure and assess performance. These approaches can be organised in different ways, with common distinctions made between formative and summative evaluations. Rapid approaches are also being used in the public sector to meet the information needs of decision-makers in uncertain, resource-constrained environments.

Questions about the level of need, policy design and implementation/process improvements are usually best answered during the early design and implementation/delivery stages of a program or activity (these evaluation approaches and techniques are generally referred to as formative evaluation).

Questions about program outcomes and impacts are usually best answered near or at the end of the policy/program or after it has matured (these evaluation approaches and techniques are generally referred to as summative evaluation).

What type of evaluation is best suited in a particular situation depends on a combination of:

  • the stage and maturity of the program or activity
  • the issue or question being investigated
  • what data or information is already available
  • the timing of when evaluation findings are required to support continuous improvement, accountability or decision-making.

It is important to select tools and approaches that are fit for purpose based on the specific program or activity and the purpose of the evaluation.
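
As a minimal sketch of how these factors combine, the formative/summative distinction above can be expressed as a simple lookup (the stage labels and suggested types below are illustrative, not an official taxonomy):

```python
# A minimal sketch of the formative/summative distinction described above.
# Stage labels and suggested types are illustrative, not an official taxonomy.

def suggested_evaluation_type(stage: str, urgent: bool = False) -> str:
    """Map a program's stage in the policy cycle to a broad evaluation type."""
    if urgent:
        # Rapid approaches run on expedited timeframes (roughly 10 days to 6 months).
        return "rapid evaluation (e.g. real-time or rapid-cycle evaluation)"
    mapping = {
        "design": "formative: needs assessment, program logic, baseline data",
        "implementation": "formative: process or implementation evaluation",
        "mature": "summative: outcome, impact or economic evaluation",
    }
    return mapping.get(stage, "unclear stage: revisit the purpose and key questions")

print(suggested_evaluation_type("design"))
print(suggested_evaluation_type("mature"))
print(suggested_evaluation_type("implementation", urgent=True))
```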

See How to evaluate for information about things to consider at each stage of planning and conducting an evaluation, including defining its purpose, areas of focus and design.

Evaluation at different stages of the policy cycle

Evaluations can be used to support continuous improvement, risk management, accountability and decision-making at various stages in the policy cycle. One way of doing this is through a program logic model, which shows how a program works by drawing out the relationships between resources, activities and outcomes. While a program logic is presented in a linear fashion, in practice there are continual feedback loops where evaluation findings and performance information help to inform continuous improvements in policy development and program design.
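
As a minimal sketch, a program logic for a hypothetical job-skills program (all entries below are invented for illustration) might be captured as:

```python
# A hypothetical program logic for an invented job-skills program, showing
# the chain from resources (inputs) through activities to outcomes.
program_logic = {
    "inputs": ["program funding", "training staff", "IT systems"],
    "activities": ["deliver training courses", "mentor participants"],
    "outputs": ["courses delivered", "participants completing training"],
    "short_term_outcomes": ["improved participant skills"],
    "medium_term_outcomes": ["participants move into employment"],
    "long_term_outcomes": ["sustained workforce participation"],
}

for stage, items in program_logic.items():
    print(f"{stage}: {', '.join(items)}")
```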

Broadly, the stages when an evaluation is conducted include:

  • Before a program or activity is implemented (i.e. in the early design phase)
  • During the implementation and/or ongoing delivery phase of a program or activity
  • After a program or activity has been in operation for some time
  • Rapid evaluation of a program or activity to inform urgent decision-making.

Before a program or activity is implemented (i.e. in the early design phase)

Questions: Will it work? To what extent is the need being met? What can be done to address this need? What is the current state?

Areas of focus:

  • needs assessment
  • problem statement
  • supporting design
  • testing assumptions and theory
  • feasibility (in terms of relevance and performance) of multiple scenarios or options
  • establishing baseline data and information to assess impacts over time
  • developing a program logic to clarify the rationale for a specific proposal
  • relevant stakeholders
  • inputs and outputs
  • activities
  • anticipated short-term, medium-term and long-term outcomes.

Program characteristics: Program is innovative and in development – exploring, creating, emerging.

Type of evaluation/other terms: Implementation readiness assessment, formative evaluation.

Commonwealth requirements: Before an activity or program is implemented, entities need to plan how an activity or program will be evaluated in order to:

  • meet implementation planning requirements in the Budget and Cabinet processes
  • meet policy requirements in the Regulatory Impact Assessment Framework, the Charging Framework and the Commonwealth Grants Policy (where applicable)
  • meet legislative requirements to measure, assess, and report on performance under the Commonwealth Performance Framework
  • strengthen program design and ensure the required data for performance monitoring and evaluation can be collected throughout implementation and aligned to existing data collections, where possible.

Risk and assurance review processes, including appropriate use of the Risk Potential Assessment Tool (RPAT) and the two-pass capital works and ICT reviews, are also required for proposals that meet certain risk and materiality thresholds.

During the implementation and/or ongoing delivery phase of a program or activity

Questions: Is it working? Is the program or activity operating as planned? What can be learned?

Areas of focus: Feedback to support early course correction, and provide insight into the program or activity’s operations, implementation and service delivery.

Program characteristics: Program is forming and under refinement – improving, enhancing, standardising.

Types of evaluation/other terms:

  • process evaluation
  • formative evaluation
  • delivery evaluation
  • implementation evaluation
  • post-implementation review
  • developmental evaluation
  • action-research.

Commonwealth requirements: During the delivery of a program or activity, Commonwealth entities have an ongoing legislative requirement to measure, assess, and report on their performance under the Commonwealth Performance Framework.

At this stage of the policy cycle, effective, fit for purpose evaluation approaches can be used:

  • to improve the design, delivery and operational processes of an activity or program
  • to address data gaps not identified or properly considered during the initial program design stage – action to address these gaps should take place as early as possible in the implementation stage of a program or activity
  • in situations where issues with the activity or program objectives, delivery or outcomes have been identified through monitoring, review or stakeholder feedback
  • to support continuous improvement by complementing or enhancing the quality and robustness of an entity's performance information.

After a program or activity has been in operation for some time

Questions: Did it work? Is the program or activity achieving its objectives? What has the impact been? What are the benefits relative to the costs?

Areas of focus:

  • assessing performance
  • identifying lessons learned and improvements
  • evaluating the appropriateness, efficiency and effectiveness of the program or activity.

Program characteristics: Program is stabilising, well established, mature and builds on data collected in earlier stages.

Type of evaluation/other terms: Outcome evaluation, impact evaluation, summative evaluation, ex-post, theory of change evaluation, economic evaluation.

Commonwealth requirements: Commonwealth entities have an ongoing legislative requirement to measure, assess, and report on their performance under the Commonwealth Performance Framework.

Once a program or activity has matured and been in operation for some time, effective, fit for purpose evaluation approaches can be used:

  • to ensure value for money for taxpayers, consider equity measures, and assess whether something else could have worked better (for less cost)
  • to address data gaps not identified or not properly considered during the initial program design stage - action to address these gaps should take place as early as possible in the implementation stage of a program or activity
  • in situations where issues with the activity or program objectives, implementation or outcomes have been identified through monitoring, review or stakeholder feedback
  • to support continuous improvement by complementing or enhancing the quality and robustness of an entity's performance information.

Rapid evaluation of a program or activity to inform urgent decision-making

Questions: What evidence is available? What action is required? What caveats do I need to put on the findings?

Areas of focus: An approach designed to quickly and systematically conduct an evaluation when time or resources are limited.

Program characteristics: The common feature of this approach is the expedited implementation timeframes, which generally range from 10 days to 6 months, for situations where a short-term or immediate outcome is expected, or a quick decision is required.

These methods have been used in multiple settings including public health, emergency management, international development and agriculture as a way to deliver program evaluation findings quickly to inform decision making, for example in a public health crisis where improvement in infection control is expected to happen within a short timeframe.

Types of evaluation/other terms: Real time evaluations, rapid feedback evaluation, rapid evaluation methods, rapid-cycle evaluation, rapid appraisal.

Commonwealth requirements: Decision-makers (i.e. government, responsible ministers, accountable authorities, and senior managers) will determine when a rapid evaluation is required to support urgent action, decision-making and accountability.

For more information, see Rapid Evaluation | Better Evaluation



Page 2

A critical step for turning your evaluation into meaningful information that supports continuous improvement, accountability and decision-making is summarising and discussing the main findings. This usually takes the form of an evaluation report, but other products may also be required to meet the needs of different stakeholders (e.g. a plain English summary document, different language versions depending on the stakeholder cohort, or a presentation for participants, staff or delivery partners). The report findings should answer the evaluation questions established during the planning phase, identify any implementation challenges or limitations, and help decision-makers to understand whether the program or activity is on track and meeting its objectives. An evaluation report typically makes constructive, actionable findings/recommendations and provides lessons learned to support continuous improvement. To have maximum impact, the report should meet the needs of different and diverse stakeholder groups.

What do the results tell you?

  • How do the findings of your evaluation apply to the policy and/or program you were evaluating?

  • What is the significance of the findings?

  • How do the findings enrich an understanding of what works and what doesn't?

  • What are the implications for the policy/program and your entity?

How will you turn the evaluation results into meaningful information for decision-makers?

  • What is the “story” you want to tell? Who is the audience?

  • What form should your report take?

  • What is the optimal time to tell people about your evaluation findings?

  • What matters, to the people that matter, at the time that it matters?

  • What are you suggesting they do with the results?

How will you share the evaluation findings?

  • Who should results be shared with? 

  • What was the objective of the evaluation (this will help define who it should be shared with)?

  • Who is the audience and what is most important to them? (Note: the sensitivity, privacy and confidentiality of the evidence you have collected will help determine with whom, and how, you share your findings.)

Evaluation Report

An evaluation report will typically include:

  • the issue or need addressed by the program or activity
  • the purpose and objectives of the program or activity
  • a clear description of how the program is organised and its activities
  • the methodology - how the evaluation was conducted and an explanation of why it was done this way. This should include what surveys or interview questions were used and when and how they were delivered (and a copy should be included in the appendix)
  • sampling - how many people participated in the evaluation, who they were and how they were recruited
  • data analysis - a description of how data were analysed
  • ethics - a description of how consent was obtained and how ethical obligations to participants were met
  • findings - what was learnt from the evaluation (and what it means for the program or activity), and how the results compare with your objectives and outcomes
  • findings/recommendations - detailed and actionable suggestions for possible changes to the program or service that have come from the analysis
  • any limitations to the evaluation and how future evaluations will overcome these limitations.
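
As a minimal sketch, the contents listed above can double as a completeness check for draft reports (the section names below paraphrase the list and are not a mandated template):

```python
# Illustrative checklist derived from the report contents listed above;
# the section names are paraphrased, not a mandated template.
EXPECTED_SECTIONS = [
    "issue or need addressed",
    "purpose and objectives",
    "program description",
    "methodology",
    "sampling",
    "data analysis",
    "ethics",
    "findings",
    "recommendations",
    "limitations",
]

def missing_sections(draft_sections: list[str]) -> list[str]:
    """Return the expected sections not yet present in a draft report."""
    present = {s.strip().lower() for s in draft_sections}
    return [s for s in EXPECTED_SECTIONS if s not in present]

draft = ["issue or need addressed", "methodology", "findings"]
print(missing_sections(draft))  # the seven sections still to be drafted
```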
     

Different groups of stakeholders may want to see the results of the evaluation in different forms. As you conclude your evaluation, it is worthwhile checking in again with all the stakeholder groups to clarify what their reporting requirements are (i.e. what needs to be reported and when).

Reporting timelines often present a major constraint on the evaluation. In particular, the need to report findings in time to inform funding and resourcing decisions for the next phase of a program often means that reports are needed before impacts can be fully observed. In these situations, it will be necessary to report on interim outcomes, and to present any research evidence that shows how these are important predictors of, or prerequisites for, the final impacts.



Page 3

Evaluations generate rich, evidence-based insights that contribute to good public administration. Irrespective of whether evaluation findings are positive, negative or neutral, evaluation reports are not intended to just “sit on the shelf” or be seen as a “tick a box” activity – they are designed to be used constructively to support continuous improvement.

An implementation plan, endorsed by senior executives, will help to implement evaluation findings and prioritise actions to be taken to improve programs and services, enhance design and delivery mechanisms, and ultimately improve outcomes.

A communication plan will also help to ensure findings are shared in the most appropriate way with stakeholders, participants, and a broader audience. This allows end users of your program or activity to benefit from continuous improvement, and ensures the evidence base can be used and expanded to help similar programs improve their outcomes.

Evaluation findings should be transparent by default unless there are appropriate reasons for not releasing information publicly. This supports accountability, continuous improvement, and helps to embed a culture of evaluation across the Commonwealth. 
 

How should my entity use the results of the evaluation?

  • How will the findings/recommendations from the evaluation be implemented (this should be planned for and determined at the beginning of the evaluation)?

  • Who should I engage to ensure successful implementation?

How does my evaluation support learning across the Commonwealth?

  • How can I disseminate the findings from the evaluation?

  • What other entities could be interested in how I went about the evaluation and what I found?

  • Are there professional groups and networks I can tap into to increase learnings across the Commonwealth?

Does my evaluation help continuous improvement of other programs and activities in my entity?

  • How does the evaluation support my entity to achieve its purpose? 

  • How do I ensure that the evaluation is used as evidence in our Annual Performance Statements?

  • How do I get executive support for the evaluation and the implementation of the results?

Review evaluation objectives and implement improvements

Once you have delivered a well planned and executed evaluation, it is important to ensure the results are used and the learnings are disseminated.

Prioritise the improvements: Often the evaluation will have a number of findings/recommendations. In order to maximise impact, it is important to identify which recommendations should be implemented immediately and which can wait.

Develop an implementation plan that is signed off by the executive in the entity: Developing an implementation plan is a good way to carefully plan the changes needed in the program or activity within any resource constraints.

This plan would involve:

  • prioritising activities most likely to lead to changes for the majority of stakeholders and/or end users
  • identifying activities that are not leading to the desired outcomes so you can stop or change them
  • identifying whether anyone who should have been a beneficiary of the program or activity is missing out. Are new services, programs or activities needed?
  • reviewing the delivery method of the program or activity.

Have a communication plan: A communication plan outlines the strategies that will be used to communicate the results of the evaluation. The plan needs to describe which results will be communicated, how they will be communicated, and to whom they will be communicated. It is important to consider the different aspects and techniques for discussing the evaluation results with stakeholders and a wider audience to support continuous learning across the Commonwealth.[1]
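
As a minimal sketch, a communication plan can start as a simple matrix of audience, product and timing (the rows below are invented examples):

```python
# Invented examples of communication plan entries: which results, in what
# form, to whom, and when they will be communicated.
communication_plan = [
    {"audience": "minister and executive",
     "product": "full evaluation report and short brief",
     "timing": "before funding decisions"},
    {"audience": "program staff and delivery partners",
     "product": "presentation of findings and lessons learned",
     "timing": "at implementation planning"},
    {"audience": "participants and the public",
     "product": "plain English summary",
     "timing": "on publication of the report"},
]

for entry in communication_plan:
    print(f"{entry['audience']}: {entry['product']} ({entry['timing']})")
```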

Monitor the implementation of evaluation recommendations: Program managers are responsible for implementing the agreed recommendations of their program or activity. 

Share the learnings: The benefits of properly planned and executed evaluations are considerable, for the government, the Commonwealth and, importantly, for Australians – they generate rich, evidence-based insights that help guide the allocation of public resources, improve the design and implementation of programs, and deliver better services. Sharing the tips and tricks you have learned through your evaluation across the Commonwealth will help to build evaluation capability. It also allows the evidence base to be used and expanded to help similar programs and activities to improve their delivery and outcomes.

If you have an evaluation case study that you would like to share, please contact us at .



Page 4

Whenever you design, run or evaluate a Commonwealth program or activity, it always occurs in a particular context. It is important to understand your entity's operating context, purposes, key activities and current performance framework – outlined in your Corporate Plan – in order to decide:

  • if and when an evaluation should be carried out
  • if the use of evaluation methods and tools could improve the quality and robustness of your existing performance information.

It is important to consider how the objectives of a specific program or activity contribute to your entity achieving its purpose/s, and whether you have all the performance information you need to assess how well those objectives are being met.
 

What factors in my operating context need to be considered?

  • What are the purposes of your entity? What are the objectives of your program or activity?

  • What factors (environmental, risk, capability, and partnerships) must be managed to achieve your purpose/s?

  • How does your program or activity contribute to achieving your entity’s purpose/s?

  • How do you currently forecast, measure and assess the performance of your program or activity against its objectives?

  • Has an evaluation been included in a new policy proposal, or in your entity’s corporate plan?

  • Is there a government direction or legislative requirement to do an evaluation?

  • What is the level of support and capability for doing an evaluation in your entity?

  • Does your entity have an evaluation strategy or an evaluation unit to guide you?

How can I identify and prioritise what should be evaluated?

  • What matters most to the people that matter?

  • Where is the activity or program in the policy development cycle?

  • Can an evaluation support, complement or enhance your regular performance monitoring and reporting activities?

  • Can evaluation methods and tools be used to improve the quality of reporting in corporate plans and annual performance statements?

  • Note all Commonwealth entities are required to forecast, measure, assess and report on their key activities throughout the reporting cycle – where would the proposed evaluation fit in?

What is the main purpose of my evaluation, who is the audience and how will the evaluation be managed?

Understanding your operating context

Understanding your entity's current operating context, and where a specific program or activity fits in, is critical for deciding what should be evaluated and when. 

Working with stakeholders can help to:

  • identify when evaluation methods and tools can be used to improve the quality of performance reporting published in corporate plans and annual performance statements
  • identify and prioritise evaluation topics
  • design effective, ethical and culturally appropriate evaluations that support an entity to achieve its purposes. 

Commonwealth entities are required to publish a discussion about their operating context in their corporate plan. This discussion should provide a clear understanding of how, individually and collectively, things like capability levels, risk oversight and management, and cooperation with others contribute to an entity achieving its purposes over the four-year period covered by each plan.

Reviewing your entity's corporate plan can help you develop an understanding of where a specific program or activity fits within your operating context, and how an evaluation might help to strengthen performance reporting and accountability within your entity. 

The performance information collected for your specific program or activity may be more detailed than the high-level performance information reported in your entity's Corporate Plan or Portfolio Budget Statement. Ideally, you should still be able to identify which key activity it relates to and the contribution it makes towards meeting your entity's purpose/s.
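
As a minimal sketch, maintaining an explicit mapping from detailed program measures to corporate plan key activities keeps that line of sight testable (all names below are invented):

```python
# Invented example: detailed program measures mapped to the corporate plan
# key activity they support, preserving line of sight to the entity's purpose.
MEASURE_TO_KEY_ACTIVITY = {
    "training completion rate": "key activity 1: deliver employment services",
    "participant satisfaction": "key activity 1: deliver employment services",
    "time to job placement": "key activity 2: support workforce participation",
}

def unmapped_measures(measures: list[str]) -> list[str]:
    """Flag program measures that cannot be traced to a key activity."""
    return [m for m in measures if m not in MEASURE_TO_KEY_ACTIVITY]

print(unmapped_measures(["training completion rate", "cost per participant"]))
# ['cost per participant'] -> needs a key activity before it is reported
```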

Findings from past evaluations can also help to inform a more mature discussion about program performance and your entity's operating context in future corporate plans. This ensures there is a continuous feedback loop where evaluation both informs, and is informed by, your entity's routine performance monitoring and reporting.

 
