One of the most astonishing features of the ANAO report, ‘The Implementation and Performance of the Cashless Debit Card Trial’, apart from its conclusion that the trial failed to prove the policy had reduced social harms, is the fact that the department put no KPIs (key performance indicators) or measurement targets in place to measure the trial’s key objectives, including social harms!
Reviewing the section of the report titled ‘Assessment of evaluation KPIs against the criteria of relevant, reliable and complete’, you come to see just how much of a ‘Clayton’s trial’ this entire process has been. Nothing of relevant substance was ever actually recorded or measured.
The ANAO report focused on the first 12 months of the Cashless Debit Card trial, and once you step past the department-appeasing fluff of the first section, its KPI evaluation found:
“3.53 Some data was not collected; and the data was inconsistent and not fit for purpose (e.g. some covered wider geographical areas). (see paragraph 3.31)”
So, vital data was not collected (or, in the case of DV statistics, simply ignored); the data that was collected differed between the sites; and, in a pattern the LNP are fond of, regions not included in the CDCT rollout area were included in the Orima evaluation:
“3.52 Anecdotal information reported to the Minister suggested an increase in school attendance, but ANAO analysis of state data available to Social Services showed that attendance was relatively stable for non-indigenous students but it had declined by 1.7 per cent for indigenous students, after the implementation of the trial compared to the same period (between May to August) in 2015.”
So, contrary to the spin, school attendance actually went down during the trial and evaluation period and, as far as we are aware, this was most likely because public buses do not accept the Indue card.
“3.51 The Minister was advised that there was a decrease in the total number of St John Ambulance call-outs in September 2016 compared to September 2015. Accounting for seasonality in the data, ANAO found, in analysing the data over a longer period, there was a 17 per cent increase in call-outs from April to October 2016 when compared to the previous year.”
This means that Government and Minderoo/Generation One representatives have misrepresented the St John Ambulance call-out data to Parliament, in Senate hearings, and to the media and public several times.
“No KPI was put in place to measure the success of support services that were part of the trial.”
This is astounding, given the pretext for the trial was to prove the success or failure of the policy in reducing social harms. Outcomes on the ground recorded by these services, as opposed to curbside ‘perceptions’, are the only way to objectively measure the trial’s objectives.
Their quantifiable outcomes establish the basis for verifiable results.
“No baseline set and no specific targets set to measure frequency/volume of gambling and associated problems.”
This is one of the four primary objectives for the card, yet it was never measured.
A reminder here that Keith Pitt did not provide any evidentiary data to the Senate, or anywhere else, to prove his claim that Centrelink recipients were responsible for unbelievably high rates of gambling in his electorate – gambling rates that have increased since the CDC trial began there.
“No baseline set and no specific targets were set to measure frequency of use/volume consumed of drugs & alcohol;
- No specific targets were set to measure the alcohol consumed by participants per week;
- % of participants who say they have used non-prescription drugs in the last week;
- number of times per week spend more than $50 a day on drugs not prescribed by a doctor;
- number of times per week have six or more drinks of alcohol at one time (binge drinking);
- and % of participants, family members and general community members reporting a decrease in drinking of alcohol in the community since commencement of Trial.”
There was no collection of official baseline statistics on any of the all-important ‘three evils’ (their words, not ours): gambling, drug addiction and alcohol. So, no ‘what it was like‘.
No actual quantitative data was ever collected prior to trial commencement, and no specific goals/targets were ever put into the trial’s monitoring, management or legislation to measure whether numbers/rates had dropped as a result of the trial at all. So, no ‘what happened’ and no verifiable picture of ‘what it’s like now‘ either.
The three core principles for conducting any legitimate ‘trial’ and determining its result have been completely ignored.
To our mind, the only reason the trial objectives would take no priority is that they were never objectives at all.
Given that no formal KPIs or targets were ever put in place to measure this critically relevant data for comparative analysis, how the hell were the trials allowed to continue?
This is a very important issue right now, as this exact same failure has been repeated in the Goldfields trial zone, where we see – yet again – that none of these measurements were made, or appeared in the so-called ‘baseline study’ released by the government in February 2019.
Adelaide University’s ‘baseline study’ (see the link above) not only failed to collect this data, which is directly related to the primary objectives of the trial; it began its ‘baseline’ study after the trials had begun, and it predominantly recorded perceptions rather than facts (anecdotal evidence), interviewing only a minuscule number of people actually subject to the policy compared to the number of ‘stakeholder groups’ it records.
We need not remind anyone by now that ‘stakeholder groups’ are better known as paid and preferentially appointed (by the Department) business and community groups. This includes those service groups whose funding is existentially reliant on their participation – and whose staff are gagged from reporting or giving opinions on government matters.
“No plan in place to continue to evaluate the CDC to test its roll-out in other settings.”
This inclusion is very important, as it demonstrates that every report or evaluation decision to come since the Orima report has been devised ‘on the run‘ and, as with Adelaide Uni’s evaluation, has been compelled by rising political and public interest and awareness. Good work, everyone. Keep the pressure for accountability on!
This is also why it is so important to keep the current legislative requirement for independent review and trial evaluations in place and not let that clause be lost in the Transition Bill 2019. Government must be compelled to provide more data, not less, and to oversee and manage “trials” more effectively and legitimately.
Please note there is still no official reporting or evaluation requirement, nor even a basic record, of negative impacts on compulsory trial participants, and that Administrative Appeals Tribunal redress rules apply differently to people on income management.
“3.53 Social Services indicated to the ANAO that the monitoring statistics provided inconclusive results due to the short-time frame of the trial. The ANAO found that other factors also impacted on the performance results including: some data was not collected; and the data was inconsistent and not fit for purpose (e.g. some covered wider geographical areas). (see paragraph 3.31).”
So again, even where some targets were set down on paper, the Orima report data either included data from areas outside the trial regions, was not collected at all, or was inconsistent when it was collected.
“Key areas of the CDCT relating to the administrative and operational aspects of the trial such as the Social Services call centre, well being exemptions, community visits, levels of cash available in the community and staff training were not measured with KPIs. This reduced the ability of Social Services to drive improvement in the operational aspects of the CDC.”
It was truly astounding to learn that the #1 objective listed in the Social Security (Administration) Act 1999, ‘reducing levels of cash in the community’, was not even measured!
“There is a lack of ongoing KPIs post ORIMA’s evaluation to measure on the CDC’s wider goal of ‘encouraging socially responsible behaviour’ per the trial legislation and the other goals of the trial over the long term.”
We can see no new KPIs or measurement targets in place in publicly available information, even now. If they do exist, they are being concealed by the department and may or may not become available at some time. The concern here is that KPIs could simply be added at any time to any report, to reflect whatever data is produced.
“There were no efficiency focused KPIs to drive the measurement of the CDC’s efficiency, particularly against existing welfare quarantining measures. This means that a key intent of the CDC as noted in the Forrest Review to drive down the cost and resources involved in welfare quarantining was not measured by a KPI.”
The absence of this data means that the minister and department can claim what they like in the media and on their websites. Without this particular data set, they can’t prove a word.
It is clear from the ANAO report that trial objectives are being ignored, and that the department is not interested in producing any data that would permit even rudimentary comparative analysis between the CDC trials and other compulsory income management measures.
This being the case, there may never be a way to prove the CDC a legitimate success, comparatively or otherwise. Proof of policy failure already abounds, though it remains unrecorded, and the voices of those impacted are denied their agency and representation.
Aside from the government being in a hurry to abdicate its responsibility to the Australian people, and being obsessively consumed with offloading Centrelink services to the private sector as fast as possible, not one aspect of this “trial” has integrity once the legislated trial objectives are prioritised and the DATA actually analysed.
* * * * * * *
Please note that the italicised text block quotes are from the ANAO report. Again, here is the link to the report.
If you are interested to read a brief expert analysis of the Orima report including the revelation that for 77% of people there was no positive outcome from the trial period, please see these articles by Dr Janet Hunt, ANU:
- The Cashless Debit Card Trial Evaluation: A Short Review
- The Cashless Debit Card Evaluation: does it really prove success?
Dr Hunt has been very vocal with the news that the Orima report was not the ‘proof of success’ the minister claimed it to be, speaking in the media and in the Senate on several occasions; yet somehow the Murdoch press, and so the majority of the Australian public, simply failed to notice.
What the ANAO report does record clearly is that the department put in place all the right words, and then did absolutely nothing to action them.
Sound familiar?
We will go into ‘procurement’ issues at a later point.
By TheSayNoSeven
Like what we do at The AIMN?
You’ll like it even more knowing that your donation will help us to keep up the good fight.
Chuck in a few bucks and see just how far it goes!