Jane challenged the need to produce evidence of benefits for ICT innovations. She argued that the call for “hard” evidence by policy-makers sets up unrealistic expectations. Positioning quantitative data (especially the ‘gold standard’ randomised controlled trial, and cost-benefit analysis) as the necessary prerequisite for demonstrating effectiveness means many ICT innovations are doomed to fail. The recent move towards combining “softer” aspects with numerical evidence is not the answer: in this positioning, “soft” outcomes exert no significant influence other than to serve as a buttress, explaining why the statistics do not add up.
The case presented outlined Jane’s own pursuit of “evidence” through reportedly the world’s largest RCT of telehealthcare. The resulting findings — robust quantitative data produced under highly structured rules — largely failed to convince the policy-makers who had requested them. For local stakeholders, the researchers’ ‘robustness’ rendered much of their experiential knowledge redundant, appearing exempt from subjective judgments, local singularities and context.
Despite the research team’s ‘best practice’, they found a lack of perceived trustworthiness in the data and a lack of engagement from the telehealthcare community with the “evidence” presented. This response suggests that what counts as ‘valuable and actionable’ knowledge and ‘evidence of effectiveness’, and who is ‘knowledgeable’ within this domain, is contested. Claims of evidence were newly situated and re-evaluated in response to changing policy agendas and organisational pressures.
Instead of demanding, and then failing to show, “effectiveness”, perhaps we need to be asking a different set of questions and developing new ways of answering them.