The latest alarm bell over the reporting of animal efficacy data was rung by a team of researchers at Hannover Medical School in Germany and McGill University in Canada. For a systematic analysis, the researchers gained access to 109 investigator brochures – documents used by regulators and review boards to assess the potential efficacy of experimental drugs before they move to human trials – that had been reviewed by three institutional review boards in Germany between 2010 and 2016.
The study’s findings, which were published in PLOS Biology on 5 April, detailed some concerning gaps in the reporting of preclinical animal studies. Of the more than 700 animal studies presented in the brochures, fewer than 5% referenced the use of randomisation, blinded outcome assessment or sample size calculations – all measures commonly used to validate efficacy results and minimise the risk of bias. This doesn’t necessarily mean these measures weren’t employed, but if they were, their use wasn’t reported in the investigator brochures.
“Whether the studies were that bad, whether they really did not include all these methodological steps, we don’t know,” says Dr Daniel Strech, professor of bioethics at Hannover Medical School and senior author of the study. “But if you are one of those who have to make a risk-benefit assessment based on the animal studies, you currently more or less cannot do it.”
The lack of these measures is compounded by another of the study’s findings: only 11% of the animal studies referenced a publication in a peer-reviewed journal, meaning review boards and regulators assessing the brochures would also have lacked the independent validation that peer review provides on data quality. What’s more, all but 6% of the animal studies reported positive outcomes. While this isn’t surprising at first glance – after all, why would a clinical trial be proposed if the preclinical data were negative? – Strech emphasises the statistical oddity of so little negative data.
"You need negative animal studies to better understand where the window of opportunity for your drug is."
“These animal studies had very low sample sizes,” he says. “On average we found that in one animal study there were about eight animals that they tested for the intervention, and eight animals in the control group. If you have these low sample sizes, just by chance, from time to time, you will have a negative study.
“The fact that they do not show up speaks a little bit in the direction that some selection of animal studies has taken place when they decided what studies to present in the investigator brochures. You need negative animal studies to better understand where the window of opportunity for your drug is. You need, for example, animal studies that demonstrate when the dosing scheme or the timing for the intervention is best, and when it becomes negative. If there’s nothing negative in the preclinical evidence, then you just lack any of these demarcations.”
Given the cross-border nature of multi-centre trials and the fact that many preclinical studies are bankrolled by pharma giants with global reach, Strech believes the issue is unlikely to be limited to Germany and is probably playing out across regulatory systems. “The French, the German, the British regulatory agencies meet very often, so it would be strange if the data looked like this in Germany and the regulatory bodies accept this despite the fact that in other European countries they see a completely different picture.”