
Analyze Data Validity to Identify Impact of COVID-19 Disruptions, Expert Says

October 26, 2020

Sites and sponsors should conduct certain analyses of their trial data during COVID-19 to determine empirically whether their data capture methods remain valid, given the pandemic's impact on how patients are reporting outcomes.
Proper surveillance can alert sponsors and sites to when interventions are needed to remediate inconsistencies in symptom reporting, while statistical analysis can be used to judge whether certain types of data can be integrated with the rest of a dataset, says Nathaniel Katz, chief science officer at WCG Analgesic Solutions.
“There are already central statistics surveillance systems available that can accomplish these goals. Sponsors can examine their data to ensure that data that has gotten garbled due to measurement error does not undermine conclusions from accurate data or that shifts in how data are collected do not alter conclusions of efficacy,” he said.
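As an illustration of the kind of check Katz describes, the sketch below compares an efficacy estimate computed on all data against one computed only on data collected in the original mode, to see whether a shift in how data were gathered changes the conclusion. The file name, column names ("arm", "pain_score", "collection_mode") and the two-sample t-test are illustrative assumptions, not a prescribed method from any particular surveillance system.

```python
# Hypothetical sensitivity check: does pooling scores collected before and after a
# modality change (e.g., in-clinic vs. remote) alter the apparent efficacy signal?
import pandas as pd
from scipy import stats

def treatment_effect(df: pd.DataFrame):
    """Welch t-test of active vs. placebo pain scores; returns (statistic, pvalue)."""
    active = df.loc[df["arm"] == "active", "pain_score"]
    placebo = df.loc[df["arm"] == "placebo", "pain_score"]
    return stats.ttest_ind(active, placebo, equal_var=False)

trial = pd.read_csv("trial_data.csv")  # assumed columns: arm, pain_score, collection_mode

full = treatment_effect(trial)
in_clinic_only = treatment_effect(trial[trial["collection_mode"] == "in_clinic"])

print(f"All data:       t={full.statistic:.2f}, p={full.pvalue:.3f}")
print(f"In-clinic only: t={in_clinic_only.statistic:.2f}, p={in_clinic_only.pvalue:.3f}")
# A large divergence between the two estimates suggests the modality shift may be
# influencing conclusions and warrants closer psychometric review.
```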
Lifestyle disruptions brought on or exacerbated by the pandemic — including mood changes, employment issues, sleep disturbance, changes in physical activity and isolation — can skew patient-reported outcomes that rely on scales with subjective endpoints, such as a pain scale, says Katz. He said psychological disruptions and expectations of benefit were also factors that could cause variability in clinical trials, obscuring treatment effects.
“You can imagine a lot of ways that the COVID-19 epidemic could cause perturbations in the machinery of a clinical trial,” Katz said. “Changing the modality of administration of a scale, or changing the setting in which that is administered, can change the psychometric validity of that scale, but not necessarily.”
Katz went on to say that “certain aspects of the way patients respond on that scale may shift — which may alter your ability to combine those kinds of data. It may impact the interpretation of your clinical trial and may impact regulators’ view of the trustworthiness of your data.”
But there are analytical tools available to address the issue, Katz says. “If you talk to a good psychometrician or someone from your outcomes measurement group, they can guide you in how to perform those analyses,” he adds.
That in turn would give sponsors and sites evidence to answer future questions from regulators or others about whether data were gathered in a valid way, and to interpret any observed shift in the results. “I would strongly encourage everyone to do [analyses] if you are involved in a clinical trial where there is this back and forth shift of how you’re collecting your data,” he said.
Clinical trial protocols should also include a section on reliability, Katz said. The protocols should identify the critical processes that could impact the study results; the procedures that would be used during the study to monitor the reliability of critical processes; and what corrective actions would be taken to remediate any performance issues. “This would represent a shift in how this issue is given attention in the way we do clinical trials. We do have tools at our disposal to improve reliability, and those tools are more important than ever during times of societal perturbation like COVID-19, where there could be even more of an impact on critical processes. That impact is likely to vary a lot from one research site to another.”
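A minimal sketch of the kind of ongoing per-site surveillance such a protocol section might call for appears below, assuming a long-format table of patient-reported scores with hypothetical columns "site" and "pain_score". Flagging sites whose reported scores diverge from the pooled data via a Kolmogorov-Smirnov test is an illustrative choice, not Katz's specific method.

```python
# Illustrative per-site monitoring: flag sites whose reported pain scores
# differ markedly from those at all other sites combined.
import pandas as pd
from scipy.stats import ks_2samp

def flag_divergent_sites(df: pd.DataFrame, alpha: float = 0.01) -> list[str]:
    """Return sites whose score distribution diverges from the rest of the study."""
    flagged = []
    for site, scores in df.groupby("site")["pain_score"]:
        others = df.loc[df["site"] != site, "pain_score"]
        stat, p = ks_2samp(scores, others)
        if p < alpha:
            flagged.append(site)
    return flagged

reports = pd.read_csv("weekly_pain_reports.csv")  # assumed columns: site, pain_score
print("Sites needing review:", flag_divergent_sites(reports))
```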
Katz acknowledges that it is unclear whether clinical trials specifically focused on a COVID-19 vaccine and/or therapeutics for related diseases are more susceptible to unreliable data reporting. Any clinical trials that rely on subjective outcomes “will be trickier during this time of social upheaval, requiring greater attention to patient and clinician training, central statistical monitoring and interventions to remediate data quality issues...
“We’ve monitored plenty of things that have not changed during the COVID epidemic, so I’m not trying to give you the impression that everything has changed,” Katz said. “But what I am trying to tell you is that things can change, and unless you’re monitoring your critical processes, you’re not going to know until after your study is over and your data is analyzed and you see your [probability value].”