It’s a common practice in our industry (at least from my vantage point) to hear talk of the screen failure “rate” of any given study.
But just what is this mysterious “rate” of which we speak? Calling it a rate implies that we should observe it as a function of the number of screen failures over time – the way we track the screening and randomization rates of a study.
These ubiquitous rates make sense to us because they tell us, intuitively, whether sites are screening and randomizing patients at a pace consistent with our expectations for the study completion date.
A screen fail “rate,” however, makes less sense. We rarely need to report, or make sense of, the fact that we are screen failing two patients per site per month. Is that good or bad? It matters only in context: is the randomization rate still sufficient to ensure our last-patient-in date is met?
The more commonly used, and more commonly useful, metric is of course the screen failure percentage. Not only does this tell us how fruitful sites’ screening efforts are, it allows us to actively predict, from the number of patients still in the screening phase, how many we may reasonably expect to go on to randomize. This predictive ability ensures that we don’t over-enroll studies by wide margins, because we can set “end-of-screening” dates based on these predictions.
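To make the distinction concrete, here’s a minimal sketch of the two calculations: the screen failure percentage, and using it to predict how many of the patients still in screening will go on to randomize. All the numbers here are made up for illustration.

```python
def screen_failure_percentage(screen_failures: int, completed_screenings: int) -> float:
    """Percentage of completed screenings that ended in screen failure."""
    if completed_screenings == 0:
        raise ValueError("No completed screenings yet")
    return 100.0 * screen_failures / completed_screenings


def predicted_randomizations(still_in_screening: int, sf_percentage: float) -> float:
    """Expected randomizations from patients still in screening,
    assuming the observed screen failure percentage holds going forward."""
    return still_in_screening * (1.0 - sf_percentage / 100.0)


# Hypothetical study-to-date numbers:
screen_failures = 30
randomized = 70
completed = screen_failures + randomized  # 100 completed screenings

sf_pct = screen_failure_percentage(screen_failures, completed)
# 30.0% screen failure percentage

expected = predicted_randomizations(25, sf_pct)
# With 25 patients still in screening, we'd expect ~17.5 more randomizations,
# which is what lets us decide when to declare an end-of-screening date.
```

The percentage is a ratio of outcomes, not a function of time – which is exactly why it supports this kind of forecasting while a per-month “rate” does not.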
Maybe I’m making mountains out of molehills, but I think it’s time to retool our vernacular, favoring the term “screen failure percentage” over “screen failure rate.” At least it will make my job in data reporting QC a little easier!