CDRH AUDIT OF CLINICAL DATA IN DEVICE APPLICATIONS
This article was originally published in The Gray Sheet
CDRH AUDIT OF CLINICAL DATA IN DEVICE APPLICATIONS is being undertaken to determine if most device submissions contain the type of data weaknesses identified by the Committee for Clinical Review in a report on 29 submissions that was released March 4.

In order to determine whether there are widespread problems with device application data, the Center for Devices and Radiological Health plans to review a "broader" number of applications than were evaluated by the committee, which was chaired by Robert Temple, director of the office of drug evaluation in FDA's Center for Drug Evaluation and Research, and composed primarily of MDs from the drug center.

In the report, the Temple group states that although the findings are based on a "small sample" of submissions, the prevalence of "major" deficiencies in trial design and conduct observed during its review may be indicative of a trend in device applications. "Certain patterns of deficiencies in the design, conduct, and analysis of clinical studies were present in enough of the applications to suggest that these deficiencies represent a common problem, one regularly encountered by CDRH" when reviewing submissions, the committee asserts.

The device submissions audited by the group included premarket approval applications, 510(k)s and investigational device exemption applications.

FDA Commissioner David Kessler formed the Temple group in April 1992. The group was asked to make "recommendations on improving the clinical review process" for devices and to provide "clinical support" to the device center while it recruits additional clinical reviewers.

The applications reviewed by the committee included submissions for two implantable defibrillators, a laser angioplasty device, a urinary incontinence device, a home-use ovulation test, two ophthalmic lasers, an ophthalmic product designed to decrease intraocular pressure, an injectable substance used during cataract surgery and a soft contact lens cleaning solution system.
The group's report identifies three major areas of weakness in the submissions audited: "inadequacies of [trial] design/absence of clear hypothesis" -- "insufficient consideration of the value of a randomized control group" -- and "deficiencies in the conduct, reporting, and analysis of trials."

In "most" of the applications reviewed, "it appeared that clinical trials, even large ones, were carried out with little planning or attention to the purpose of the study," and "a number of basic principles of experimental design" were "ignored," the committee maintains. "Many of the deficiencies encountered were major omissions, not niceties or refinements."

The group adds that the problems "were sufficiently serious to impede the agency's ability to make the necessary judgments about the safety and effectiveness, or substantial equivalence, of the devices covered by the applications."

The group says that inattention to "basic clinical study design" was the "fundamental problem" with most device applications. "Many of the studies were under-designed, even undesigned, representing little more than a report on a collection of experiences." The committee comments that this underlying flaw led to seven "recurrent problems" with the trials.

The Temple report describes several examples of deficient applications to illustrate study design weaknesses. For example, in a study evaluating the material injected into the eye during cataract surgery to maintain the anterior chamber and protect the corneal endothelium, "the critical endpoints and success criteria (endothelial cell count and post-operative intraocular pressure) were poorly specified," the committee reports. Because of the lack of an "explicit hypothesis," the sponsor did not make an assessment of "the number of patients needed to determine the effectiveness of the device with respect to critical endpoints," the group continues.
"While the total number of patients given the device was probably adequate, only an arbitrarily chosen, and too small, subset of patients was selected for evaluation of the critical endpoints, so that effectiveness could not be properly evaluated," the committee concludes.

FDA's recent increased emphasis on the use of randomized, controlled clinical trials to demonstrate device safety and efficacy foreshadowed the Temple group's second major finding -- that firms fail to consider sufficiently the use of a randomized control.

According to the report, a randomized control "was not used even where its usefulness would have been obvious, e.g., where standard therapy was quite effective and it would have been very important to compare new and old devices precisely" and "where the natural history of the untreated disease was not clear and no historical series was identified."

Citing an example, the FDAers point out that "new laser angioplasty devices were not compared to established devices (balloon angioplasty devices) in randomized, controlled trials even though the intent of the laser devices is to decrease problems associated with the older balloon angioplasty devices." The committee adds that "historically controlled studies are very difficult to use in this situation."

One FDA advisory panel chairperson who was asked to review the Temple findings expressed a different opinion on randomized controls, according to the report. Gabriel Gregoratos, MD, University of California at Davis, said that "randomization and blinding [in clinical trials] are not always practical for many, if not most, cardiovascular devices." The committee responds that it "did not intend to suggest that randomized, blinded trials are the only acceptable study design or could always be implemented."
Rather, the group would like to see consideration of the range of possible study designs for evaluating new devices and selection of the type of study that is "both adequate to the task and feasible."

A laser studied for glaucoma treatment was one device cited by the Temple group as an example of the third major type of study weakness -- inadequate or incomplete reporting and analysis of test data. "Data in different parts of the application were not consistent, and there seemed little attention to accuracy," the committee asserts. Submissions for two identified types of devices demonstrated another problem falling under this category of weakness -- the failure to report all adverse reactions.

The committee maintains that clinical data weaknesses are a factor contributing to the growing delays in review of applications. The problems "can require CDRH reviewers to spend tremendous resources bringing applications to an approvable state, necessitating repeated requests for additional data and prolonging the review period." In most cases, better studies could be conducted "without a significant increase in resources or time to accumulate data," the committee concludes. Improved study design would benefit the sponsor as well as FDA, the group adds, by speeding review of the application and providing more information on the utility and performance of the device being studied.

In a March 5 statement on the report, the Health Industry Manufacturers Association praises the Temple report as "a useful step in the continuing efforts by industry and FDA to fully address product approval problems." However, the trade association takes issue with several of the conclusions made by the Temple group. For example, the group "misinterprets the standards established by Congress for review of medical devices and leaves the wrong impression about the clinical standards that devices must meet," HIMA says.
"The standard for approving breakthrough technologies is whether there is reasonable assurance of the product's safety and effectiveness for the conditions stated in the product's labeling." The assurance "does not necessarily require a controlled clinical trial, but must be based on valid scientific evidence," which "does not require comparative safety and efficacy information as suggested by the FDA report." This argument was outlined by HIMA in a position paper submitted to FDA that maintained that FDA's reliance on controlled trials for devices constitutes an improper application of a "drug model" ("The Gray Sheet" Nov. 23, I&W-3).

HIMA's March 5 comments also state that the report's focus on establishment of substantial equivalence through clinical trials may be misleading. "Congress mandated that FDA could determine substantial equivalence by various methods including a comparison of specifications, not necessarily by comparing clinical information as the report seems to assert."

HIMA also expresses concern about the report's overall emphasis on clinical trials. HIMA argues that safety and effectiveness of devices often depend "more on an evaluation of preclinical testing," such as material safety evaluations, "rather than on the results of clinical tests."