Where’s the Proof?

Supply chain executives and their clinical users/customers are bombarded with “clinical evidence” to support vendors’ claims about the efficacy of medical technologies. How reliable is it?

A team of HealthTrust experts agreed to field questions from The Journal of Healthcare Contracting about the “dos and don’ts” of evaluating clinical evidence, including whether to rely on studies conducted by manufacturers, how to counter supplier objections and hold suppliers accountable, and which reliable resources to turn to for assistance.

Their consensus: If clinical evidence is necessary to inform supply chain decisions, its evaluation should be a multidisciplinary exercise by people who know how to read studies, interpret statistics, understand trial design, and ask the right questions of researchers and practicing physicians. The review team must also unearth contextual information, such as product characteristics that are indispensable to physicians, the reimbursement intentions of managed care payers and the scope of overall clinical program requirements.

The roundtable participants were:

  • Michael Schlosser, MD, FAANS, MBA, chief medical officer of HealthTrust and a board-certified neurosurgeon. He previously served in leadership roles at TriStar Centennial Medical Center in Nashville, including chief of staff and chief of surgery. Dr. Schlosser also worked as medical director for Parallon Supply Chain Services, collaborating with the HCA Clinical Excellence team to enhance physician engagement with supply and purchasing decisions.
  • Lynn Tarkington, RN, BS, assistant vice president of physician and clinical services at HealthTrust. She previously served as assistant vice president of the clinical team for SourceTrust, HealthTrust’s medical device sourcing service, and prior to that in HCA’s corporate Quality Department leading the Clinical Cardiovascular Management Network.
  • Mark Dumond, assistant vice president of technology services for HealthTrust and leader of the Information Technology Advisory Board. His previous work focused on contracting and clinical operations for HealthTrust. Dumond has extensive hospital experience in both pediatric and adult environments.
  • Robin Cunningham, RN, MSN, a director for physician services at HealthTrust. A cardiovascular care specialist, she previously served in leadership positions with HCA for 25 years.
  • Jarad Garshnick, BBA, MBA, a director for physician services at HealthTrust. His background includes medical device sales for major spinal implant manufacturers, in addition to new business development and supply chain management for the 3M Company.

 

Journal of Healthcare Contracting: On the provider side, who should review and critique the reliability of clinical evidence?

Tarkington: The classes of clinical evidence, from randomized clinical trials to case studies, are fairly well defined in the world of medicine. However, the class of evidence alone doesn’t tell you its importance or relevance in medical decision-making. The review team at HealthTrust always seeks additional levels of evidence for a particular product and consults with an appropriate physician advisor to confirm the information – specifically, when the product falls in a physician preference or clinically sensitive category. If we’re looking at a product in the orthopedic category, for example, we would do some of the background research and then have an orthopedic physician or surgeon review the studies we find, to determine if the provided evidence is reliable and applicable.

 

JHC: Ideally, what should such evidence demonstrate?

Schlosser: I think ideally the evidence focuses on how the product is going to be used in the clinical situations end users are going to face. Unfortunately, even studies that are otherwise of good quality don’t always apply to real-world patients. They pertain to idealized situations – academic settings and highly selected patients – not how the product is going to function in a community hospital.

 

 

JHC: In order to market their devices in the United States, medical device developers need 510(k) clearance or PMA approval from the FDA. Should the hospital or IDN consider the clinical evidence a vendor submitted to the FDA to obtain marketing clearance sufficient to bring a product into the IDN? Why or why not?

Dumond: When it comes to PMA approvals, the FDA requires clinical evidence. The PMA process applies to life-sustaining devices such as implantable cardioverter defibrillators, pacemakers and drug-eluting stents. The majority of other devices – from toothbrushes and condoms to hips, knees and cervical implants – go through the 510(k) process to establish substantial equivalence to something that is already on the market. Usually, that only requires bench testing. About 300 items go through the 510(k) process per month; each clearance takes up to 90 days and roughly $6,000 to complete. That compares with only a handful of PMAs, which can take up to five years and millions of dollars in clinical trials to substantiate. So depending on the pathway, the evidence submitted to the FDA may or may not answer an IDN’s question about how the technology should be deployed in its facilities.

The difficulty for hospitals is staying on top of all of the supplements made to those original PMAs. We recently worked on a white paper about neurostimulation devices, and two of the big manufacturers each had over 200 PMA supplements because something had changed since the original PMA submission. Supplements get issued for a variety of updates such as a new indication, changes in packaging or sterilization, or relocation of manufacturing. Each of these changes could raise new questions about the safety, efficacy or cost-effectiveness of the device.

Tarkington: It’s a lot harder to make side-by-side comparisons of 510(k)-cleared devices, because oftentimes there isn’t any clinical evidence. In those cases, we really have to hunt for case studies, and we may only find a journal report or two covering 25 patients, which puts us back to questioning the value of the evidence. In the absence of anything else, that’s what we’re forced to go on.

Cunningham: And many of the available clinical studies for 510(k) clearances include only animal testing. It’s hard to assess a product that’s going to be used on humans when all you’ve got from a clinical standpoint is testing on mice.

 

 

JHC: What’s wrong with studies conducted by the manufacturer, or on behalf of a manufacturer by a third party?

Garshnick: I think most studies done by manufacturers are subjective and don’t show true head-to-head comparisons with other products in the same category, nor do they apply to real patients in the field. It’s a struggle when the supplier is asking for premium pricing and yet can’t demonstrate that its product is superior to like products or an earlier iteration of the same technology.

Tarkington: For some of the PMA products, manufacturers will sometimes provide grants for studies through a third party. In cardiology, for example, the manufacturer of a new stent or pacemaker may also be financing the study. Those are very large, randomized controlled trials and there’s a lot of rigor around them so the supplier can’t influence the results.

Schlosser: The difference with 510(k) products, where suppliers are conducting post-market studies, is that there’s no oversight of the study design, as there is with PMAs. If you’re sponsoring a PMA as a supplier, the FDA reviews the trial design to ensure there aren’t biases. But when a vendor funds a 510(k) study, no one sees it until it appears in a publication. Suppliers are very good at influencing results by selecting the physicians who they know have good outcomes to participate in these post-marketing studies.

Tarkington: In their marketing, suppliers will claim their product is “new and better.” But by FDA definition, a 510(k)-cleared product didn’t make a substantial enough change to require a PMA. So if they make this claim of efficacy and superiority, it’s not necessarily for a proven, scientific reason.

The federal government has a website – clinicaltrials.gov – housing all PMA trials and some of the 510(k) filings. The government has reviewed the design of all of those trials. It’s a good place to look to see if a product has had a trial.
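For member teams that want to check trial registrations programmatically rather than browsing the site, a minimal sketch is shown below. It assumes the public ClinicalTrials.gov v2 REST API; the endpoint, query parameters and field names reflect our reading of that API’s documentation and should be verified before use.

```python
"""Minimal sketch: look up registered trials for a product on ClinicalTrials.gov.

Assumes the public ClinicalTrials.gov v2 API; verify the endpoint, parameters
and field names against the current API documentation.
"""
import requests


def find_trials(search_term: str, max_results: int = 5):
    """Return (NCT ID, brief title) pairs for studies matching the search term."""
    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={"query.term": search_term, "pageSize": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    trials = []
    for study in resp.json().get("studies", []):
        ident = study.get("protocolSection", {}).get("identificationModule", {})
        trials.append((ident.get("nctId"), ident.get("briefTitle")))
    return trials


if __name__ == "__main__":
    for nct_id, title in find_trials("drug-eluting stent"):
        print(nct_id, "-", title)
```

Even a quick search like this can show whether a registered, government-reviewed trial exists for a product before a value analysis team digs deeper.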

Garshnick: HealthTrust established a new Physician Advisors Program earlier this year to engage clinical experts around the country to analyze evidence-based data on medical device utilization and engage in discussions around specific product and service line categories. Their role includes providing feedback on new and future technology and treatment options, and identifying promising research opportunities. The common goal of physicians in the program is to improve the quality and efficiency of patient care by developing a clinical foundation for purchasing decisions.

 

JHC: Developers of innovative technologies sometimes argue that because their technology is new, they haven’t been able to document a track record on its effectiveness. How does the supply chain executive handle such an objection?

Schlosser: A culture change needs to occur around the way we look at new technology. In the past, we’d rush to bring new technology to patients and potentially adapt the way we used it over time as we learned more. That was an expensive process, because the new technology always had a higher price and wasn’t necessarily delivering better outcomes. We had to wait for time to elapse before we knew if the outcomes were going to be better.

We want to help flip that process around, so vendors are expected to invest the time, money and research to document clinical outcomes before their product is released into a live situation. A screening process should be set up by value analysis teams to look at the evidence critically and determine if it suggests that a product adds value and deserves to be brought into everyday use, or if it is still in the research phase and needs to be kept out until there’s better evidence behind it. HealthTrust can provide value analysis teams with support, evidence and interpretation of that evidence.

Cunningham: With backing from its professional societies, cardiology has for many years done a very good job of ensuring products come to market backed by large randomized, controlled and peer-reviewed clinical trials.

Tarkington: Even after these randomized controlled trials, the thought leaders will challenge the evidence, and I think that’s good for patients in the long run.

Dumond: When PMAs come out, value analysis teams at the facility level need to negotiate with their managed care payers to ensure the device is covered. If it isn’t, and the product is very costly, that could really hurt them operationally. On the Medicare side, the federal government will occasionally make an additional pass-through payment for a year or two after a device hits the market if the cost of using it is substantially higher for providers than the predecessor treatment. Sometimes suppliers are able to skip the FDA panel review process entirely, so teams lose the usual six-week window before full FDA approval in which to meet with their managed care payers and learn whether there will be a pass-through payment.

Tarkington: On PMA approvals, we prepare 12 to 15 evidence reports annually covering the economic impact. Although we won’t know the price of a new product until FDA approval happens, vendors may tell us if they think it will be priced the same as or higher than the product it’s replacing. Then, we can research whether they’ve talked to the government about additional payment, or at least what DRG the product is going to fall into, so we can make a good estimate. Our review might indicate a device will cost $1,000 more, for example, but it’s going to fall into the same DRG and get a $600 new tech add-on.
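As a hypothetical worked illustration of that kind of estimate (the figures below simply mirror the example above and are not actual reimbursement data):

```python
# Hypothetical worked example, mirroring the figures quoted above: a device that
# costs $1,000 more than its predecessor, maps to the same DRG, and is expected
# to qualify for a $600 new-technology add-on payment.
incremental_device_cost = 1000   # added acquisition cost per case
new_tech_add_on = 600            # estimated add-on payment per case
base_drg_payment_change = 0      # same DRG, so base reimbursement is unchanged

net_cost_per_case = incremental_device_cost - (new_tech_add_on + base_drg_payment_change)
print(f"Estimated net cost impact per case: ${net_cost_per_case}")  # $400
```

In other words, even with the add-on, the facility would absorb roughly $400 per case in this scenario.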

Schlosser: At HealthTrust, it’s the full-time job of our clinical research team to look at clinical evidence and engage physicians, and then assemble that information in a way that’s usable by our membership. They’re not only looking at PMA and 510(k) approvals; they’re also reviewing product recalls and other issues that impact patient care, in some cases working directly with suppliers and contract managers.

 

JHC: Do you typically have market share information to know if a product is going to cause a certain amount of disruption?

Tarkington: We were recently researching new stents, and it would have been helpful to know about their market share in Europe, where the products were already commercially approved, to help us make predictions about usage here. However, it’s very hard to get that information, so often all we have to go on is what the vendor tells us. In any case, “market share” in Europe means something different than it does in the States.

Garshnick: In Europe, you have a year to “test” a new product in the market, draft the outcomes and present those findings to regulators. The process for doing that differs country to country.

Schlosser: A vendor will often say a product has been on the market in Europe for a year and had great outcomes, but buyers don’t realize that it hasn’t actually gone through an approval process. So, unless you understand the nuances of how products get approved in Europe, it’s not really useful information.

Tarkington: I had a stent supplier tell me it has 30 percent market share in Europe, but it couldn’t break that out by country. So I finally asked, “Do you have approval in China?” “No.” “Japan?” “No.” Those are two big markets for stents. I still don’t know the denominator for that 30 percent.

Dumond: In terms of approvals, we’re usually a year behind Europe, and Japan is a year behind us.

 

JHC: To what extent can the hospital or IDN hold the vendor accountable for the performance of the device/equipment after it is in use in the healthcare facility? What if the hospital’s experience does not match that of the clinical evidence presented by the vendor?

Schlosser: This does happen, and hospitals absolutely should hold vendors accountable for device performance and for the claims made when the device was brought to market. However, to do that, they have to track outcomes. Fortunately, that is becoming more of a normal part of the day-to-day business of medicine. If a device is under-performing, hospitals should consider removing it from use until enough data is available to understand the discrepancy.

Garshnick: With many medical devices, you don’t really understand outcomes until a year or two after they’ve been implanted. And you may never have the full picture because patients who feel better probably aren’t going to go back to their physician to talk about it and physicians don’t have time to track them down for follow-up. So outcomes can be difficult to track and manage.

Dumond: You can also check the government’s MAUDE (Manufacturer and User Facility Device Experience) database for any adverse outcomes associated with devices and equipment. Issues are reported by facilities, clinicians, vendors – even patients; they include anything from skin reddening after a CT scan to death after heart valve surgery. The FDA monitors MAUDE, as do we, looking for patterns. It’s all public knowledge, but it’s very hard to tie the information together due to patient privacy protections. You may find 500 listed items for a product, but they represent only 200 actual patients. When a problem is detected, the vendor may initiate a recall and send out a letter to the affected physicians. For the most serious “Class I” issues, the FDA requires the vendor to immediately pull the product from the market. In my experience, if you keep your eyes open and know what to look for, it’s possible to stay ahead of the recalls – voluntary or otherwise.
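For teams that want to monitor MAUDE reports programmatically, the public openFDA service exposes the same data. The sketch below is a minimal, illustrative example; the device/event endpoint, query parameters and field names reflect our reading of the openFDA documentation and should be verified before use, and keep in mind that reports are not the same as unique patients.

```python
"""Minimal sketch: pull recent MAUDE adverse event reports via openFDA.

Assumes the public openFDA device/event endpoint; verify the endpoint and
field names against the openFDA documentation before relying on this.
"""
import requests


def recent_maude_reports(generic_name: str, limit: int = 10):
    """Return recent adverse event reports mentioning a device generic name."""
    resp = requests.get(
        "https://api.fda.gov/device/event.json",
        params={
            "search": f'device.generic_name:"{generic_name}"',
            "sort": "date_received:desc",
            "limit": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    reports = []
    for record in resp.json().get("results", []):
        device = (record.get("device") or [{}])[0]  # first device listed on the report
        reports.append({
            "date_received": record.get("date_received"),
            "event_type": record.get("event_type"),
            "brand_name": device.get("brand_name"),
        })
    return reports


if __name__ == "__main__":
    for report in recent_maude_reports("pacemaker"):
        print(report)
```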

 

JHC: To what extent can the IDN supply chain executive rely on others to “vet” the clinical evidence presented to him/her – GPO, professional associations (e.g., physician, lab, nursing, surgeon, etc.)?

Tarkington: HealthTrust has a team dedicated to doing that now. We’re a resource for member hospitals that have questions or want us to further research evidence presented to them by a supplier. As for professional associations, that’s where we frequently go to find or vet research. They might have guidelines, white papers or consensus statements about a particular technology we’re investigating.

Schlosser: My team evaluates different classes of evidence, and then our physician advisors tell us what it means in their actual day-to-day practice.

Garshnick: Even from a physician’s standpoint, clinically “better” is often subjective, because it is in reality a description of a feature, function or benefit of an instrument or tool. Physicians have preferred ways of using products and assign value to specific attributes. That’s why we consult with physician advisors from around the country, asking them what it is about certain products that makes them irreplaceable and, conversely, what is not an important differentiating characteristic. That’s more than the clinical evidence alone would tell us.

Dumond: It’s also noteworthy that when we attend national meetings, we don’t just go to the vendor fair. We sit in on presentations about late-breaking trials, where we hear comments from participating physicians that are often very different from what vendors are saying.

Cunningham: Many of the major suppliers can also be very helpful in that they release information about hospital and physician reimbursement, DRGs and CPT codes at the same time they release a new product. It’s a starting point for further digging. Understandably, hospital operators want to know if there will be a return on their investment for high-cost products such as TAVR (transcatheter aortic valve replacement).

When the TAVR procedure was introduced, many hospitals jumped on the bandwagon without considering the financial repercussions. The cost of the resources and products needed to perform a TAVR procedure far exceeds the reimbursement. Taking that into account, it is important to consider the competitive market, the demographic need for the procedure and the financial impact of instituting a TAVR program. According to our calculations, the cost of redesigning the OR alone is $5 million to $7 million. So sometimes we have to evaluate products in the context of the larger program of which they’re a part. It’s the only way to understand the total cost impact.
