Closer Examination

Given today’s economy and political climate, it’s no surprise that health technology assessment rests heavily on clinical and financial data.

“There’s a laser in my closet!” sounds like some sort of joke. But few administrators, department heads or supply chain executives would find it funny.

Expensive technology acquired in the heat of the moment, perhaps to appease an influential physician or to capture the public relations lead in a two-hospital town, can end up underutilized. In the closet, so to speak. (Sometimes, literally.)

With healthcare reform looming, few IDNs have the stomach, or budget, for such purchases. Today, they’re relying on data, communication and strategic thinking to guide the acquisition of devices and equipment. To borrow baseball parlance, it’s more like Moneyball than old-time, seat-of-the-pants scouting. Recent emphasis at the federal level on concepts such as “comparative effectiveness” and “accountable care” is fueling the trend.

“It’s all about definition of criteria and requirements,” says Perry Kirwan, senior director, technology assessment and capital planning, Banner Health, Phoenix, Ariz. “We spend an incredible amount of time on this process, but easily, the greatest amount of time is spent really determining criteria and what we require of the technology. We make sure we have that right, and that it works for everyone across Banner.

“Once we have that in place, then it’s very easy to do a comparative-effectiveness-type analysis. You’ve created the whole basis of how you will measure it. Without that rigorous work, how can you [proceed]?”
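
Kirwan’s criteria-first approach can be made concrete with a small sketch. The rubric below is purely hypothetical (the criteria, weights, and vendor scores are invented for illustration, not Banner’s actual requirements), but it shows why comparative analysis becomes straightforward once the yardstick is fixed in advance:

```python
# Hypothetical criteria-first scoring sketch. The criteria, weights, and
# scores are invented for illustration; they are not Banner Health's rubric.

# Weighted criteria, defined *before* any vendor is evaluated.
CRITERIA = {
    "clinical_outcomes": 0.40,  # published evidence of effectiveness
    "workflow_fit":      0.25,  # fits care processes across facilities
    "total_cost":        0.20,  # acquisition + service + consumables
    "interoperability":  0.15,  # integrates with existing systems
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

# Every candidate is measured against the same yardstick.
candidates = {
    "Device A": {"clinical_outcomes": 8, "workflow_fit": 6,
                 "total_cost": 5, "interoperability": 7},
    "Device B": {"clinical_outcomes": 7, "workflow_fit": 8,
                 "total_cost": 7, "interoperability": 6},
}

for name in sorted(candidates, key=lambda n: -weighted_score(candidates[n])):
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
```

Once the weights are agreed upon across the system, adding another candidate or re-running the comparison is trivial; the hard, time-consuming work is the argument over the weights themselves, which is exactly Kirwan’s point.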

No longer can physicians or department heads demand – and get – new technology simply because they had it where they used to work. “We don’t have enough resources to conduct randomized clinical trials in every conceivable category,” says Don Klusmeier, supply chain director, Sisters of Charity of Leavenworth Hospital System, Lenexa, Kan. “But people know we’re watching, and that we need more than marketing literature or personal intuition to carry the day.”

From the top
It’s no secret the federal government is pushing evidence-based medicine. No longer do the feds want to pay for procedures and technologies that don’t improve health.

“There is an urgent need for action to change how the nation marshals clinical evidence and applies it to identify the most effective clinical interventions,” wrote the Institute of Medicine in its 2008 report, “Knowing What Works in Health Care: A Roadmap for the Nation.” “The nation must significantly expand its capacity to use scientific evidence to know ‘what works’ in health care.”

For years, the Agency for Healthcare Research and Quality, part of the U.S. Department of Health and Human Services, has been studying the quality and effectiveness of technologies and procedures. But today, AHRQ is joined by a host of other committees and initiatives. Some examples:

  • The Patient-Centered Outcomes Research Institute, or PCORI. Created by the Patient Protection and Affordable Care Act of 2010 (the healthcare reform law), PCORI is a nonprofit organization designed to carry out research projects that provide evidence on how diseases and other health conditions can most effectively be prevented, diagnosed, treated, monitored and managed.
  • The Federal Coordinating Council for Comparative Effectiveness Research. Authorized by the American Recovery and Reinvestment Act of 2009 (also known as the stimulus bill), the Coordinating Council is charged with developing and disseminating research on the comparative effectiveness of healthcare treatments and strategies.
  • The Committee on Comparative Effectiveness Research Prioritization, which is charged with identifying health topics that should get priority attention and funding.

Hospitals and IDNs know they must do their part too. “The challenge to the organization and the supply chain is, ‘Are our business plans comprehensive and intellectually honest?’” says Klusmeier. “You have to look around and ask, ‘What are some of the white elephants we have inherited, and what can we learn from them?’”

IDNs could fill museums with technologies that looked promising when acquired, but which fell short of expectations, says Klusmeier. Take CT colonography, better known as virtual colonoscopy.

The technology attracted attention because it is less invasive than traditional colonoscopy. But after it was introduced, experts questioned whether the convenience of CT colonography outweighed the risk posed by the additional radiation exposure. Furthermore, questions arose as to whether patient convenience really is that much greater. After all, patients still have to clean out their bowel prior to the procedure.

“You see equipment and devices diffusing quickly in the absence of evidence of long-term efficacy, let alone superior effectiveness,” says Vivian Coates, vice president, information services and health technology assessment, ECRI Institute. She points to proton beam therapy and robotic surgery as two examples. “There’s a lot of hype, but there’s no evidence of long-term superior efficacy.”

The government and IDNs aren’t the only ones interested in trying to define the benefits of new technologies. “Payers are very interested, as they should be,” says Coates. “They don’t want to pay for products or procedures that don’t work very well, or for products that are more costly than existing alternatives, in the absence of evidence of superior effectiveness.” That’s why some payers are incorporating the results of comparative-effectiveness research into their clinical policies.

“We would advocate that testing against existing technologies be part of what happens during the development and approval phases [for new technologies],” says Susan Pisano, spokeswoman for America’s Health Insurance Plans, the Washington, D.C.-based association for health insurers. “And we would support continuing post-market surveillance.”

First adopters
Supply chain executives, clinicians and administrators know that the newer the technology, the less clinical data will exist to support its effectiveness.

“Banner Health doesn’t want, in all cases, to be a first adopter,” says Kirwan. “We don’t necessarily look for things on the bleeding edge, but things that are two or three years along the product life cycle, proven technology with some data attached to them and supported by clinical evidence.”

That said, Banner Health’s clinical innovations group is charged with investigating so-called “disruptive technologies,” that is, those that represent a significant departure from the status quo. “If the benefit or perceived benefit [of the new technology] is greater than the risks, that can help you take the plunge,” says Kirwan. But at Banner Health, like many IDNs today, those risks are not taken lightly. “It’s tightly managed, it doesn’t happen randomly, a lot of people know it’s going on, and we have definite expectations about what will happen in the deployment [of the technology],” he says.

Moving target
Evaluating the effectiveness of devices and medical equipment is more complicated than doing the same for pharmaceuticals, points out Coates. “Device models are always changing, so they’re harder to study. A study can start using one model of a device, then the manufacturer may come out a year later with a new model, so it becomes a moving target.”

Nor can IDNs expect the Food and Drug Administration to offer evidence of a device’s effectiveness, she adds. Many medical products are cleared for marketing through the FDA’s 510(k) process, which merely establishes that the device is “substantially equivalent” to a device already on the market (a so-called “predicate device”).

‘Rational clinical use’
Like ECRI, the University HealthSystem Consortium has been assessing health technology for a long time – 20 years, says Joe Cummings, manager of technology assessment. “We were really one of the early adopters of the concept of technology assessment,” he points out.

The mission statement of UHC’s Technology Assessment Program calls for it to “keep UHC members at the forefront of technology assessment, acquisition, management and rational clinical use,” says Cummings. “Even if you do acquire a technology, how do you use it once you get it?” he asks. “We always touch on patient selection criteria, that is, who are the appropriate patients to use it on? Then we make recommendations for our members.”

One thing that has changed over the past 20 years is the kind of reports UHC produces for its members. “Technology assessment in its pure form [results in] reports that are many hundreds of pages long, are highly labor-intensive, and are the be-all and end-all of any given topic,” says Cummings. “Some groups and agencies are still doing that.” Although UHC can still produce such reports, the organization has focused more recently on producing shorter, more focused “tech flashes.”

Of the more than 8,000 technologies cleared for marketing in the United States every year, UHC considers between 400 and 500 of them to warrant some kind of technology assessment, says Cummings. That’s beyond UHC’s capabilities, which is why the organization signed an agreement with ECRI in November 2009 to provide its members access to ECRI’s library of current technology research, to supplement UHC’s technology assessment activities.

The elephant in the room
In any discussion about health technology assessment, the elephant in the room is cost. When the Obama administration allocated $1.1 billion to comparative effectiveness research in 2009, critics sounded the alarm that such research would lead to “death panels” and rationing.

“I think this has to be worked out politically,” says Cummings. The Centers for Medicare & Medicaid Services considers cost to be off limits in its coverage determinations, he points out. Furthermore, PCORI is forbidden from including cost-effectiveness in its deliberations.

But Cummings belongs to an international organization – the International Society for Pharmacoeconomics and Outcomes Research, or ISPOR – whose members widely believe that health technology assessment cannot be done without factoring cost into the equation. “There are definitely conflicting opinions about this,” he says.

“The cost issues aren’t going to go away, even if the current political climate is deeply mistrustful of comparative-cost-effectiveness research,” he says. “It won’t be done by PCORI. Instead, the people who will do it are those in the private sector.”

Today’s emphasis on comparative effectiveness and evidence-based research is “raising awareness of the gaps that exist in evidence, and the need to do more primary studies, such as head-to-head trials,” says Coates. “That’s a good thing. But I see a lot of countervailing forces, a lot of detractors; people who misinterpret what comparative effectiveness research is trying to do. I don’t know how that will play out.”

What’s an IDN to do?
Most IDNs lack the manpower or resources to conduct exhaustive research on the comparative effectiveness of various health technologies. Rather, they must leave that heavy lifting to organizations such as ECRI, UHC, healthcare alliances or the AHRQ. But that doesn’t mean they can drop the ball.

The task for IDNs is to focus on how to implement the findings of high-quality health technology assessment. It’s not necessarily a one-size-fits-all deal, says Coates. “For example, whether a hospital would find silver-coated catheters useful may depend on their care processes,” she says. And 64-slice CT imaging may be appropriate for a hospital that sees a high volume of patients at moderate risk for coronary artery disease, but not for one that doesn’t.

“My reports are used as a tool,” says Cummings. “There may be other factors going on at your hospital. Maybe you’re a center of excellence, or you’re [acquiring technology] because of your teaching mission. That’s fine, as long as you go into it with eyes wide open.”

IDNs should have some kind of process and committee in place for technology assessment, he says. That committee should have institutional support, a formalized process for how it will conduct its research, and criteria for selecting topics to consider.

“You need C-suite support, undoubtedly,” he adds. The same goes for active participation from key clinicians. “I’ve seen well-run committees run out of the purchasing department, but they had strong support from the C-suite and…a lot of participation from the clinical side.”

Examining the design of care
It’s clear that IDNs have taken note of today’s emphasis on health technology assessment and comparative effectiveness. “Regardless of what you believe about the politics of healthcare reform, it’s clear it will be a different day in the future,” says Klusmeier.

“To me, ‘value analysis’ is about price and utilization, and should you use a certain technology at all – does it get the desired result?” he says. “Technology assessment is more along the lines of, ‘Is it ready for prime time, and are there other viable alternatives?’ ‘Product evaluation’ connotes, ‘Does it work?’ without implying value or identifying alternatives.”

Although the term “comparative effectiveness” is hardly on the tip of every tongue at SCLHS, the IDN is indeed moving in that direction. Under the direction of Peter Wong, Ph.D., SCLHS’s vice president of quality and safety, so-called “strategic sourcing teams” are re-examining the design of care.

Physicians specializing in cardiology, spine, orthopedics, oncology and anesthesia are sitting down and identifying key differences in the way they practice medicine, and the reason for those differences, says Klusmeier. They are considering not only the impact of those differences on patient care, but on the cost of delivering that care. “Given the financial times, if we’re trying to live within Medicare rates, we have to ask ourselves, ‘How will we [continue our mission] consuming the resources we currently do?’” At press time, Wong was recruiting a director of pharmacy and technology to make the health-technology-assessment process more formal.

“I think in some of these categories, we’ll get to the point where we can really start to look at incremental benefits or level of effectiveness, and what is the cost of that?” says Klusmeier. If a new technology is deemed to yield only a minuscule increase in effectiveness, IDNs may take a pass on acquiring it.
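
A toy version of the arithmetic Klusmeier describes is the incremental cost-effectiveness ratio (ICER) used throughout health economics. The figures below are invented for illustration and are not SCLHS data; the point is how a small effectiveness gain at a large incremental cost becomes visible once it is expressed per unit of benefit:

```python
# Toy incremental cost-effectiveness calculation. All figures are invented;
# this is the standard ICER arithmetic from health economics, not SCLHS data.

def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra unit of health effect (e.g., per quality-adjusted life year)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: the new device costs $40,000 more per episode of care
# and adds 0.02 QALYs over the incumbent technology.
ratio = icer(cost_new=65_000, cost_old=25_000, effect_new=1.52, effect_old=1.50)
print(f"${ratio:,.0f} per QALY gained")  # -> $2,000,000 per QALY gained
```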

Maintaining a leadership position
After 21 years of technology management at Banner, Perry Kirwan believes he has a fairly broad view of health technology assessment. “I’ve had a lot of different looks and perspectives on technology acquisition, maintenance and support, and life cycle management.”

Two years ago, Banner launched a formal technology assessment program called the Clinical Technology Advisory Process. “Our concern was to stay competitive and state-of-the-art, to maintain our leadership and innovation position, and to continue to do that with limited resources,” says Kirwan. There’s no doubt that financial considerations were part of the reason for creating CTAP. “But we also wanted a means of assessing where we were in the market in terms of technology. We are the largest healthcare institution in Arizona, and we want our physicians and patients to know we are a leader in healthcare.”

With 23 inpatient facilities in seven states, Banner has in the past found it difficult to prioritize spending. So the IDN identified three focal points within the technology assessment group:

  • New technologies. “In the past, we didn’t have a consistent way of assessing whether new technology is appropriate, and if it is, how to introduce it to the different facilities,” says Kirwan.
  • Existing technologies. Banner wanted to put into place a process to examine technologies already in the IDN, and to forecast capital equipment replacement needs.
  • Standardization of technologies. The obvious financial reason for standardization is to limit the number of suppliers and leverage Banner’s volume accordingly, says Kirwan. But there is another reason, just as important. “Process control theory says that with more suppliers and data points, you introduce more variability in your processes,” he says. “If you streamline those, your processes – and therefore, quality – become more controlled, you have a better means of understanding where you are at any point in time, and when you implement any kind of change, it’s easier to gauge its impact.” (A brief simulation after this list illustrates the point.)
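
Kirwan’s process-control point can be illustrated numerically. The sketch below (with invented figures, not Banner data) simulates a measured attribute, such as a device dimension, sourced from a single supplier versus a mix of three suppliers that are each internally consistent but centered slightly differently; the mixed stream is measurably more variable, per the law of total variance:

```python
# Invented numerical illustration of Kirwan's process-control point:
# mixing internally consistent suppliers adds variability to the pool.
import random
import statistics

random.seed(0)

def supplier(mean: float, sd: float, n: int = 1000) -> list[float]:
    """Simulate n measurements of one supplier's product (e.g., a dimension)."""
    return [random.gauss(mean, sd) for _ in range(n)]

one_supplier = supplier(mean=10.0, sd=0.1)

# Three suppliers, each as consistent as the first (same sd) but centered
# slightly differently. The mixture is more variable than any single source.
three_suppliers = supplier(9.8, 0.1) + supplier(10.0, 0.1) + supplier(10.2, 0.1)

print(f"one supplier:    sd = {statistics.stdev(one_supplier):.3f}")     # ~0.10
print(f"three suppliers: sd = {statistics.stdev(three_suppliers):.3f}")  # ~0.19
```

The wider spread is what makes change harder to gauge: any shift after an intervention is buried in supplier-to-supplier noise.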

Banner created an oversight committee called the Clinical Technology Advisory Group, or CTAG, incorporating a number of corporate positions, because health technologies cross many disciplines. Chaired by Kirwan, the group comprises representatives from capital management, materials management, information technology, the clinical innovation department, design and construction, and strategic planning.

The advisory group takes a big-picture view of new technology. “Their main role is to make sure that any given technology we’re looking at is consistent with our corporate and strategic goals,” says Kirwan. The group may do a high-level literature review of the technology under discussion.

One example is biplane technology, a digital X-ray technology that uses two mounted, rotating cameras to capture images simultaneously from two angles. “A number of facilities were looking at it, but they didn’t have an accurate self-assessment of how they would use the technology,” he says. “Because we have the CTAG group, we were able to ask the critical questions: ‘Is this valid? Is this really going to happen?’ We were able to suggest other ways that, if they wanted to proceed, they could keep the room busy and fully utilized.”

If the advisory group determines a technology merits further consideration, it passes its findings to an ad hoc group of what Kirwan calls “modality experts,” that is, people who will interact with the technology on a day-to-day basis. “These are the subject matter experts, and they do what I would call the meat-and-potatoes work of the process,” he says. They establish the criteria by which competing technologies will be judged, study potential vendors, listen to vendor presentations, and clinically evaluate the equipment, either through live clinical trials or site visits. Then, under the oversight of the advisory group, the ad hoc group turns its findings over to materials management for contracting and acquisition.

Initial misgivings
Initially, vendors felt threatened by the process, fearing it would lead to lower margins on their technologies. Despite their misgivings, most are now on board. “What they’re saying to us is, ‘[The process] makes it easier to sell to you, because you’ve defined the requirements and criteria. We know what you want.’”

Administrators met some internal resistance as well, says Kirwan. “When people hear about something like this, their immediate response is, ‘You’re trying to take decision-making authority away from me,’” he says. “So at the beginning of these sessions, we address that concern upfront. We tell them, ‘No, that’s not the case; in fact, this process doesn’t work without you.’

“One of my measurements, in terms of effectiveness, is a very simple one: ‘Were you able to reach consensus and have the various teams buy into the decision? Did they walk away feeling it was a good process?’ The [ad hoc] teams were dubious when they started, but for the most part, they came out as believers by the time we ended.”

Another gauge of success is the fact that Kirwan’s group has more new-technology requests than it can handle. Banner is bringing on additional manpower to accommodate them.

Kirwan adds one last point, which he calls extremely important. “This process does not work at all without the senior management team of Banner supporting it. There’s just no question. When things get sticky, having senior leadership say, ‘This is what we’re going to do for a while’ is incredibly important. That’s been the key to success here.”
