Implementation Science Consortium in Cancer (ISCC) – Day 2

SARAH: Good morning, folks! I’m going to welcome you to take your seats. We’re going to get started in just a moment. Today’s session is being recorded; all remarks
will be captured for the online archive. As a quick reminder for folks in the room
when you are contributing to discussion: Please use your table mics so that our online participants
can hear the discussion. I’d also encourage folks to join in on Mentimeter. We had a wonderful response yesterday, and
we’d like to continue the trend. So—thank you for your feedback in that arena. And with that, I will have David come up. David? DAVID: So, thank you all for coming back. So, first off, you’ll see the Mentimeter—once
again—up, and it’s there because we actually want to learn from your experiences, and recognize
that some people may not have time to say—because they’re too busy networking—what their big
takeaway was from day one. We’re going to keep this open for a while,
just so that you’re able to, you know, let us know how things went, any feedback, any
sort of key thing that you left here and thought about, and—you know, again so that we can
learn—because the hope is that, as we try different things, [we] will get your take
on what worked well, and, you know, maybe what didn’t or how we can make things better. So, I see. Okay, great. These will start to populate at some point,
we’ll shift over, because we’re about to get into the report backs from the afternoons,
but before we get to that, I just wanted to sort of lay out: where are we going today? In case anyone left yesterday or came in this
morning saying, “Okay, well, that was interesting, but I’m really not sure where this is all
headed.” Right? So, the morning. Again, we’ll start out and you have your agendas,
but I’ll just quickly walk through that. We’ll start out with the summary presentations—that
our facilitators from the seven different groups—will give us, just to walk through:
What were the salient points from the discussions as well as really the charge for the afternoon—right? So, each group is going to present a set of
key themes—key next steps—that we’re going to want you all, in your afternoon, to work
collectively to try and flesh out. Right? So, we’ll have those report backs, we’ll
then get into another sort of fireside-chatty, town hall kind of thing. We tried to get a graphic, and—I think we
failed, right? That had—yeah, we were going to have some
interesting graphic that had, like, a fire, where out of the fire was coming these, you
know, scientific products or something like that. But sadly, our limited budget for art was—was
exceeded very quickly. But the fireside chat-like thing is basically
another sort of town hall discussion around an area that we continue to find to be sort
of captivating to us, but needing a lot more thought. So, that’s this notion of laboratories—so
to speak—for implementation. Of course, that’s one of the small groups,
but we thought that we might all benefit from a collective discussion about that. We have Noah Ivers—from University of Toronto—who’s
going to give us a stimulus presentation based on some of the work that he’s done with colleagues,
and then we’ll have a panel discussion, and feedback, and Mentimeter, and all of that
fun stuff, again—to learn as much as possible from all of you—that will then get into
the lunch break. Again, we do want you to take advantage of
the networking—of the potential collaborative opportunities that will happen from 11:45
to 1:30, and then from 1:30 to 4:30, is going to be what we hope to be a fruitful opportunity
for those small groups to then further divide and target the different ideas that the facilitators
are going to present to you. It’s really important to us that ideally,
at the end of today, we walk out with a number of different, sort of fleshed-out ideas. We have a template that I think you all were
sent yesterday in your email if you’re accessing it. And that template is what will enable
us tomorrow to be able to prioritize, to be able to take a look at the different ideas
to do—sort of a gallery walk. And a prioritization which, for us, is very
important. Because at the end of this, our NCI team is
wanting to see what is—you know—what is already sort of being developed. How can we strategically allow these ideas—these
sort of public goods, as we’re calling them—for the field—to move forward? And so, really important that you’re willing—and
you may find in your small group, “This wasn’t the idea that I came up with.” We’re hopeful that you’ll say, “We’re all
a larger team here,” and so—even if this wasn’t my idea, my expertise, my experience,
my thoughts are really important to contribute to what will be this one-page template. So, I’ll ask you to help us by dividing and
conquering what ends up being a large agenda. So that at the end of these couple of days,
we’ll have a fair number of these one-pagers to try to reflect on, and then figure out
ways that we can actually turn them into tangible products. And so, there will be a lot more on that down
the road. But today, this afternoon, this will work,
if everybody comes to your various tables with this willingness to take on this idea—to
put it through its paces—that’ll be the first half of the afternoon. The second half of the afternoon will be coming
back together as a large or small group, and talking through—you know—how
far did you get with your different ideas, getting comments, getting questions from the
other members of your small group, and then taking a further step back and saying, “Okay,
what are we missing?” “Are these the kinds of ideas that would
really help us if we wanted to do research in this field?” Does that make sense? Okay. And then again, for folks who are listening
in online, all of the presentations tomorrow morning will be reflective of what those small
groups in the various rooms— here at the NCI—accomplished, and you’ll also have an
opportunity to weigh in, to prioritize, and to help us think through, which direction
should we move forward? And so everybody’s participation—and again,
I’m seeing the comments as they come in. That’s incredible—lots of really important
things are continuing to bubble up, and our whole point of these two and a half days is
that the more that we can bring to the surface, the more that we can learn from, the more
then that we can reflect back to us all in the room, us all online, and anyone else who
is interested in joining—where the field can go, is what we absolutely want to do. So, keep these things coming, we’ll shift
them over to the PowerPoint slides. And so now, we’ll move into a sort of round
robin—hearing from our seven different groups. Reflections on what the—what yesterday afternoon
provided. I think we’re good. Just curious if there’s any quick questions—any
questions about anything I’ve said. That’s great. That’s remarkable. Thank you. Okay, so I will step to the side, and
Sarah’s going to run control. SARAH: Precision health group is welcome to
come to the podium. We’ll have your slides up in a moment. ALANNA: All right, so we get to go first. So—use the clicker, or is there … okay. All right. So. There we go. How about this? There we go. I can use the keyboard. Okay. So, just as a reminder: precision health,
big data. We talked about this yesterday. Just making sure that we’re talking about
the broad definition of precision health and big data—using as much data as possible. And while that does include genomic data,
it is not limited to genomic data. And that this is the data from multiple fields,
geographic information, sociodemographic information—as much information as possible on patients, community members, or populations. That idea of delivering the right intervention to the right individual or population at the right time, with the goal to improve health. So, again, big data encompasses all types of big data—clinical, billing, mobile health, whatever—all the data that we can use for individuals. And so, we had some really fruitful discussions
with our first two groups. So, thank you, all of you who participated in those. Our major thematic areas that we came up with
were some needs, in terms of methods—how do we do this? A lot around infrastructure and access, as
well as capacity building training. And context—and particularly around what
is context, operationalizing context, defining context—because we need that not only at
the individual, the organizational, the population level—hugely important to do this work in
precision health and big data. Also, talking about cross-disciplinary collaboration—and
while I know that kind of makes sense, we think about our teams and everything, but
we were also talking specifically about the informatics. The people who are in there, digging in the—the
big data—creating the big data sets, and the clinicians, and we all talk really different
languages and maybe don’t even understand each other. And then one big overarching thing was just to make sure that whatever we do in this space of precision health and big data does not exacerbate, cause, or continue any disparities. So, some of the things that we came up with as our main ideas to expand on today—and that we will be putting to those of you here today: one of the ideas was a scoping review of the implementation science that is included in precision health and big data studies, putting them on the continuum of cancer care translation. And so basically, expanding on what’s already
been done by Megan Roberts and others looking specifically at genomic medicine, but expanding
it to our larger definition of precision health and big data. Also, along the lines of that—where are
we? And that is this review of also what works
in what contexts—so, sort of an organizational phenotype-type thing. So, thinking about precision health, in terms
of organizations, populations, communities—what are those contexts? Some sort of list, or dictionary, or policy
that even talks about—that identifies—what is that contextual information that we need? And that must be included in these big data
sets, or other publicly available data sets, or EHRs, to develop tools or resources for
use so that we can have use cases, so that people can see examples of how to use big
data to monitor precision health. And also to facilitate ways to facilitate
this cross disciplinary collaboration that we’re talking about. Again, the big data people, the people on
ground, in the informatics, and the clinicians, particularly in everybody in between. So some, just some quick additional take-homes
you know—what we really came up with is, we’re really at the beginning and we need to really identify where we are. What do we have, and what do we not have? So, there’s really a lot of meat here. Again, this major theme of cross-pollination,
between all of these different disciplines—precision health and big data overlap with almost every
other group in here. And again, that crosscutting theme of just
making sure that we don’t—that we—our goal of eliminating disparities instead of
exacerbating them or continuing them. And we did have this idea of well, again,
because we’re dealing with big data—make sure we don’t forget about that idea of data privacy and confidentiality. And I think that’s it. SARAH: Awesome. Thank you so much. And so, we have time for—if you hold on
for just a second—economics, I would invite your speaker to come up, and we have time
for just one or two brief comments if anyone has any thoughts about the precision health
slide deck that you just saw. And thank you—please do use your microphone. GRAM: So, my concern still sitting through
the group and hearing this is, we’re talking about big data applying to individuals, but
we’ve got a big chunk of our population that doesn’t even access healthcare, or lives in rural areas where there’s no public health data. We’ve somehow got to solve that piece, or
the big data isn’t the solution to delivering precision health. ALANNA: No, that’s a good point. So—if I can rephrase that—is making sure
that we have—so I put that under the making sure we don’t exacerbate the disparities,
because again, if we’re looking at big data, but we know we have a huge hole of population
that we don’t have data on, that could lead to that. And so that’s a very good point. And I think as we do the work again on the—the
context, and—and everything of just understanding where we are, and what we have, and tools
of what big data sets exist, we can also identify who’s not in those data sets. And I think that’s a good point that you made,
that we should make sure that that is a part of this identification process, of: what do
we have and what do we need? SARAH: Awesome. One quick comment. PRAJAKTA: Yeah, and I think you would be surprised
that similar discussions happened in the health disparities and context group. So, there are opportunities to kind of talk
together about some of that stuff. ALANNA: Yeah, that’s again why we think it’s
very crosscutting. SARAH: Amy, did you want to grab a mic? Right in front of you. We all need more coffee. It’s okay. AMY: This is a great, great summary. I just had a question—to what extent—I
understand, at least in our experience in the VA—is that we have precision medicine
companies knocking at our door every day, and basically saying, “This test will help
you,” you know, or, that test. And basically, they’re marketing to our
providers—our frontline providers. To what extent do you see implementation science
informing provider behavior change around the use of precision health—particularly
to really understand how it’s actually being used in clinical care? ALANNA: No, that’s a really good question. And we did touch on that a little bit, in
that, in thinking through—this is really a space where things are moving so fast. And when we talk about implementation science
and the power of it is, it’s the place where things are moving faster than we have
evidence to keep up with. So, it’s not like we have an evidence-based
intervention here on the shelf. We can take it off the shelf and figure out strategies
to implement it in different places. That’s—that’s not what we’re dealing with—with
precision health and big data. We’re dealing with new things coming in every
day, and we have to help our providers and work with our providers to really think through,
OK, not only: “Yeah. Sounds like a great idea. You could identify all these people or new
things with this new technology,” but we have to look at it and see if it really works
in our context and—not only do we have to see that—if it does work, and it does do
something. Great. But we also have to get it out there and measure
these different constructs. So that other people can use it too. And we can advance the field. SARAH: I don’t know that Alanna knew that
I was putting her up to answering these questions. So, thank you. Wonderful job. Jasmin. If you want to come down. Part of the sort of the magic behind this
is that we are taking notes during this, we are taking your feedback and course corrections,
and we are incorporating that into our meeting proceedings. So, it is very helpful when you have these
kinds of questions. We don’t necessarily have an answer for all
of them in this moment, but we are capturing them and are going to work that through as
we continue to move this forward. So Jasmine, please. JASMIN: Good morning. So, I’m here to report back on the economics
working group, and we had a lot of really great enthusiasm and energy in terms of people’s
comments. Just as a reminder, the focus of this work
group was to highlight the paucity of methods and applications for economic and cost-effectiveness
analyses, and we’re really trying to understand kind of what is needed in the field to advance
it, both in terms of the scope of implementation and intervention costs, and who bears them. And in terms of the major themes from our
discussion, we first got to consensus in terms of what are identified barriers to capturing,
measuring, and analyzing costs. And what came up a lot was, you know, moving beyond the societal perspective and highlighting the perspectives of patients and other small-scale stakeholders, like local clinics—cost-effectiveness analyses need to respond to what their needs are in understanding costs and determining whether they should adopt and implement programs. Also the need for standardized measures and approaches; a lack of shared nomenclature, which hampers communication about costs when engaging stakeholders through the pre-implementation, implementation, and interpretation phases; and then really a fear about doing cost analyses right, due to a lack of capacity to include economists on the team, and what they could be doing in terms of measuring costs concurrently with conducting their implementation science projects, so that it could lend itself to a broader cost analysis after, and to linking with other folks later. So, in terms of our main ideas to expand that
we’re hoping for help on, one is standardized measures and approaches and tools for cost
assessment. A second one is tools to engage stakeholders
around cost collection. And that was the communication piece that
we talked about, and whether we need guidelines for reporting costs specifically for implementation
science studies. We have guidelines for cost effectiveness
analyses, but not in the context of implementation science. A framework and a guide for clarifying cost
analysis goals and conceptualization, and which measures are appropriate to capture,
given the goals of the analysis to help people move through the various phases of conducting
the project. And then also thinking through how we might
build capacity through smaller training, short courses, and/or improving networking among
teams, etc. So, on that note—those are the things that
we’ll be brainstorming more about later today, and I don’t want to say this, but—if
anyone has questions? SARAH: We do have time for one comment or
one quick question. Thank you Jasmine. And I would also encourage you—if you are not the person who’s going to raise your hand first—to put your comment in on
Mentimeter and so, if we could throw it to the Mentimeter slide for a moment so folks
can see the code—we do have time for one quick comment. Or maybe, Jasmine, you’re off the hook! I think you’re off the hook. Sounds good. And again, please do submit your feedback
on Mentimeter, and yes—rapid cycle designs, you’re up next. Sorry Brad, I’m going to cut you off real
quick—the Mentimeter code is 17 35 47. So again, if you have feedback about this
morning’s session, this is also the opportunity to submit to this Mentimeter. Thank you. BRAD: So, I will practice what we preach and
go through our presentation rapidly, because we covered a lot of ground. So, our goal, our mission, our charge, was
to provide guidance and help our field do our work faster and better. And there are two main categories of motivation—background
issues that really drive our work. The first we might view as the two original
sins of the field of implementation science—one of which Rinad Beidas covered for us yesterday. And that is the fact that our field is founded
on the idea and the critique of our clinical research colleagues who publish their work
and walk away, and assume that it’s either self-implementing or that —it’s somebody
else’s problem or somebody else’s job. We suffer from the same challenges, Rinad
pointed out, that we too often publish our work. We too often conduct efficacy-oriented studies
that show that something can be effective in a sample of healthy, white males with no
comorbidities, and a sample of high-resource settings with lots of grant support, and so
on. So, we need to make sure that we are conducting
our work in a way that’s actually useful, and used by our decision-makers and our policy
and practice colleagues, and that requires that we move through our projects much more
rapidly. The other main background issue in motivation—we
might view as our third major sin. And that is the fact that we were all trained
to think that the RCT is the gold standard, that we need to achieve internal validity,
that we need to generate evidence. And, in fact, that’s not at all the case for
implementation strategies, which are complex health interventions rather than simple interventions. We need rapid cycle approaches to allow us
to deal with the fact that our interventions can and should be modified and tailored, and
we need to derive and extract evidence very rapidly from our studies so that we can tailor,
we can modify, we can adapt. Russ talked about the need for us to unlearn
many of the lessons that we learned in grad school. We know that concepts such as manualized interventions,
fidelity, evidence-based intervention strategies or implementation strategies, which ultimately
don’t exist in our field—all of those are concepts we’ve internalized because of our
training as researchers, and because of our desire to emulate our clinical research colleagues. But that’s not the world that we live in and the world that we work in, and we need to recognize the challenges of complex health
interventions. And we hope that rapid cycle approaches to
research are one of the solutions to these challenges. So, several major themes emerged from what
was two sets of very rich discussions yesterday. First of all, what does rapid cycle mean? What are rapid cycle approaches? We view them not just as a PDSA-type designs
and others that are study designs, but instead approaches that apply to all faces of research. We spent some time talking about ways in which
we can gather outcome data more quickly. Ways in which we can share the results of
what we do more quickly. So, each of the phases of the research process
is amenable to compressing the timeline, accelerating the process, and that’s really the way that
we viewed our mission and our goal: to view the entire set of phases. We talked about the fact that the selection of a rapid cycle approach—its feasibility, its relevance, its applicability—varies by the type of research question, by the type of intervention. We know that very simple interventions—we
talked about the example of Web design and Amazon’s webpages and small tweaks that can
be made very rapidly, very simply, and where data and evidence can be extracted very simply—whereas
a major redesign of a healthcare delivery system is not the sort of thing that can be
modified very quickly and studied in the same sort of rapid cycle approach. So, thinking about the kinds of research questions
for which rapid cycle designs are amenable and suitable is one key theme. The context matters, of course—some sites
are not as nimble as others. Some don’t have the kind of electronic data
that we need for rapid cycle data collection analysis, and other factors. We spent some time talking about the fact
that the debate and the discussion about how improvement science and implementation science
relate to one another, and how they offer solutions to some of the challenges and gaps
in each field is closely related to this broader issue of rapid cycle approaches. But also again, as I indicated in the slide on our motivation, research-practice partnerships and our need to work closely with our policy and practice colleagues, our operational stakeholders, and to do our work more rapidly—concepts of embedded research and stakeholder-engaged research. These, too, are closely related to our broader
theme. And we also touched on the fact that it’s
not just us in our research designs and approaches that are the solution. But instead we need to think about and innovate
funding timelines, peer review standards, and expectations of rigor. Essentially to make the world safer for those
of us who prefer to do implementation science in a very iterative manner, rather than the
five-year RCT. So, those are some of the key themes. We had a number of ideas that we collapsed
into three broad categories that we hope to work on this afternoon and over the coming
months. First of all, we need to develop the toolkit. We need to identify the full range of rapid
cycle designs and approaches. We need to draw from other fields, such as
information technology industry, but we also need to assess their relevance and their value,
their applicability to implementation science. Not all of the existing approaches from other
fields are necessarily useful for us. And again, we need to think about the full
range of phases, not just the study designs and methods. Once we’ve developed that toolkit, we then
need to provide the guidance and provide the training and help ourselves as a field and
others to use those tools. We need to develop the algorithms, and the
crosswalks, and the tables that show us which rapid cycle approaches are applicable to which
types of research questions and goals. We need to know something about the requirements to use an approach. What do we mean—or need, rather—by way of the frequency of events? A rapid cycle approach for an event that only
occurs once every other day is not likely to be as useful as one where the event occurs
several times a minute. So, understanding the requirements, but again,
developing guides and how to use these approaches. And finally, we’d like to practice what we
preach and recognize that developing the toolkit and publishing it and developing some training
and dissemination strategies is not enough to achieve the practice change that we need
to achieve, and so we need to launch an intensive multilevel, multicomponent effort that will
change our field, our own beliefs, and knowledge, and attitudes, and practices, as well as those
of the key stakeholder groups that constrain and influence what we do. The funding agencies, the peer review committees,
journal editors, and so on and so forth. Just a few additional things, most of which
tie back to our three main goals or projects. The idea that the feasibility and applicability of these approaches vary by setting: not all sites have EHR data, and even those of us in VA and Kaiser that do have the EHR data have discovered—sometimes a little bit too late—that the data are not quite as valid as we hoped, and we might have been better off collecting primary data rather than using incomplete and invalid data. But the ability of different systems
to use these rapid cycle approaches varies, and we need to recognize that. We also need to know that achieving our overall
goal, which is more rapid progress from identifying a question to producing the evidence, the insights and guidance, or the findings, doesn’t just require conducting the study more rapidly. We can begin to release findings earlier to
generate guidance and benefit earlier, we can measure proximal outcomes for which we
only have to wait a few days to see the effects of our intervention, rather than waiting till
we see the mortality outcomes several weeks, months, or years later, and we can also—we
hope—take steps to reduce delays in regulatory processes. And finally again, reinforcing the idea that
progress requires that we apply our implementation insights and not just take one step to develop
and disseminate knowledge, but instead work actively to try to promote its use and its
implementation. So those are our key points, and I’m open
to any questions unless I exceeded my six minutes. SARAH: No, that was fabulous. I’m so impressed with everyone sticking to time. We have time for one comment or two quick
questions. Please raise your hand and use a mic if you’d
like to dive in. In the meantime I will invite implementation
laboratories to come up. BRIAN: Thank you all. SARAH: Thanks Brian. Hustle, Russ. Great hustle. RUSS: Good morning. Yeah. I wasn’t quite clear if David was going
to do this or not? Plus, I needed my exercise. So, let’s see. Here’s my slide advance here. That’s good. First of all, I just want to say—reinforce
what Alanna said, to start with—I think the incredible opportunity, as much as working, drilling down on our seven areas, or silos, is going to be the interface among them, and already I’m struck by the complementarity and the potential for crosscutting issues and things, so we don’t all just stay in our silos, like we do in so much of science. Okay. Yeah. Thanks. Brian’s work was so important that I’m just going to say the same thing that he did. I must be going backwards, I think, is what’s
going on here? Technology challenged. Okay. We had incredible discussion in both of our
sessions—again, which I think were compatible and complementary, but a little different—and
some really robust comments that we started. David and I kept focusing the group here,
and I think we addressed most of these at some point during our wide-ranging, unstructured
discussion, but key points we wanted to have our group focus on were: what are the characteristics
of successful implementation labs and what examples do we have out there?—so we don’t
just reinvent the wheel. And think, what are the lessons learned? And that sort of thing. And Amy, that’s where QUERI came up, like, about a hundred times and things. As well as PBRNs and some of the work others
have been doing. This focus on what public goods can be developed from multiple perspectives to help support and advance implementation labs. What work and public goods can implementation
labs themselves produce to help both the field, you know, and their laboratory settings, and
then multiple levels in terms of scope—both local, but national sharing and networking—again,
wide-ranging, fun discussions. One critical thing that came up was, I emphasize
the first word here—meaningful stakeholder engagement—not just lip service, OK—but
what is meaningful engagement? And how do we assess this? In terms of some of the characteristics, there was discussion—not total agreement—about whether each implementation lab should have a theme that would kind of pull things together, and also an evaluation component. One of the key issues—which, like most important complex issues in life, has many answers; maybe it depends—was this, if you will, kind of recurrent issue about whether an implementation lab needs to cover the full spectrum of the cancer control continuum, and especially, does it need to involve both clinical and community partners—again, the usual silos that research exists in. And there are a lot of pros and cons when
you start drilling down into doing one versus the other. I mean, in an ideal world, we would have a
lab do that, but then there are a lot of practicalities and real-world issues of doing that, and of who kind of needs to be involved. Our group, like one of the other ones, had
focused on: what’s the distinction between an implementation science lab—that’s going to advance the field—and an implementation lab—that kind of does it and takes up the science? And then a good discussion about governance
and the importance of that, and that includes, you know, leadership, decision-making, budgetary things—what determines, you know, what is possible and how it’s likely to turn out. So, the charge for those of you unfortunate
enough to be stuck with David and I this afternoon, to try and drill down or expand these issues,
here’s kind of the charge we’re going to give you. And you might be thinking about which area
you want to focus on here, but we thought the ideas that might be worthwhile—at least some of them deserving study—were, first, an initial environmental scan: again, learning from other related activities that have been going on in other collaborations, consortia, etc., and particularly with a focus on case studies and lessons learned. Second, trying to walk our talk and designing
for sustainment from the outset, OK—and thinking, particularly about some of these
challenges that are very likely, if not guaranteed, to occur with longitudinal relationships
like when there are gaps in funding or a down period, how to sustain engagement there. Measures and methods—common methods for
different issues—how we can share more so that science can advance, you know, instead
of each having our own approach, and, you know, showing my—my project is better than
yours. And this would include measures of implementation
science, measures to advance science, but also some common measures to look at the progress
of the labs and assess their development, and a key focus on pragmatic—I should have put that word here—measures: what’s already being collected that can be used. So, again, it’s not another unfunded, unreasonable
mandate. And then, finally, this notion about being
sure to take advantage of the networking here and the coordination across labs and including,
maybe, the idea of some type of a clearinghouse to coordinate, in an intelligent way, resources
available. And I think that’s it. So, David, anything to add to that? How about the rest of the group before I open
it up—anything we missed? HEATHER: Thanks, Russ. This sounds really great. I guess, as a community-based researcher,
I really struggle with our use of this term “laboratory”—especially since I’m working
with highly under-represented, under-served individuals, and in fact, for the P50 grant,
we experienced pushback with the use of the term, and had to be very clear that we’re
using this term because it’s required, but we don’t think of you as guinea pigs, or animals,
or in this laboratory, and while this may seem to be a very simplistic point, I think
it matters. And I hope that as the group is moving forward,
that we’ll be able to think of more inclusive terminology, to get to the places and spaces
that are so important for us to cultivate to do this work. RUSS: Well said—that did come up. And I should have put it here, like, most
things in life, it’s considering context and branding, or whatever. There is another important contingent though,
that you might need to remember, and I’m probably going to say something I shouldn’t here. But in terms of getting you all funded, you might think about the context of people that have a whole lot of laboratories—at NIH—and maybe understand what a laboratory means to the people who decide what gets funded. But I couldn’t agree more with the point—I don’t think anybody would say you need to use that term with your community or clinical partners. Good point. SARAH: Thank you. And Mindy, you had a point? Sorry—it’s important for our online participants, so I can hand you this mic as well. MINDY: I was just curious if you had examples
of the different themes that you talked about across the different laboratories. RUSS: We didn’t drill down that much. I think one of the two—my cues went away
here, and maybe David can help rescue me here. But I think one of them was this notion about
meaningful engagement of your partners there. And how to get that—the ongoing longitudinal nature, I think, was important, and the sustainability, how you were going to do that—those were cross-cutting. MINDY: I think we talked about examples. People might have methodologic themes, or
they might have topic themes, but that another approach might be that they had to work with
stakeholders to identify issues important to the community. And then that might be their theme. So those were some of the things that came
up at one of the discussions. RUSS: And there was also a discussion about,
you know, if one is thinking about coordination across the care continuum, that might require
different partners, different people and organizations involved in the lab, versus if you were focusing
more narrowly, say, on a particular stage in the cancer continuum. And maybe the last thing was the notion about
what’s in it for me? From all the different perspectives—the
researcher perspective, you know, and the community or clinical partner perspective. SARAH: Wonderful. That’s what we have time for for this session. So, I’m going to invite policy, the policy
speaker to the front of the room. And thank you all for the comments. It’s really great to be able to capture this. BOB: Good morning, everybody. Glad to be back with you. I’m Bob Vollinger, in the Tobacco Control Research Branch here at NCI. So we are going—let’s see, how am I advancing this? That’s backwards. So, I’ll just start by reminding you of the
framing that Cindy offered us yesterday—the context, the innovation interventions, and the strategy—as a way to frame kind of where we’re kicking things off. I’ll say that we had a very lively discussion
in our two groups yesterday, with lots of great ideas thrown around, and our task was
to try to whittle those down and try to organize them and make sense so that we could be coherent
today, and the first focus was trying to frame things around the consortium goals that we
thought were most important. So, we know that this is a scientific crowd,
right? And so, of course, part of the task was to
focus on rigorous methods to develop policies. So that was one of the first things. But second was engaging in the science and advocacy that ensure these policies are implemented in the broadest scope we can achieve. And we know that that really is a way that
we’re trying to move things ahead, but to do that, we had to have clear definitions
of policy. And all of us think about policy in lots of
different ways, whether that’s legislative policy or worksite policies or other private
settings, and so we want to focus on how we can do that and, and ways that encompass both
the policy making and the implementation process. It’s important to focus on the domains that
we would be working in, and I think all of us have heard the phrase that all politics
is local. And I think it’s also true for policy. And so many of us have worked in trying to
advance policy change. And we know that the closer to the population
that we start, and we get these efforts going, the more likely we are to have success. And some things can only be done at higher
levels, but we need to think about that when we’re framing kind of where we want to
move. And again, everything doesn’t rely just on
legislative action, so think about different private contexts in which we can advance policy. So, some of the major themes that we were
thinking about in the group—themes that the group generated here—of course, we know how important stakeholders are. And our task is to bridge the research and
practice communities. We all have lots of different examples of
how we’ve done that—that’s really kind of essential. And one of the things that I have launched
here is a community engagement work group, and within the tobacco context, building on
our state and community initiative, and some of you in the room have been involved with
that. And the explicit purpose of it really is to
help make sure that the research that we are funding here at NCI is being disseminated
to the field. So, we brought in a lot of different practitioner
organizations as well as some of our other federal partners to help make sure that we’re
doing that as effectively as we can. So, again, just the importance of metrics,
and measures, and the right frameworks. We want to kind of expand on that, and then
we all know that there’s lots of data resources available, and you draw on those on a regular
basis. The challenge is not what data is out there,
but how to make sense of that and make sure that we’re capturing the most relevant data
for the task at hand, and trying to wade through reams, and reams, and reams of data to make
sure that we’ve got the stuff that’s going to be most useful for whatever particular
policy we’re working on. And then the whole concept of competency and
capacity building around the implementation of evidence-based policies, and I just want
to take the opportunity here to emphasize, or reemphasize, that this is a
two-way street, right? We heard that from Bob yesterday; that came
up in our discussions as well. And our task really is to listen and to learn
from the other folks who are part of the projects as well, and to make sure that we are open to learning from the practitioners and policymakers too. No condescension is a good catchphrase. So, of course, we had a lot of other ideas
that we needed to focus on in our next couple days. And so, we wanted to highlight some of the
things that we wanted to expand today. The mechanisms for increasing engagement of
broader stakeholders and numerous other disciplines: the legal community—the Tobacco Control Legal Resource Center is a group that we work with a lot—but also folks in public policy or
public administration. And the whole idea of interdisciplinary teams
that was mentioned yesterday really is important. One of the specific points here—Simon brought
out—was the idea of developing wikis that capture some of the gray literature. You know, as academics, most of us are worried
about trying to make sure that we’re getting all of our work published in the best journals,
and that’s of course important. But we also know that legislative staff and
policymakers are frankly more likely to go to Google to find their answers, than they
are to go to PubMed. Right? And so, it’s important to get our findings
out there as broadly as possible, and to think about other venues to disseminate things. So again, to think about how to just develop
our strategies for engaging decision-makers and unlikely partners. And again, our task here is to bring people
to the table who we might not automatically assume are involved, or who haven’t historically
been involved, but depending on what the particular policy task is at hand, reaching out to particular
populations that can help advance that, or they may have a stake in it. And, you know, again in the tobacco realm,
sometimes that has meant working with tobacco farmers, right? And I’ve worked extensively with folks in
North Carolina and Virginia, which are tobacco-growing states, and they have reached out to those folks in some of their policy work, and not written them off as tools of the tobacco manufacturers. Also, in working on clean air policies: sometimes, in some cases, the
hospitality industry has been the enemy, because they’ve been funded by the tobacco companies. But, in other cases, if you can identify particular
workers who are at risk of these policies that are exposing them to secondhand smoke,
they can become advocates or supporters. And a good example is casino workers, who’ve
been brought into the fold in some of those changes. So again, developing partnerships with science
and advocacy organizations to educate policymakers on the evidence is crucial. A recent example we had of that—one of my
colleagues here was involved in—is that ACR just hosted a briefing on the Hill a couple
weeks ago. And as you would imagine, or expect, there’s
a broad interest in electronic cigarettes, or electronic nicotine delivery systems. And so, there was an interest in kind of doing
a briefing for the Hill staff on that. So, we came together under the auspices of
ACR with CDC and FDA to do a briefing like that, and there was a lot of interest in it. And it also gave us an opportunity to talk
about the whole issue of Tobacco 21. If you’ve been reading the newspapers, that’s one of the hot topics now: states are advancing laws that raise the minimum age of purchase
to 21. So, some other ideas, just thinking about
the different policy levels to focus on. And then how we can conduct a broad landscaping
assessment—again, thinking about the different levels of policy change and what some of the
strategic and opportunistic chances are. So, again, Rani and I worked on this project
a long time ago, and part of what we thought about there, in the way we framed things, is that, of course, you want to be planning and working your policy agenda and being strategic
about that, and bringing partners to the table and thinking proactively. But at the same time, it’s really important
to be opportunistic. And by that we meant: things come up on the policy agenda, or just in the news, that you weren’t anticipating. And it’s important to be ready, able, and prepared to respond to those opportunities as well. And so, I think that’s really important just
to have your facts together and your coalitions together to be able to move those things. Another thing we wanted to focus on was the
whole concept of bright spots, right? It’s all—it’s easy to get caught up in the
failures that we have, and we’ve all experienced those, and we need to learn from them, but
we really want to emphasize things that have worked well, and to not come off as doomsayers. And then again, we talked about all the data
resources that are there, we want to capture those and take advantage of them as well. And then the competency and skill sets for
implementation teams—adopting the policy is only half the battle. And so, if it’s not being implemented well,
we really have missed an opportunity and, you know, I know that many of you are working
on implementation of your own work in your domains. One of the things that I have been involved
with here: you know, a couple years ago the Department of Housing and Urban Development made all public housing smoke-free, and so some of us federal partners worked
with them towards the adoption of that policy. But now, we’ve spent the last couple years
helping to ensure that that’s being implemented well. Because, you know, public housing is only a small component of HUD-supported affordable housing. And if there’s any hope of expanding that
to other types of multi-unit housing and other types of housing supported by HUD, that this
implementation has to go well, so we worked with them to ensure that that happens. SARAH: Wonderful. Thank you so much. And now we’ll invite the context speaker up
for the next slide deck and again, as a reminder, please submit your comments as well on Mentimeter,
so that we’re able to capture that discussion alongside the notes we’re taking in the various
recording channels, etc. Thank you. PRAJAKTA: Perfect, thank you so much, and
a really great thanks to all the attendees of our two sessions. I’m very excited to present our group discussions. Just to begin with, we were really focused
on this question of how do we, as an implementation science community, advance and make more explicit,
the focus on health equity and context. So, it was a really nice grounding for the discussions
and we came back to this point often. We will start by presenting some low-hanging
fruits, and then we’ll go to the bigger issues that were discussed. So, one of the important discussions—that
started off with Rani’s point of always conducting a contextual inquiry before beginning
a D&I project—was really highlighted through several discussions. And what we came up with was:
are there opportunities or avenues to crowdsource best practice ideas on how to conduct a contextual
inquiry? Are there methods, measures, landscape assessments,
that we could do using the bright spots, the positive deviants inquiry, to come up with
this list of best methods and approaches to do this? A second point that we focused
on developing a standard checklist or a set of questions that we could use to judge whether
a D&I study achieved health equity within its setting. And this was more in the frame of publications or grants to which we would apply this checklist. And some of the ideas that were thrown around were
to go and look at the CONSORT checklist, or the guidelines that the journal Progress in Community Health Partnerships has, to see what we can borrow from these ideas to develop
such a guideline or such a checklist. We did not want to reinvent the wheel by creating
another model or a framework in this space, but rather how could we distill down and explore,
expand existing frameworks or constructs that are relevant to health equity in D&I. So, might we then frame it according to the
social ecological model—that multilevel-influences “onion” model that lots of us know—while also noting: what are some of the missing levels, what are some of the missing constructs within those models, such as community power and mistrust, within the health equity frame? So, might this look like a scoping review of relevant disciplines, learning from anthropology? We had a lot of discussions on that—how
anthropology has already done so much of this, and how can we learn from them and incorporate
it into D&I? One of the very exciting ideas that a lot
of us got excited about was might we then approach some of the existing creators of
the D&I models, and ask them or convene them in a space to revise these or to reframe these
models to incorporate a health equity focus in them. So, can we use this opportunity to bring experts
from the health disparities research? Bring experts from the D&I research field? Other disciplines? And maybe this might look like a special issue
in a journal that reframes these models. Russ Glasgow, you guys have been doing some of that, reframing RE-AIM for health inequities. So, something like that for all, or some of
the commonly used D&I frameworks. Corresponding to the models, we were also
cognizant that, as you revise conceptual frameworks, you want to think about corresponding measures. Because what gets measured gets done. And there was a lot of discussion on the need
for a quick-and-dirty assessment—so, the rapid cycle approaches we should talk about. And we should think about how we could use pragmatic measures for health equity and context, while incorporating the community-level outcomes that came up again and again during our discussions but didn’t have quite solid measures in place. We also discussed the need for identifying
a set of implementation strategies that were appropriate with a health equity focus. And there was some discussion on: should we wait for empirical evidence to come in and see what works, in what context, when? Or should we kind of have some sort of a guidance
statement that says, you know, here’s some of them that have worked in other contexts,
you know, should we engage in some guidance development there? And so, all of this discussion really happened
with three big, larger themes that were sort of in the background of all of our discussions. We did acknowledge that documenting variations
and identifying key dimensions of context was a very important point. We also thought very big and ambitious about
explicitly including health equity in peer review, just as we made progress with sex as a biological variable, not just in terms of inclusion. Thanks, Dr. Bartel. And kind of making that a lofty but achievable
goal. Hopefully. And then we did also discuss, you know, what
does health equity mean? And do we need to define it as it applies
to implementation science, considering the factors of intersectionality and resilience, and considering the multiple dimensions of health inequity? These we called our big,
hairy, audacious goals. So, we acknowledge that these may not be feasible,
but these are—thanks, Heather, for giving us this terminology. We wanted to focus on these three additional
take-homes that we want to recognize, that we acknowledged in all of our discussions. We acknowledge, just as Bob brought up,
the unintended consequences of implementing evidence-based interventions, what should
we learn from these? We’ve done tobacco policy work—how can we avoid redoing things that have had these unintended consequences for health equity? We know what health inequities look like,
but don’t necessarily know what health equity looks like. So instead of using this deficiency-driven
approach in going into populations and communities, can we modify that and look at
a high resiliency approach? Right? So, these are communities that are working
without resources. So, how can we incorporate a resiliency approach
instead of a deficiency approach? And the fact that adaptations are always needed. They are, in every context, we said that we
have to adapt every evidence-based intervention to fit the context. So, how can we make these guidelines that
are specific to adaptations more equity focused? And while realizing the tension between localized
knowledge and generalizability. SARAH: That was wonderful, and I’m nervous to do this, because I think many hands will shoot up, but we have time for one or two quick comments as the technology speaker comes up. Alright, technology, you’re up. It was too good, Prajakta—that’s the problem; you just nailed it. RITA: So, first, let me say good morning and
thank you for participating in this very exciting workshop. I’m really happy to be here. And I also want to say that, from the time
I started in informatics, I’ve always viewed it as a social and behavioral science, and
I think we definitely are. So, let me just start with the thematic topic. So, I guess we’re starting with the premise
that technology has extended potential reach and effectiveness of evidence-based programs
and practices in our groups. Almost everyone has worked with something—with
some sort of technology—EHRs or m-health interventions. We focused our group on electronic health
records, just for simplicity, but we really do take a very broad view of electronic health
records as a platform. So, we recognize that patient portals are
part of this health care system—this electronic health records system—and the idea that data flows: you know, at the back-end of an EHR, you could imagine how the data is flowing from
devices or from other clinical information systems. And for that reason, we spent a lot of time
really looking at EHRs as a platform and at the back-end, the data, and the interoperability
of data between different information sources. So, the objective of our workshop was to identify
the ways that the consortium can work together in a common way on implementation science
questions—using the EHR as a tool to enable learning across multiple healthcare delivery systems, community systems, and contexts. So, we had excellent discussions, and I was
really pleased to learn about all the ways in which people are involved in technology. So, just for simplicity. We took all the comments over the day and
really categorized them into these buckets in terms of how people are using electronic
health records. So, one way is, I always think of an EHR as
a platform on which you could basically hang an intervention, or deliver an intervention. An example of those would be, like, clinical
decision support. And one question—some of the discussions
around this major topic was, how do you incentivize? Most of our EHR systems today are proprietary. So how do you incentivize these proprietary
EHR vendors to make changes in the platform? I grew up in a world where we used homegrown
EHRs that were able to do a lot of innovation. It’s a bit more difficult now with these large
vendor systems. The other bucket was EHRs as a data repository
to study implementation. And questions around that were, how do you
ensure that important implementation science measures, such as contextual measures, process
measures to enable rapid cycle testing, are, in fact, collected in the EHR? One of the first things, when I started
in the world of informatics is, I understood—I clearly saw that behind EHRs is a clinical
data model. So, if you’re trying to do behavioral interventions
or other kinds of interventions, like implementation science studies that look at variables that
fall outside of the clinical model, we need to worry about changing that. And then the third bucket was EHRs as a data repository to develop and design interventions—again, the use of the data behind the EHR platform. So, a question around that is, how do you ensure
that the data needed for tailoring interventions or delivering precision prevention, which
many of us are involved in, are collected? Major themes that arise from those discussions
were, you know, again, a talk about stakeholders. Who are the stakeholders? So the idea is, we need change, because electronic health systems were really
not developed to do the kind of work that we do, but more, you know, direct patient
care and billing as primary purposes. So, to really think about using technology
and EHRs particularly, change needs to happen. So, in order to make that change, one question
is, who are the stakeholders needed to bring about the changes that implementation science
researchers need to work with EHR platforms more effectively? And the group came up with a number of obvious
stakeholders like patients, families, providers, payers, vendors, public health, C-suite executives,
and communities, which I think is a really important missing part of EHR data today,
because health, we know, happens in communities. So, if the EHR is supposed to support health,
communities need to be included. The other point that came up is how can the
consortium serve as a market force? Because change is hard. So—in all spaces—but in this space, because
again, we’re mostly working with proprietary systems, how can the consortium serve as a
market force to influence data needs and modifications to EHR platforms? So, what we talked about there is, you know,
we need to be very clear about, what is the ask? And a lot of this sort of ties into the other
work groups, in terms of coming up with and defining a parsimonious set of common data
elements that need to be collected, and we probably are not there just yet. The other thing is, what’s the value proposition? If we want to make change—if anyone has had to try to do this in their own institution, you go in front of quite a number of committees, and you have to make a very strong case for why the change has to happen. So, understanding what the value proposition
is, the consortium needs to build a case for why implementation science is important, but
we first need to do the groundwork to define the measures and then discuss why these measures
are important. In the space of data, we also talked about
the need for, you know, like an ontology, because we don’t want to have—we want to
be able to have interoperability across systems. Much of the research we do is across a number
of institutions. And then the idea of building a better mousetrap. So, everyone complains about EHRs. Always. Is there a better mousetrap? What does that look like? To be at the table, to really be
part of those discussions, is critical for this field, in our view. So, main ideas to expand: What are the bright
spots? People are doing this work now, but usually,
we’re doing it, as anyone who’s working in the space knows, using a lot of workarounds. So, clinical information systems exist. They’re usually varied. In my own work, I’m using scheduling data. I’m using, you know, the EHR data. I’m using all kinds of data. How do you begin to pull these different data
sources together? These are what we call workarounds, and it would be good, for the moment—because change is slow—to really feature these bright spots and tell the stories of success, so that this work can happen while change occurs.
consortium can serve as a powerful market force to influence data needs and EHR adaptations to meet implementation science research needs. And, of course, the need for a data exchange. We talked about whom we should partner with. Who were the main organizations? We talked about the ONC, the Office of the National Coordinator for Health IT, which really was behind all of meaningful use and did a lot of great
things, including, you know, things like documentation of smoking, etc. It also, I should say, increased the number of institutions that are using electronic health records. So, I don’t have a shot of this, but it’s pretty much across the country at this point. I think it’s up to over 90 percent, if not
higher, where we are now, working with electronic health records, even in rural places throughout
this country. And ONC is largely responsible for that, with
their meaningful use legislation. Again, we need a value proposition. There’s a long line of people who want to
do something with EHRs. So, in order to come to the front
of the line, there has to be a value proposition. Again, the idea of the data: We need to really
define a common set of elements and common methodologies. And then—capacity for building the field. We talked about training, maybe workshops
or research to really understand the language, being able to cross-talk and understand the
EHR research, understand the organizations that are key to influence change, things like
HL7, which really leads the way for interoperability, and ONC for policy. Something similar to what already exists in the mHealth institute. So those were our wonderful work groups that
we had yesterday. Thank you, everyone, for participating. SARAH: And so we are going to briefly transition. So, once again, thank you to all of our speakers
for the small work groups, and thank you to all of you for your hard work yesterday. We are going to transition now to our panel
discussion. And so if you will stay tuned, we’re really
excited to have this discussion. So I’m going to pass it to David. DAVID: Great and panels, it turns out every
once in a while, have a sponsor, and this particular panel is sponsored by the Health
Systems Intervention Research Branch within our Healthcare Delivery Program. So now, with a quick word is the branch chief,
Sarah Kobrin. SARAH: So that was fun, because I had no idea
what I had done to earn the sponsorship. But now I know, which is that I’m here to
announce a job opportunity at NCI. Exciting news. I’m Sarah Kobrin and I’m in the Healthcare
Delivery Research Program, and in our branch, as of this morning I have learned, there is an announcement on our website for a new medical officer opportunity to come and work in our
branch. And the reason I specifically wanted to tell
you all about it—please share with your networks; ask me if you have questions—is that there
was so much discussion yesterday about the need for better communication flow between
people whose daily life is in care delivery and people whose daily life is in research. And so, we are exactly planning to bring in
a medical officer with recent clinical expertise in cancer care delivery somewhere, as well
as, ideally, leadership and operations experience in the management of health care delivery. So, we want people to come here and help us
align our research priorities with the reality of daily, clinical practice. If you know people who might be interested,
please contact me, contact them, go to our website, we’ll be posting everywhere that
we can. Thanks very much. DAVID: Thank you, Sarah, and thanks to our
wonderful sponsors. So, just to set the mood—I think this is going to work. Ask and you shall receive. We are there. There it is. You’ll see the guitar in the background. That is for me, I know, and anyone else who plays guitar. It’s fun. And so the sort of ideal culture
of these fireside chats is that we have a chance to, you know, pull up chairs and really
chat about things that are, in some cases, daunting—that we’re just trying to think
about. And so, as you’ve heard from now, a few different
times, this notion—even what it’s called is in question—of a natural laboratory to study, you know, implementation questions of all different stripes, is something that
we want to explore with this group. So, the way this fireside chat is going to
work, is we’ll have Noah Ivers, who’s a family physician and a, I think, relatively
recently named research chair by CIHR—yes, the Canadian Institutes of Health Research,
close enough—a research chair in implementation science,
comes to us from the University of Toronto, and has done some really great work recently,
trying to just help us all as a field understand what might be the promise of these different
implementation laboratories and, you know, and where can we go from here? We’re going to kick it off with his comments,
as you’ll see here, a brief slide presentation, and then we’ll turn our attention to the sort
of fireside portion of the chat with our three discussants, who we’ll introduce after Noah
is done. So, give Noah a warm welcome. NOAH: Yeah. Yeah. The one without the button, I got it. Hi, everyone, it’s wonderful to have been
invited. I personally never seem to get used to being
introduced at a forum like this and I started doing some, some Googling of folks who are
in the room and anyways, I’ll come back to that. But I have no conflicts. I have no specific objectives other than to
achieve whatever David set out for me. And so hopefully, I do that. I have a few caveats, though. First, I’m a family doc. So, you know, pros and cons. Second, I’m a Canadian. Again—go easy on me. And third, I think, you know, this relates
to what I was saying a moment ago, about Googling many of the folks in the room, I think, you
know, I feel like anything I’m presenting here, I get to present because I’ve been standing
on the shoulders of giants, who sort of started the field of implementation science, and currently
I’m suffering from an acute exacerbation of a chronic illness, known as imposter syndrome. So hopefully, you’ll bear with me. This is, from my perspective, the moment that
I started to think about implementation science laboratories. In 2012, we published a systematic review in
Cochrane looking at the effects of an intervention we call audit and feedback, which is when
we measure quality of care and give it back to healthcare organizations or professionals,
trying to spur them to improve. And then we did a follow-up study looking
at the progress of this literature over time, in terms of the effect sizes. And, basically, what you can see is that it
really, you know, the field does not appear to be progressing in terms of figuring out
how do we make this a more effective intervention over time? It continues to be highly variable in its
effects. It continues to work a little bit most of
the time, which to me doesn’t seem like science. It kind of just feels like a bunch of people
doing stuff, trying to make things better, which is super, but it’s—those are two different
things. And so, you know, we started to think about,
how can we—how can we make the progress of this field—about people using the intervention
of audit and feedback a little more systematic, a little more coordinated. So that hopefully down the road, we can see
these sort of charts moving in the desired direction. Skip ahead a number of years, and Jeremy Grimshaw
and I wrote a brief report for The Lancet about how, if we can do this better, we’ll
avoid research waste—because arguably a bunch of more audit and feedback versus usual
care trials that result in an effect size that’s more or less the exact same as all
the prior 140 trials, is not contributing to research knowledge, and we need to find
a better way forward so that we’re not wasting taxpayer dollars. And so, we proposed the idea of implementation
science laboratories at the time as a way of addressing that—that very problem. And I think in the red here is how we were
thinking about—how to create these things. That you start with an organization that’s
already delivering some sort of intervention at scale to optimize, you know, quality of
care—and that organization needs to be willing to partner with researchers to try to optimize
the effects of their intervention. Then you need to partner with researchers
who are keen to advance generalizable knowledge about how we optimize those interventions,
or how we solve that particular clinical problem, or what have you. But it starts with the fact that there are
implementers out there, doing this work, who are willing to partner. And we thought that would be a way to initiate
implementation research studies at scale to answer the kinds of questions that we think
really need answering. Now as I said, moments ago, we don’t need
another trial of audit and feedback against no audit and feedback. We kind of know what we’re going to get. What we need now is more of a comparative
effectiveness paradigm where we try to understand, as I said before, how to optimize the audit
and feedback—or whatever intervention you’re interested in, or whatever clinical problem
you’re interested in. To answer those kinds of questions, you need
research at scale, which means you need to partner with organizations, or else you’re
starting from ground zero every time. And to take it a step further, I think—and
I’ve already sort of alluded to this, I think there are two different kinds of implementation
science laboratories that we can think about. One is sort of intervention oriented. What I’ve mentioned, for instance, one focused
on audit and feedback, and another might be more sort of problem, or topic oriented. We could probably think of others as well,
but I think it’s worth differentiating these things, because one means going and finding
an organization doing audit and feedback—because that’s what you as implementation science
researchers are kind of interested in—and you want to partner with them to figure out
ways to optimize their intervention for their benefit. But also, you want to glean some generalizable
insights from it for other folks that are doing similar interventions. A totally different approach is to start from
the question of well, how do we improve cancer screening? And it may or may not involve audit and feedback. It probably will, because it seems to be ubiquitous,
but it probably involves all sorts of other initiatives, and it starts from understanding
the problem in depth. And you need to find an organization that
is absolutely committed to solving that problem—not to doing that intervention, per se—in order
to create that kind of lab. When we talk about audit and feedback—because
that’s where most of my work is, so you’ll have to excuse me—we imagine partnering
with organizations that are already doing audit and feedback, like I talked about, and
doing sequential trials of different forms of that feedback. So, you can see in the graph here, you might
compare version A versus version B, and that version B, maybe it includes some sort of
action planning module. I don’t know. You might find that version B is more effective,
because action planning is a behavioral sort of technique that generally works. And then you might go ahead and give everybody
in the jurisdiction B versus C. And in C, you might think, well, we’re going to call
people up and make an even more intensive action planning sort of approach, see if that
works better, even better than B. And you find that, you know, it didn’t make a big
difference, and it costs way more, so B remains the standard for the jurisdiction, or for
whomever that health service organization is servicing. But, again, you continue to test, and you
decide to test B versus D. And this time D is better, and so on and so forth. We imagined a scenario where you have sequential
trials occurring over years that both help the organization achieve their goals. But also, as I said before, produce generalizable
knowledge—in this case, you know, theoretically about how to deliver action plans in an efficient
way. And then we proposed this idea that many implementation
science laboratories, maybe all working on different kinds of audit and feedback initiatives,
could come together in what we call the meta lab to coordinate these functions. So that eight different groups around the
world weren’t testing the exact same thing. Maybe one did—because replication is a good
thing—but where we could prioritize questions that needed to be answered in a generalizable
way about how to optimize this intervention to maximize how effective it was to change
healthcare provider behavior. So that’s sort of what we’ve written about,
and this is an open access publication, The Lancet—not open access, sorry. But this one is, and it pretty much goes through
everything if you want to Google it. I just want to talk about some specific examples—what
I see as the opportunities and challenges of doing this kind of work. I’ve been involved in doing this kind of work
for, I don’t know, seven years or so now, and I think the potential opportunity relates
to the impact. First and foremost, you get to share this
tacit and sometimes explicit knowledge that the scientists bring—this stuff that’s written
down, and the implementers have so much incredible tacit knowledge about how to engage the end
user, what’s going to go over well, and what’s not, when you produce—put content in front
of their constituents. There is the potential for sustainability. This idea of sequential trials is actually
methodologically super interesting. And so, these things kind of tie together
and then, of course, the potential to do things at scale, which, if you’re any kind of health
services or implementation researcher, like, you know, that’s— that’s what we’re here
for, right? We’re here to make a difference for as many
patients or people as we possibly can. So, this kind of partnership seems worth the
effort in that regard. And I would say this kind of approach offers
opportunities for people who are interested in both causal attribution trial methods. Whether they be the rapid cycle type, or others,
and causal explanation—so folks interested in qualitative work or other types of embedded
process evaluations, understanding, unpacking mechanisms. When you’re doing this kind of work at scale,
you can embed these kinds of questions and hopefully answer them effectively. So, I see it as having big opportunity. And then we talked earlier, and over the last
couple days, about the idea of, you know, finding the business case. It obviously has to be mutually beneficial. And I think, you know, even if it is, there
are some challenges. And the one that comes into both categories
here, for both the health system partner and for the researchers, is that it requires compromise. And that’s I think the nature of every relationship. Right? You know, my wife will certainly tell you
that I need to compromise. Her, not so much, but … so, you know, I
think it’s really interesting to point out that a health system partner acting in this
way needs to be willing to say, they were wrong, or that their thing doesn’t work. And there are many political situations where
that is not okay. And so, they may not want to, you know, embark
on a project where that could be the result. And that’s a scientific problem for us, you
know, as, you know, … so, sorting through that is potentially challenging. They need to acknowledge that, like, you know,
the way that they have been creating their intervention for the last decade or more,
was kind of arbitrary. They kind of just did what they thought was
the right thing to do. And a lot of that was based on wonderful,
implicit, tacit knowledge, but a lot of it, necessarily, was based on kind of best guesses. And getting them to admit that is uncomfortable
for many folks. For others who have a growth mindset, it’s
easy actually. They’re like, “Great! Help me do it better”—but you have to
sort of ease into that, in my experience. And I think for, you know, most of the folks
in the room are researchers—for us as research partners, you know, it’s kind of hard:
some researchers aren’t up for this kind of work where they don’t
have control of the topics, the clinical topic, or the timing, or the outcomes, where the
science is really, first and foremost, a means to an end, as opposed to—for the publication,
or for, you know, lines on your CV, or for, you know, whatever it is that we do otherwise—that’s
uncomfortable. It affects, you know, if you’re junior,
it affects your likelihood of promotion, because you may not be as productive doing other things,
as doing this. Sorry. You know what I mean. So, that requires some compromise and some
things to sort out, you know, if a health service organization says, you know, we really
want to focus on antipsychotics and long-term care, which I’ll get to in a second, that’s
what you’re focusing on. So, find the clinical partners that know something
about that, and figure out if you can contribute to it sort of thing. That’s—that’s what this looks like. In my experience. Okay, so I’ll give you an example where that
exactly was the scenario. This I wouldn’t call an implementation science
laboratory. This was just, I guess, a typical, typical
embedded evaluation. We were brought in as the third-party evaluator,
where our provincial medical association, and our provincial sort of ministry of health
decided they needed to improve antipsychotic prescribing in nursing homes, long-term care
homes, and we convinced them to do a cluster randomized trial. We convinced them that, you know, that would
fit within their operations plans, that we wouldn’t delay them, and so on and so forth. Somehow, we won that argument. We embedded qualitative work, so on and so
forth, you know, a full program eval. We used admin data, pragmatic trial, that
sort of thing. But again, even in this one-off demonstration
project, we, you know, we had this uncomfortable situation of, like, we didn’t choose what
the core outcomes were, we didn’t choose what the intervention strategies were, we were there
to evaluate it. This is an interesting sort of entree for
me, to thinking about what it looks like to do partnered research. And, you know, I think one lesson learned
for me around this with the partners that we had at the time, is that the data analysis
came from admin data, which was delayed for reasons out of our control. I’m sure many of you’ve experienced that. And so, our embedded process evaluations was
actually what they made their decisions on. And so, the fact that we had those planned,
and we had planned interim evaluations with our partners was really, really helpful and
it’s something worth thinking about for future implementation science laboratory work. Okay. So, here’s what my main implementation science
laboratory looks like right now. You can see we have, or have done a bunch
of trials, almost all of them with our partner, who is Health Quality Ontario. They are the provincial advisor for quality
of care to the government in Ontario—Ontario’s jurisdiction of, I don’t know, 14 million
people. And they already have an initiative that relates
to sending audit and feedback reports to a variety of settings. Mostly we worked with them in nursing homes,
and in sort of primary care, because I’m a family doc. And so, we’ve done a series of trials where
each trial builds on the one prior and answers the questions that arise from that one, and
on to the next. We meet together and we plan what the next
one should be, and so on and so forth. Just give you a couple of examples of how
this has gone down. So, the thing with audit and feedback reports,
and that’s Kevin Costner—not sure if he comes through very well; I’d ask you to come
closer for the picture. The thing with audit and feedback reports
is, you know, just because you send them out, doesn’t mean anybody looks at them. So, if what you’re interested in science-wise
is, like, hey, maybe we can design the report this way, design the report that
way, and everybody’s kind of stoked about figuring that out, if nobody looks at it,
this question can’t actually be answered. So, that’s one of the things we learned early
on, is that the report seems to work, but only if people look at it—it doesn’t really
matter how you designed it comparatively. And so, of course, we had to measure, you
know—some sort of dollar figure to keep proving the return on investment. But there was discomfort with the idea that
if so many people weren’t looking at it, we couldn’t achieve our scientific goals. And also, our scientific goals were really
related to their program goals. They wanted to optimize the impact. So, you know, we got together, and we said,
all right, you know, we can’t necessarily get an answer to the question we wanted. We need to make sure we’re better engaging
with partners and users and so on and so forth and, you know, fidelity really matters. And so, we designed this whole other strategy
that was all about getting people to look at their B*** report. So, you know, first things, first sort of
thing. And so, this is the report that we try to
get people to look at. I wanted to feature this because it’s cancer. Not everything I do is cancer, obviously,
being a family doc, but here’s something that’s cancer. This is a report that goes to 7,000 primary-care
physicians in Ontario—gets updated every month. This is mine from a very long time ago. And the problem, again, is that people don’t
look at it. I look at mine twice a year, which is good,
compared to most. I swear. So, we came up with this idea that maybe,
you know, when Cancer Care Ontario, and the provincial agencies responsible, send a notice
saying, hey, your data is available, and they send it in a way that looks like it’s written
by a lawyer. Seriously. Maybe people aren’t really attending to that. And so, maybe that’s part of the problem,
as to why people aren’t engaging with this data. And what if we could embed a bunch of behavior
change techniques into that email to prompt people to access their data? Let’s see if that could help us towards solving
problem one. And everybody got really excited about that,
and so we got a bit of funding and we did something fun, which was a two-by-two-by-two
randomized trial to test different ways of designing this—this prompt to get people
to look at their data. Because once people start looking at the data,
then we can start asking the interesting questions, in my mind, which is: how do we organize this
data? And how do we support after they’ve seen the
data? And so on. Anyways, it sort of worked. We got people to look at their data, and
one of the interventions probably led to more cancer screening, which is fun. This is hot off the press. So, we probably got 7,500 more patients screened
in a four-month email study, which is cool. And then importantly, the qualitative process
evaluation has actually led to a change in how our partners are planning to do this kind
of tool, in that we learned that one of the reasons people don’t access it is they can
get a bunch of the data, not the exact same data, but a bunch of it from their EMR, and
they hate going to two places, like, somehow this is a surprise to them. So anyways, they’re now working on embedding
the results of this into the EMR, so it’s led to policy change, which is going to change
our capacity to work in the lab with them. So, some lessons and challenges here. So, we did this two by two by two factorial
trial. With that kind of trial design, we had no
control arm. So, when we went back to the agency, we said,
“Hey, look! You know, one of these seems better than the
others,” and they said to us, “Well, was it better than what we were doing before?” And I said, “Oh. That’s the question you wanted us to answer?” So, make sure you know what question they
wanted you to answer, and that you double-check and triple-check and quadruple-check it. So, we have to go back and find another way
to answer that. They really wanted a control group. And that would have been easy at the time. We had heaps of power. And then, you know, I think just making sure
that in general, you have the right question, remember we needed to figure out engagement
before we could figure out the interesting scientific ones. And, in general, the
amount of time I spend building this relationship is very substantial. And I think it’s worth it, because the potential
for impact is there, and it lets us do things at scale, and so on and so forth. But somebody has to be willing to make that
commitment from the team. Ideally, you know, many people on the research
team need to make that commitment, and that’s what my team looks like is that, you know,
I’m meeting with the lead at the agency every two weeks or more, when, you know, the fan
is being hit with stuff. And, you know, members of my team are meeting
with members of her team weekly to make sure we’re on the same page about each step. So, it really is a partnership where we understand
who can do what. We don’t deliver the interventions. That’s theirs. We’re there to help them evaluate it, to think
about what to do next, collaboratively working within their constraints and our funding constraints
as well. And so, just my last slide. I’m sure Sarah is going to kick me off. I would add, you know, that that investment
in a partnership—just to say it one more time—really can’t
be emphasized enough. It needs to be mutually beneficial, and it’s
only going to be mutually beneficial if it leads to interventions, in my mind, that are
sustainable and scalable. And that means, you know, when problems are
arising, it’s all hands on deck to fix it. Whether those problems are on the partner—either
partner side. So, sometimes I’m talking to policymakers
that have an influence on the agency’s potential to do this work. Because if, you know, this thing falls apart,
you know, I won’t have a lab to work in anymore. So yeah, that’s basically it. SARAH: Thank you so much that for great presentation. If you have a round of applause, I’m going
to invite our speakers up and for those to take a seat at the panel table and then very
much, like CSPAN or ESPN. We are going to change the camera to. So, give us a second here for those of you
participating online so that we can have you all seeing what we are seeing. So thank you so much. And David,
DAVID: Thank you. I was hoping it would be one of those, like,
you know, dynamic ones where all of a sudden—camera one, camera two. Apparently not. I’m good, I think I’ll stand. Give me a microphone and I lose all other
sense. Now, this is perfect, I can hang out. Okay, so you heard from Noah, sort
of kicking off the discussion. We wanted to bring additional perspectives,
additional expertise to the table, and then, of course, transition into that fire with
the guitar. No guitar at the moment. Stay tuned, stay tuned. So—so along with Noah, we have Melissa Simon,
who’s vice chair for clinical research in the Department of Obstetrics and Gynecology,
also, director of The Center for Health Equity Transformation at Northwestern’s Feinberg
School of Medicine, she’s been a past chair of our DIRH study section, the central place
at which our applications are reviewed, with a focus on implementation science and health
equity. Next to her, Amy Kilbourne, who has a PhD in
health services, director of VA’s QUERI, which, as I said before, got multiple mentions here
within the VA. She’s also a professor of psychiatry at the
University of Michigan Medical School, focusing on implementation of collaborative care models,
adaptive trial designs, designing implementation strategies. Next to her, Simon Craddock Lee, a medical
anthropologist, former NCI cancer prevention fellow. So, we’re always glad to welcome back our
alums. He is also program co-lead for population
sciences and cancer control research at the Simmons Comprehensive Cancer Center at UT
Southwestern. His work focuses on implementation, with projects
on rural breast cancer screening and patient navigation in underserved counties in Texas. He also does a lot of work directing community
outreach and engagement activities for the center. So again, lots of expertise, lots of great
perspectives here as well as in the room—want to try and just start off by getting our discussants’
perspectives, having heard Noah talk about a particular version, of an implementation
laboratory. But, basically, give each of you a chance,
just to kick off with your overall comments about this concept of an implementation laboratory,
and maybe even also for folks, a brief, you know, the brief introduction, as to how you
joined the sort of implementation science group as well, is that fair, Melissa? MELISSA: Good morning. Thanks for having me here, actually. So, why? I think implementation science actually is
the science that has the most opportunity and potential to do the most good on the ground. And that is my final statement. I think that my whole life commitment and
career is to health equity, and I am totally happy that I heard the comment from Dr. Bartels,
my successor as chair of the DIRH study section, about really bringing health equity, integrated
better into implementation science. So, the opportunity for these laboratories,
and I know the word has to be laboratories because it’s NIH, but in the real world, the
opportunity is to really do a changeup and think about design, architecture, and engineering
of how studies are done, how they are then implemented, and then how that policy practice
and research gap is closed. And, that’s really important because I also
serve on the United States Preventive Services Task Force, and one great example as a clinician
OB/GYN, is trying to implement these recommendations so that every single person actually gets
that recommendation and that care that is aligned with them and their needs within their
context, and making sure that anything that has been going on, in terms of the usual care
processes, like de-implementation, right? So, this is another part of what the field
struggles with and is important in capacity building for these implementation laboratories. Not only do you have to de-implement certain
things, but you have to actually go to the concept of unlearning. So, how do you unlearn a way of doing things
that you’ve always done, right? Whether it’s for research, or in this case
for clinical care, trying to teach people how to unlearn the pap test guidelines, for
example, it’s like, beating my head against the wall. So, that’s a really important concept. And then the agility with which you can train
clinicians, and researchers, and policymakers to be able to adapt when we learn a new thing
from implementation science. AMY: It’s really great. I totally agree with that. And my sense is, implementation science is
a wonderful way of studying and doing research. But it’s also a wonderful way of giving back
to the communities that you’re working with. Because essentially, if you think about it,
most of the types of trials, or research, or things that you’ll be doing is essentially
comparing different ways of implementing evidence-based practices or, in some cases, de-implementing
low-value practices in communities. And I think in many respects, some of the
best ideas do come from the community practices themselves. I came into implementation science as a convert. I never studied it from the beginning. I was a clinical trialist, and then for the
first time—and this is why I think that, if you want to get more implementation scientists
out there, NIH should write more RFAs and more program announcements for implementation
research. Because basically, there was one out, and
we applied for it, and it was very much trial by fire—a baptism by fire; think about the fireside chat—because the
very first time I did an implementation trial, not really knowing what I was doing, basically
I started knocking on … It’s all gumshoe epidemiology. I’m an epidemiologist by training—health
policy, health services researcher, clinical trialist back home, back to my roots as an
implementation person with a public health background. But the very first time was knocking, literally
knocking on doors of different community-based clinics in the Pittsburgh region. And essentially the first time I did that,
the absolute worst experience. Because I came in thinking, I’m the researcher. We’re going to be, you know, let’s do a study
implementing intervention X. And they said, you know, there’s the door. Goodbye. Essentially the next time I did that was I
went to another place and started the conversation a little bit differently, saying: Tell us what
your problems are. What are your pain points? What’s keeping you awake at night at your clinic? Have you thought about some
of these solutions? Let’s talk about some potential evidence-based
practices that could work. How would you adapt them? How would you essentially do this? So essentially, allowing the frontline practitioners
to own the processes is the kernel that you want to start with, with implementation science. And imagine doing that in about 50 or 60 other
clinics and voila! You have an implementation study, right? So, it’s a little challenging, because part
of reason, especially for early career investigators is, that you don’t really have that capacity
to get 40 to 50 sites signed up. Unless you’re in Canada, where maybe it’s, yeah,
more top-down, or like the VA—they have this top-down opportunity of garnering clinics—
but even then it’s difficult, because essentially you can have a policy saying, thou shalt—
the VA is wonderful about policies saying, thou shalt, you know, do this multi-drug-resistant
implementation strategy. And they think that if they put out policy,
everyone does it. Right? Well, it doesn’t work that way. Of course, we know that. So, you actually have to have that bottom-up
strategy where you really want to make sure that your practitioners own the problem as
much as you. So, I think my sense is, it’s a wonderful
field. This is a wonderful opportunity. I would say a final thing about the term “laboratory.” I know it can be kind of jarring a little
bit, but one way to reframe it is through what we’ve been doing in the VA. One of our key components—our QUERI program—is
the implementation arm of our research program. Right? So, we also have an operations side. We have clinical operational partners. We have a robust system, essentially, of
programs that go out and write policy, but one of those programs started something called
the diffusion of excellence. They basically, literally empowered frontline
providers to come up with innovations and used a Shark Tank format like, on the TV show. Not as—not as brutal as the CNBC show, but
basically a Shark Tank format that allowed these frontline providers to come up with an innovation,
a practice improvement, and essentially pitch it to other hospital directors and they, in
turn would invest in the replication of that practice. Basically, what that was, was a laboratory. They just didn’t call it that. But the concept of laboratory was embedded
into that frontline practitioner’s mind as someone as thinking of themselves as the
innovator. If you allow your community-based participants,
and your community-based stakeholders to think of themselves as the innovator, the term laboratory
will follow suit as a more, like, acceptable term. That’s my opinion. But I think that’s one of the things to think
about—is allow for frontline provider ownership as well as policy change. And, you know, those partnerships. And I think there are three active ingredients,
my sense is, for an implementation laboratory. The first is that quality moves at the speed of
trust. So, trust is key. I think you also have to have value. So, implementation strategies, that’s really,
I think where the field is going, we need to cost those out and show that you need to
pay for them at the stakeholder, leadership level, and then also you need to be able,
as a researcher, to find the handoff to the operational leader, so that they can sustain
the work that you’ve built up, including the implementation strategies, and showing them
those—the value of the implementation strategy. So, I’ll stop talking and pass it on to my
colleague here. SIMON: So, when I was here at NCI, I was the
social context guy, and I now have a different way of thinking about it, thanks to Noah: I’m on the causal
explanation end of the spectrum, because I’m interested in trying to apply mixed methods
to quasi-experimental studies, mainly to test models of care delivery for safety-net settings
in cancer. But I’m still an anthropologist at heart,
and we think about long-term field relationships. So, my interest, and I think what really
resonates with me in thinking, for now, about implementation laboratories, is the idea
of creating an ongoing relationship with a setting where you are working in partnership
about their local priorities. Yesterday somebody Mentimetered—is that
a verb?—about community-based participatory research, and I wanted to sort of unpack that,
because I think within that field, we’ve talked a lot about criticizing helicopter researchers,
and we tend to talk about that with respect to either communities or patient populations
where investigators are going in with a project and then they leave once they have their data. But hospitals and clinics are also community
partners. We had to explain that sometimes to the NIH,
because many of us come from academic medical centers, but that’s not the same as working
in community. But clinicians also need to know that when
they’re spending time with you—the one thing we all know physicians do not have, and their clinical teams do not have, is time. And if you are asking to meet with them, they
want to know that this meeting is going to have a return on investment. Not just now, and this week, but multiple
years from now. Because otherwise, why are they going to return
your call? They have to return patients’ calls, not
your calls as researchers. So, the idea of an implementation laboratory
is intriguing to me. Because I think what it’s trying to validate
is the idea that you have a community of practice for the kinds of implementation research that
you’re trying to work on. Clinical teams understand that. I think the challenge in the conversations
we’ve been having today, and I’ll push on this a little because all of my work is clinically
based in health safety-net settings—but a lot of the world doesn’t get their care there. Cancer
control and prevention does not just happen in clinics. It can’t, if close to a third of our population is
underinsured. There’s cancer prevention and control happening
in other places. So, I think we want to come back. I think it was Dr. Brandt earlier who talked
about the laboratory label. We had the same challenge. I had a community partner say exactly what
I think Heather’s partners said, which is, “Be cautious about the reaction that ‘laboratory’ gets among people.” Fortunately, the NIH—I think this came up
yesterday—has funded the NIH Collaboratories before. I’ve used “collaboratory” for that reason. Because I think it really does send a signal,
but I think thinking about who the multiple stakeholder players are is really important,
because community clinics—while they’re driven by their bottom line and the revenue, in
order to be able to optimize the care that they want to provide—also have multiple
stakeholders within their local and regional settings that we don’t think about, right? So, it’s the multiple levels of stakeholders,
not just the local clinic, that you’re trying to partner with. And the bridge that I think we’ve been struggling
with over the last day is trying to understand where community fits within the clinical settings,
where I think most of the implementation research that we’ve talked about today is
happening, while recognizing that there’s implementation research happening outside
of clinics. And how do you bridge that within the labs? That is really what I think our growing edge is
going to be. DAVID: Yeah. And I think, you know, again where we’ve thought
about, and I’m going to try and tee up, if I can tee up Mentimeter again. We are teeing up a question for all of you,
wherever you are, and for our panel at this end. There it is, right? So, let’s take exactly what Simon had said,
the recognition that cancer control—as we think about the cancer continuum—
cuts across a whole range of different settings. The concept of this laboratory or collaboratory
or community of practice from which we can learn, is intended to map onto all of the
settings, where ideally we have an opportunity to improve people’s outcomes along the cancer
continuum. And so, what we wanted to ask of everybody,
and start with our panel here, is: if we take on this concept—
setting aside, maybe, the label—and at least for now talk about it as an implementation
science laboratory, or an implementation laboratory. What do you see, and what do our panelists
see as a, you know, a key ingredient of that? Where we’re saying, okay, these are the hallmarks,
wherever the setting is, the population, and the folks that we’re trying to engage, what
do you see as key ingredients in order for there to be this ongoing opportunity to learn,
to innovate, to evaluate, to reflect, to adapt? So, maybe we’ll start on the end with Noah,
and we’ll come along, and awesome that folks are already tuning in and typing it. NOAH: Yeah, I like seeing the humility thing. So, that’s, I think I was referring earlier
to this idea of growth mindset. In other words, you know, being willing to
say that how you were doing it before was wrong. And that applies to both the organizational
partner and the research partner. I think the other thing, the main thing that I
would like to add, is that we need to find organizations who already have sustainable
funding and a mandate to improve X issue in the cancer continuum, or who are already doing
implementation strategies. Why? Because they feel it, or are committed, or are
funded to do so—to improve something about the cancer continuum. Finding those organizations and building relationships
with them—relationships that are trusted—is, I think, the key ingredient. SIMON: So, I want to sort of spin this a slightly
different way. Something that Noah put up in his talk earlier
is, I think, one of the key ingredients that we struggle with as investigators:
this issue of control. And we’re used to being able to determine what
is the significance, what is our impact, we’re defining the landscape and the terms of our
research studies. That’s the way we were trained, but I think
we have a fundamental problem, because implementation science—I’m going to go out on a limb and
I’m interested to see if people disagree with me—implementation science is fundamentally
disease agnostic. We’re not. Investigators are funded by disease-oriented,
or system-oriented, NIH funding sources, or something akin to the NIH. So, there’s a real challenge there. You know, I’m responsible, for example, to
cancer center support grant guidelines. My research is supposed to be cancer related. When you go out into a community and you talk
about their priorities, their concerns are about the revenue-driving motivators for improving
quality. And when they think about what their quality
metrics are, in many cases, it’s preventable ER admissions. Screening does not create a preventable ER
admission. It could, but that’s not what their orientation
is—they’re thinking about hospital discharge. So, I think it’s really challenging to think
about: how do we collaborate in a really new orientation to team science, as a key ingredient
for thinking about how do you get beyond your own disease focus and find the collaboration? It’s not just within your own personal team
of investigators, but across other disease site investigators, because the implementation
laboratory, whatever your constellation of partners that you’re creating, have their
own priorities. And in many ways, we need to sort of take
a more consultative, patronage approach to our community partners, asking them really
what their priorities are. And what resources do we have that we can
help them meet what those priorities are? When I went out to the field to talk to a
large hospital system, I had a really great encounter with a senior leader. She thought, you know, this RFA idea from
implementation science was incredibly sexy—her word—she said, but Simon, that’s not the
issue. I’m on board, but my stakeholders are physicians
who are already engaged in quality improvement. If they think it needs fixing, they’re already
doing it. If they’re not doing it, it’s because they
have no motivator to fix it. It’s not a metric. Now, that’s a challenge, because you have to
dig into, well what metrics are they able to be responsible for and how can we help
them? But the fundamental conversation she had with
us was trying to understand: if my team came back, if the community partner came back to
me and said cancer prevention is really wonderful, we’re already hitting those metrics, thanks
for talking to us. Can you help us on X? What is my response? And I think thinking about, how do we change,
really change the frame on team science is going to matter. AMY: That’s a really excellent point. And also points to, I think, one of the big
gaps between trying to work with community-based practices and being researchers funded
through NIH: if, you know, you are funded by NIH, you’re out of a university,
and universities reward you for getting R01s, because R01s bring indirect costs and things
like that. Not to sound too cynical. And what we’ve tried to do, at least in QUERI,
is to be a little bit more broad in thinking about how we incentivize researchers to
be implementation scientists, and how to be implementation practitioners. And we can get away with doing that because
our funding is clinical care dollars, and it can be used to fund direct implementation
efforts, oftentimes without an IRB. So that helps a lot with our partners because
we can act/work with them more quickly. But one of the things we’ve started to experiment
with was: what if, instead of having investigators pick what disease or topic they want to study
and do an implementation project on, we have our regional health system
leaders—called Veterans Integrated Service Networks, or VISNs; I had to throw out the acronym, I’m from the VA, it’s a requirement—
use something like Mentimeter, where
literally we did live voting to pick among the nominated top clinical priorities—pick
a top two or three that they wanted our QUERI money to be spent on for quality improvement
purposes. And after a nomination process by which we
solicited through surveys and interviews, what was keeping them awake at night, they
came up with a list. And what was really surprising was, in the
back of their minds, they were mentioning HEDIS, they were mentioning quality
metrics. Absolutely. But it was almost like a conversation when
you interviewed these health system leaders, it’s like, yes, I know I need to work on
my readmission metric, but, you know, we’re really struggling with opioid and pain treatment. And so the idea of having a win-win situation,
where you can at least try to provide a quality improvement package that can maybe be about
changing or moving the needle on the quality metric, but also solving a more vexing problem
in their system, was sort of the sweet spot we were aiming for. So, after the live voting process, they picked
three priority topics of suicide prevention, pain, opioid treatment, and community care
coordination or care coordination across different providers. And essentially, we required that our investigators
co-lead with a regional network leadership person on an application; basically, they would
get up to, you know, a couple million dollars to improve quality directly
based on these HEDIS metrics, or whatever quality metrics they were benchmarked on. Their goal was to actually move the needle
on these quality metrics using implementation strategies derived from the implementation
science world. So, it could have been facilitation equipment
on and feedback or whatever. But you can do a study where you can have
a rapid cycle testing or smart design or something, or adaptive design, but your ultimate goal
was to move the needle on a quality metric, and essentially you were going to be implementing
evidence-based practices. So, we’ve just launched the first cohort of
these, and we’ll see what happens. We’ll see if it actually engages a situation
where you have our researchers incentivized to improve quality because, gee golly, we’re finally paying them to do it—from the
investigator side of things. MELISSA: All right, so I wrote down “humility”
before this started. And then the second thing I wrote down was
“entrepreneurship.” Like that mindset, that ability to be able
to do rapid processes, rapid cycles, iterative design, being agile, being able to unlearn
something and learn something new, being open. That’s all the bases of entrepreneurial spirit,
spirit and hiring, making sure people are on that team with that mindset in the lab. And then I wrote enlightened leadership. Leadership that can understand the value of
implementation science and what it brings, and to be able to design and align the incentives,
even though there may not be a HEDIS metric. But to be able to design and align incentives
to make the case for why we have to do something, and why we have to be agile, and why we have
to try it this way and that way. And then, you know, foster things like the
Shark Tank kind of experience. And then obviously grounded in health equity,
as was said up here, and inclusion. But one other thing is that a proper laboratory
is usually funded by a funding announcement, or an RFA. And the design of RFAs, or funding opportunity
announcements, dictate and drive a lot of how a laboratory, or a center, or a research
project is set up. When we talk about implementation science, the
funding opportunity impacts implementation, or de-implementation, or even sustainment
for that matter, especially patient navigation, being one of those interesting sustainment
conundrums. So, I think that’s a really important
point for the laboratories: to have the right FOA. DAVID: And I think we all struggle, certainly
on the funder side, with certain limitations in terms of the amount of time that we can
cover—recognizing, and I think this has come up over the last couple of days, that there
are certain stages of this process that are going to take a reasonable amount of time,
energy, ideally, funds, resources, to be able to make sure that everyone has a chance to
come together. And so, we’re curious if our panelists—just
keep typing in those responses. These are awesome. For the moment we’re going to press pause
on that. Okay, this is great. But given that, we, you know,
with certain RFAs, have been trying to figure out: how do we embed this concept of a laboratory? Curious if there are other models that anyone
wants to bring up? They could be from other fields, other ways
in which you see a similar concept working, and is there anything that we can take
from those examples as we’re driving for engagement? We’re driving, ideally, for sustainment over
time, we’re driving for more sort of organic or grassroots identification, of what the
right questions are. Anyone who wants to start—
just any models that you can think of? NOAH: Yeah, so I would start just by, you
know, just by reflecting on industry, and thinking about how, you know, nearly all industries
spend some meaningful percentage of their work on R&D. And, you know, we spend heaps, and heaps,
and heaps, and heaps of dollars on health service delivery. And, you know, when we devote pure research
dollars to health services, that’s one thing, but it doesn’t exactly look like what industries’
R&D would look like. If, you know, they were doing R&D about health services delivery,
it would probably look more like what we’re talking about with an implementation science
laboratory. So, that sort of comes to mind as a potential
place for learning. SIMON: I was going to build on that and just
to think that, you know, I know many of us are cautious about industry relationships,
particularly pharma, but I’ve had some really great conversations with pharma leaders who
are really very explicitly thinking about reformulating their entire investment portfolios
for community benefit, focused on health equity. So while many of us may think that an industry
is focused on their consumer target, larger industry partners are really thinking about
much bigger publics and ways to engage as a public good. And I think it would be an interesting proposition
to think about the relationships between pharma, and payers, and large employers that have
an opportunity to really be a different kind of laboratory, because their field of impact,
the scope, the scale that we’re all excited about, is so different when you’re thinking
about impacted lives. I think occupational health, for example,
does a much better job of engaging employers as targets for their interventions, but I
think we’ve also—I’ve had some response from local chambers of commerce, because they
have such a different perspective, but they are a captive audience, right? Employers and local chambers are inherently
interested in trying to create groups to make change. Now, their focus is a little different, but
I think their orientation as sort of public citizens is something we really should think
about how to capitalize on. And then, lastly, there are lots of existing
organizations who have better ties into the grassroots, in many cases, than a lot of research
programs. So when we think about—I mean, the coolest
acronym ever; who does not want to be a member of NACCHO?—the national organizations of county
and city health officials, there’s a whole set of people who are looking—and I think
we touched on this yesterday—practitioners who are already engaged at multiple levels
and who are those boundary spanners that we’re looking for to really advance a research opportunity. AMY: Those are really great ideas, and to
expand on those further, one of the ways in which we’ve built from the ground up some
partnerships with community practices is through matched public-private partnership funding. And we had a unique way of doing that in the
state of Michigan, because Michigan is one of a handful of states that has an active
Medicaid match program. And a Medicaid match program has been in public
law in a Medicaid system since the 60s. And it’s basically administrative funding
that goes to the state that allows the state to basically match one to one using Medicaid
dollars, CMS dollars, with a nonfederal source to work on quality improvement—essentially,
quality improvement activities that would benefit recipients of Medicaid in the state. And only a handful of states have been able
to take advantage of this. There’s some rules around whether or not,
I believe it has to be a state-funded institution, and things like that. But it’s really afforded a great opportunity
because it allows folks to double their money: if you’re working with a foundation,
or even your university department, they can double their money by getting the Medicaid match. As long as the work that you’re doing is for
the services enhancement and delivery and in quality improvement of services related
to sites that serve majority Medicaid patients or consumers who might be eligible for Medicaid. So, it’s been a wonderful opportunity for
investigators to really promulgate their best practices or evidence-based practices, into
lower income community based settings, with a foundation, or a department with match funding,
being able to pay for that. So, there’s really this idea of skin in the
game ownership from not only the foundation or the department, but also with the state,
and the state in turn looks to those results through the Medicaid match program to see
about whether or not they want to sustain that over time. And some of them end up in many respects picking
up the tab for paying for implementation, or they pay for the services using reimbursement
codes if that’s something that they want to sustain over time. And we’ve done this in the state of Michigan,
primarily for telemental health, telepsychiatry, and also for child, adolescent health, and
for potentially billable services and collaborative care, but it’s just been a unique opportunity. And we pretty much, on the QUERI side, have
stolen that idea and created what’s been called a partnered
evaluation initiative, where we have our operational leaders pay for—primarily they pay for the
majority of an implementation evaluation that gets conducted on a topic that they’re passionate
about, that keeps them awake at night as a regional leader. And then, essentially, we solicit through that process the
opportunity for investigators. So, the investigators, on the one hand, do
lose some autonomy. It’s not the topic that they want, but oftentimes
some of the best ideas come from our community partners, and sometimes that’s a great way
of being able to conduct a lot of really robust work in implementation. So, it’s just something to think about and
seems to have worked in our mission and integration partnership. MELISSA: I have three solid examples across
different institutes. I’m funded by six different institutes at
NIH, and there are three examples that could be used in different ways. First, an R24 mechanism that was used by NIMHD a
while back had three specific parts. There was a three-year partnership-type
phase, a five-year implementation phase,
and a two- to three-year dissemination phase. And each one you had to compete for, but
it helped walk people through the various phases intentionally. Remember, the design of the FOA makes
a big difference. Second, at NCI, I’m one of the PIs of the
CPACHE grants—the Comprehensive Partnerships to Advance Cancer Health Equity—and what’s
really important about the structure of that RFA is that it requires a comprehensive
cancer center to partner with a minority-serving institution or institutions. So, one to two. So that requires the mindset upfront and the
infrastructure, and the budget is equally allocated across an MPI format. So, you can have a center that’s not in one
institution, which is enormously important in this type of work. It’s more inclusive. It levels the playing field. It requires a community engagement core as
well. So that structure is really important—one
that I would add to that pot. And the third one is from NIA. There’s a GEMSSTAR mechanism, and GEMSSTAR
is like an R03, but it’s a little bit larger because it requires the investigator to get
outside funding, like from AOA or other aging agencies, to add to the pot of funds. So, it becomes like an R03 on steroids, and
the important part about this is it takes junior investigators from
outside of the aging field and uses their—so I got one of those a while ago—uses
their lens. So, for me, it was OB/GYN, the gynecology
side, not the OB side. And then applies aging to it. So, it’s a way of, again, forcing that structure
and getting more of the pipeline in. So, those three examples. DAVID: Okay, so great to see all of these
responses. We’re going to move on to the next one, which
I think we can do, hopefully. Looks like there’s a right arrow at the end. Yeah. And we’re actually going to multitask because
we have about 10 minutes left, and we want to make sure that we get additional questions
from, from all of you. So, the question that we’re trying to queue
up for folks to be able to help us with. There’s been a lot in the scroll, a lot of
comments about the importance of engaging stakeholders. You’ve heard some potential models for that. We’d love to capture more of them, so this
question we’ll keep open for a while, so that hopefully you can give us your, you know,
add your wisdom to the mix in terms of this really important—this crucial concept of—whatever
our stakeholders are within the specific theme that we’re trying to advance, how do we best
engage them? So, I’ll be looking for that to go from zero
ideally, to 80, 90, 100, 110. Why not? And that’s to all of you who are listening
online. But so, in the meantime, just want to spend
the next few minutes, just seeing what questions, comments do folks have in the room, and so
we’ve got the table mics, most of them should work, but we’d love to see what questions
you might have on this topic. BRIAN: Thanks to the presentation
panel. Noah, a question about the implementation science
lab concept. This is Brian Mittman. You know, I can see the applicability, the
feasibility, and the value of using the laboratory approach when we’re dealing with simple fixed
interventions where we’d like to know if A is better than B, where A is always A, and
B is always B, and for highly robust implementation strategies where they’re relatively invariant
to contextual factors, and so on. The question is how the concept would apply
to the interventions that we more often than not encounter where the fact that it works
for one site at one point in time doesn’t mean it’s going to work again, you know, two
months later or the same intervention A has highly variable and differential effects across
different sites. So, in other words, we need to tailor and
modify, and we need to iterate and keep on top of the heterogeneity, the variability. NOAH: So, first of all, I think that there’s
no reason why you can’t run pragmatic cluster trials on interventions that require adaptation. I think the issue is, you still have to have
a sense of what the—what is the core, and what sorts of adaptations are going to be
enabled, and you have to, you know, have an embedded process eval to understand, you know,
what happened, and why, and how, and so on. But the sort of dichotomy of trial-based learning,
versus other kinds of learning aside, you know, if the core of the lab is about a partnership
between those interested in doing their implementation as best as they can and researchers interested
in gleaning scientific, sort of generalizable knowledge about that process, you know, interventions
like the ones you talk about that are messy and, you know, needing adaptation and ones
we haven’t figured out—we’re not even close to figuring out how to optimize yet—would
be the ideal scenario for a lab, because, you know, both parties win by sorting through
the heterogeneity and figuring out how to adapt, and why they need to adapt, and so
on. AMY: Yeah, I think that’s a great point. I would also add, too, that maybe, this could
be an opportunity for some of these sequential multiple assignment randomized trial (SMART) or adaptive
designs, where you embrace the heterogeneity and you deliberately take advantage of the
fact that in some places you’re going to have early adopters that are just gung-ho and wanting
to do your thing. And then, at the same time, you’re going to
have places that struggle, and oftentimes struggle because of contextual factors that
need to be addressed anyway. And so, use the opportunity of that heterogeneity
and SMART designs to test different implementation strategies, especially more intensive ones
that might be able to do a deeper dive into the contextual issues. SIMON: So, just to spell out the sort of undercurrent
there. I don’t think a laboratory means a single
kind of setting, and I think many of us do have relationships with single kinds of settings. I know I have, you know, over the long
term, but I think inherently this comes back to the point that Dr. Emmons made yesterday
about trying to capitalize on variability. You want a laboratory to have heterogeneity. So, I think within that, it sort of begs the
question—“Oh, I’m not supposed to use that phrase, it’s never right.” We need to think about what
the scope and span of a laboratory, of a community, of a collaboratory is. Are we talking about regional or national,
in order to capitalize on that kind of heterogeneity of settings? Some of which may have more or less capacity
to do the work that we want to do. But I don’t think, you know, to Brian’s question,
that it inherently assumes that we have one kind of participating site within a laboratory. The one thing I would pitch, though, is that
I think particularly around what came up for me in thinking about the IT talk
elements this morning. I think we have a lot of—we’re scientists,
we have a data bias, so more data is good and the assumption is the data that comes
easily, is the best kind of data that we want, because easier data is cheaper—and so then
we’re biased towards the idea that, for example, you have an enterprise-wide interoperable
electronic health system. So, I want to come back to the idea that the
heterogeneity of your lab has to account for the fact that most clinics don’t have systems
that are designed to talk to other clinics’ systems. We want that as researchers. We aspire to that from a consumer perspective,
but we need labs that also represent those partners that don’t have what makes life easier
for researchers, which is, for example, enterprise-wide Epic. Setting aside the challenges of Epic. So, I think really thinking about how to create
a lab that deliberately does not have the capacity to provide you with the data easily,
is a fundamentally important implementation science question, and I’m worried we’re going
to risk overlooking it. DAVID: Yeah. And I think something that seemed like it
actually cut across three or four of the different group presentations—I heard it in
the big data one, I heard it on the equity side of things. Okay. I saw one. RINAD: Rinad Beidas from Penn. Thank you so much for a great talk and panel. This is so interesting. I’ve been sitting here thinking a little bit
about the term “implementation lab,” and kind of this conceptual issue. I think, in implementation science, we have
a lot of terms. And it sounds a lot to me, like, learning
health system, implementation lab, community partnered work, practice-based research networks,
and I’m wondering if you all could reflect on, kind of, what the thoughts are on, you
know, having a new term. Is this going to be the term, or how are we
going to incorporate all these other terms for something that seems similar? DAVID: Any thoughts? Or is that for me? For me first. Yeah. You know, we do. Right? We continue to invent concepts or reinvent
concepts or change the names at various points. I mean, I think we likely, as has been said
so far, have to be thoughtful about how we’re communicating and to whom we’re communicating. Because I think it makes sense that “laboratory,”
in some instances, helps to explain—to folks who are familiar with the concept of a laboratory
in certain places, but wouldn’t be otherwise—what your laboratory is if you’re studying implementation. So, in that sense, I think there’s a useful
analogy or, you know, a bridge for others. Yes, if the notion of laboratory is, oh,
I am now—it was said before—a guinea pig, I am the one being studied, I don’t have agency in terms of the questions
or the supports that I need, it’s just that I am at someone else’s mercy—I mean,
that’s definitely what we don’t want. It’s probably going to be the case that we
won’t find, this would be my guess, that we won’t find a single term that transcends all
of the different perspectives. I think we’re probably going to have to be
a little bit fluid and flexible. I would hope that we get the concept right, the things underlying whatever label, in the same way that we have tons of definitions for dissemination, and diffusion, and implementation, and translation, all of those things, across countries, across different communities. Ideally, we explain what we mean, and we may have to spend a little bit of extra time to make sure that we are conveying what we mean, rather than just saying, “I’m doing a lab,” and potentially turning off 75% of the people I’m talking to right away. I don’t know if others have thoughts. SIMON: Well, I think you’re right, but, you
know, words are important, but words go with deeds, right? You’ve got to walk the walk as well as talk the talk, and then you
have to lead by example. And I think this comes back to what Melissa
was arguing. It’s like, whether that’s embedded in the
FOA, or from the perspective of your PI and PI structure, you want to build the governance
into whatever this entity is going to be that shows what partnership and collaboration looks
like. And not just in the short term, but over the
long term, to be able to flex and respond to the changes in the needs that your community
partners want, you know, much as QUERI has been able to do. MELISSA: I appreciate that. And perhaps there’s a word that you
can put in between the words “implementation” and “lab,” like “learning” or something
that softens it up and, again, levels the playing field a little bit, understanding there has
to be humility at the core of every one of these things. DAVID: Yeah. So, recognizing that, okay. ATTENDEE: All right, so, I wanted to thank
you for all the discussions. Very informative. I wanted you guys to think about resource-limited
settings. What would this look like, and what would they need as a foundation to set it up? Because here, it might be easy to say, engage
partners who already are funded. But how do you do that in such settings? Thank you. DAVID: I actually think that that’s a wonderful
question for us all to take on. I think it relates to this question, which will remain open as we think about different settings, different communities, etc. We welcome all of your advice, ongoing, and obviously we’re spotlighting this because we need your help. We need to sort this out conceptually, and mechanically as well, because it is something that we see helping us scale up efforts in implementation science, so that we’re not constantly having to identify new settings. So, please continue; we’ll keep that open,
continue to add and think about the range of different settings of populations that
we really want to collaborate with. So, thanks very much to our panel. [Event Concludes]
