Friday, February 25, 2011
The February 2011 issue of Health Affairs published an article criticizing the Food and Drug Administration (FDA)’s humanitarian device exemption (HDE) for deep-brain stimulation in patients with treatment-resistant obsessive-compulsive disorder (OCD). According to FDA regulations, HDEs are granted to devices that are intended for conditions affecting fewer than 4,000 patients annually.
This article, using information from the National Institute of Mental Health (NIMH), claims that there are approximately a half-million people suffering from this form of OCD, more than are allowed if a treatment device is to qualify for an HDE, and enough to do the appropriate clinical trials to approve the device through the premarket approval (PMA) process.
The information in the article is compelling, and the FDA will need to evaluate the allegation and decide whether any modifications need to be made. In the meantime, though, what are the institutional review boards (IRBs) supposed to do?
Multiple IRBs have reviewed and approved this device as a humanitarian use device (HUD). Patients (remember, this isn’t research) have gone through invasive brain surgery to receive this device. If an IRB determines that the approval needs to be revisited, what are its options? The IRB could determine that the HDE does not apply to the condition and disapprove the device as an HUD.
However, there are no open clinical trials for this device in treatment-resistant OCD, and it is unclear if Medtronic (the device’s manufacturer) will open one. Disapproval will effectively eliminate the ability to use the device at an institution. An IRB could approve the device again under the HDE regulations, but is that sound in terms of regulations and ethics? As long as the FDA continues allowing deep-brain stimulation, an IRB can state that the determination concurs with the regulations. A more subjective conclusion surrounds the ethics.
One argument to continue providing the device under the HDE regulations is that it provides treatment to the patient. However, rigorous testing to meet FDA approval criteria for effectiveness has not been done. Is it ethical to say that a device that has little or no effectiveness testing behind it can treat a person? You might as well say that you can be cured by a drug that looks good on paper. Should the manufacturer be forced to withdraw access to the device until structured clinical trials can be performed? Can the information obtained from patients who have already received the device be used to fast-track the PMA application?
Lucky for me, I don’t have to answer these questions. That is left to the FDA and the IRBs. The article’s authors have presented a thorough and well-thought-out argument that the FDA cannot brush aside. IRBs should not dismiss this paper either, as it raises ethical issues surrounding these invasive devices that have, at most, minimal effectiveness testing behind them.
Thursday, February 17, 2011
- Do you have a question? Post it and we’ll do our best to answer it.
- Do you need to vent? Let it all out, and we will provide an encouraging word.
- Are you in search of some mock test questions?
We will be posting a mock test question or two each day to stimulate conversation and discuss potential answers. Here is the first question that I posted on the page:
45 CFR 46 applies in which of the following circumstances?
a. DHHS is conducting or sponsoring research with human subjects
b. Industry is conducting or sponsoring research with human subjects
c. The research involves human subjects
d. An investigational device or drug is undergoing testing in human subjects
Monday, February 14, 2011
In my world, January is metric month.
I spent last month analyzing data from the 2010 calendar year, calculating statistics, and overlaying the information with our workflow. The result: glowing data that shows we have improved our process since last year (insert cheer here).
Inevitably, when I present this data, I will get asked the following question by my director, the VP for research, my staff, and researchers: “How do we compare to our peer institutions?” My response is always, “Great question.”
As I learned at the 2010 Advancing Ethical Research Conference, many institutions are also trying to determine how their metrics compare to those of their peers. While every human research protections program (HRPP) is interested in the same information, such as how long it takes to approve a project or how many submissions a program receives, each institution has its own way of calculating this information. It’s therefore difficult to make direct comparisons among institutions.
For instance, is the approval date the day that the IRB approved the protocol, or the date that all the conditions were met? Do you use calendar days or business days? Do you include all days, or do you exclude days that information was pending from the investigator?
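To make the stakes of those choices concrete, here is a minimal Python sketch (the function and its conventions are my own illustration, not any institution’s standard) showing how the same submission can produce three different turnaround numbers depending on whether you count calendar days, business days, or exclude days when information was pending from the investigator:

```python
from datetime import date, timedelta

def turnaround_days(submitted, approved, pending_periods=(), business_days_only=False):
    """Count review turnaround between submission and approval dates.

    pending_periods: (start, end) date pairs when the IRB was waiting on
    the investigator; those days are excluded from the count.
    """
    def counts(d):
        # Monday-Friday only when business_days_only is set
        return not business_days_only or d.weekday() < 5

    def pending(d):
        return any(start <= d <= end for start, end in pending_periods)

    days = 0
    d = submitted
    while d < approved:
        if counts(d) and not pending(d):
            days += 1
        d += timedelta(days=1)
    return days

# One hypothetical submission, three reporting conventions:
sub, app = date(2011, 1, 3), date(2011, 1, 31)
print(turnaround_days(sub, app))                           # calendar days: 28
print(turnaround_days(sub, app, business_days_only=True))  # business days: 20
print(turnaround_days(sub, app,
      pending_periods=[(date(2011, 1, 10), date(2011, 1, 14))]))  # minus pending: 23
```

A 28-day metric at one institution and a 20-day metric at another may describe identical performance, which is exactly why shared calculation standards matter.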
I attended many quality assurance and metric sessions at the 2010 AER Conference, and met lots of people interested in quantitatively calculating success and identifying bottlenecks. During the conference, representatives from institutions that review their own research expressed frustration with metrics, as did larger organizations such as the Association for the Accreditation of Human Research Protection Programs (AAHRPP) and Western IRB. It seems that everyone is waiting for nationally accepted standards, and instructions on how to calculate them.
Without robust quantitative information, it is difficult to do research on the HRPP review process. Do faster review times lead to greater IRB non-compliance with federal regulations? That is hard to answer without standardized data.
Accepted standards and calculation methods would be invaluable to institutions interested not only in comparing how they are doing in the field of regulatory process, but also in identifying colleagues to partner with, as well as best practices that they can apply to their own programs. I plan on scouring other institutions for their HRPP metrics and calling them to discuss their calculations. I will be looking at our data in comparison to the AAHRPP metrics released last year, and to the Western IRB information I received at AER.
I will be posting our metrics on my institution’s website, and am happy to discuss how I calculated these statistics with anyone interested (you can get my contact information here). Using the PRIM&R connection and Ampersand, we can develop these metrics that will only strengthen the scientific data supporting the regulatory review process.
Thursday, February 10, 2011
February 1, 2011
Medical detectives find their first new disease: Doctors at the NIH are working on solving unanswered questions about a new disease.
What’s a little swine flu outbreak among friends?: A study finds that kids who catch the swine flu are more likely to contract it from friends than classmates.
Pneumonia DNA morphs to dodge vaccines: Learn how a single strain of pneumonia bacteria has evolved over the past 30 years.
Childhood: obesity and school lunches: Researchers say another factor in childhood obesity may be school lunches.
February 8, 2011
Social scientist sees bias within: Does psychology attract a more liberal population?
On evolution, biology teachers stray from lesson plan: A recent study finds that 28% of biology teachers consistently follow the recommendations of the National Research Council to describe evolution.
The matriarch of modern cancer genetics: Read an interview with Janet Davison Rowley, a premiere researcher in modern cancer genetics.
Drugmakers’ fever for the power of RNA interference has cooled: The development of new drugs using RNA interference may not be as promising as once thought.
Lymph node study shakes pillar of breast cancer care: A new study shows that a routine procedure in breast cancer care may not be necessary.
Wednesday, February 9, 2011
In November 2010, the Department of Health and Human Services (DHHS) Office for Human Research Protections (OHRP) published new continuing review guidance for institutional review boards (IRBs). The new guidance (more than 40 pages) is more detailed, and gives IRBs an opportunity to use the new information to fit their institutional needs, as long as they satisfy the regulatory requirements set forth in 45 CFR 46.
It is important to note that the recommendations are directed toward research that is conducted or supported by Health and Human Services (HHS). However, many institutions apply the guidance to other research as well. The guidance indicates that continuing review should be similar to the initial review, applying the same guidelines throughout the process. However, we also read that:
When conducting continuing review, the IRB should start with the working presumption that the research, as previously approved, does satisfy all of the above criteria. The IRB should focus on whether there is any new information provided by the investigator, or otherwise available to the IRB, that would alter the IRB’s prior determinations, particularly with respect to the IRB’s prior evaluation of the potential benefits or risks to the subjects.
Thus, during continuing review, IRBs should determine if any information provided by the researcher will alter the initial assessment. In addition, OHRP guidance states that IRB reviewers should pay particular attention to the following four aspects of the research:
- Risk assessment and monitoring;
- Adequacy of the process for obtaining informed consent;
- Investigator and institutional issues; and
- Research progress.
The guidance later states that, "an IRB administrator or staff member who is also an experienced member of the IRB may be designated by the IRB chairperson to conduct continuing review of research under an expedited review procedure." So it's not prohibited for staff to review continuing review submissions, but OHRP wants to ensure that the person is a member of the IRB and has the qualifications to conduct the review. Thus, if you have IRB staff who are not IRB members but are reviewing continuing review submissions, you will want to discuss this information with your institution to determine the next steps.
I have found the information in the new guidance to be helpful in assessing our continuing review process. We will be assessing our procedures in the near future to ensure that we're aligned with these recommendations. Will your institution be using this guidance, as well?
Wednesday, February 2, 2011
A few weeks ago, I had the unfortunate experience of becoming a crime victim. Upon arriving home from work, I discovered that someone had broken into my home in broad daylight and rifled through the belongings that I share with my wife and newborn son.
Since discovering the break-in, I’ve felt a range of emotions. After the initial blast of fear, I was simply thankful. No one was home, and thus no one was hurt. The thieves didn’t take much and were relatively neat while they did their business. It could have been so much worse. However, over time I started feeling angry. It took me a week to rationalize why I was angry, and once I formulated the sentence, I couldn’t believe the explanation was so simple and paralleled our field of research so closely.
They entered my home without my consent.
I think that we all could agree that breaking into someone’s home and taking what you want is a criminal and immoral act. But what happens if we change a few words in the above sentence? What if I replaced the word "entered" with the word "took," and placed "tissue" in lieu of "home" before adding the phrase "for the purposes of medical research"?
They took my tissue, for the purpose of medical research, without my consent.
Is this a criminal act? Is it immoral? What about taking my data without my consent, or taking my discarded tissue without consent? Are any of these scenarios criminal or lacking morality? Why are some legal, and some not? If it’s legal, is it still moral? Things get complicated fairly quickly.
Many people may be familiar with a book by Robert Fulghum entitled All I Really Need to Know I Learned in Kindergarten, in which he enumerates the life lessons he learned in kindergarten. Between "clean up your own mess" and "say you’re sorry when you hurt somebody" lies the rule "don’t take things that aren’t yours." Wouldn’t it be nice if this were the standard across every institution and agency in the field?
Having read The Immortal Life of Henrietta Lacks, I have no doubt that scientist George Gey had good intent when he took Henrietta’s tissue sample and, without her knowledge, grew and distributed HeLa cells. Similarly, I have no doubt Robin Hood had good intentions when he stole from the rich and gave to the poor. Despite their intentions, one could argue that they both took something that was not theirs and, thus, instead of becoming a great scientist or a great humanitarian, both simply became thieves.

Research subjects should never feel victimized by researchers. They should never feel that a researcher has violated their figurative home and stolen their belongings because the researcher failed to inform them accurately. Subjects deserve better. They deserve our honesty. They deserve our morality. They deserve full and thorough informed consent.
It takes time. It takes education. It takes patience. It takes the ability to listen.

Failure to do any of the above is almost criminal.
It’s a crime not worthy of researchers, merely thieves.