Research is complicated

I’ve been away the last few days and trying to relax.  Now that I’m catching up, I see Austin Frakt and Avik Roy are having a slight disagreement as to the association of Medicaid with outcomes.  Long story short, Avik points to the literature that shows that Medicaid is associated with worse surgical outcomes.  Austin asks the important follow-up question: if you believe that’s true, are you suggesting that we make those on Medicaid more like the uninsured?  I don’t want to wade into this too deep.  I will make a number of small points:

-Insurance doesn’t equal care.  Insurance can affect how likely you are to get care and how quickly you might get it.  But any study that looks at insurance has to adjust for many, many other variables in order to get the true effect of insurance.

-There is a large body of literature out there on insurance.  A lot of it shows that people with private insurance do better than those with public insurance or those without insurance; that should not be a surprise.  Most people (and most of your docs) would rather have private insurance than Medicaid.  But would you really rather have no insurance than Medicaid?  If so, that is everyone’s right.  Don’t get the Medicaid.  I wager few would make that choice.

-I find it interesting that most of the literature that Avik cites is about surgery.  Surgery is different from many other types of care in that, like emergency care, it is harder to refuse.  So it may be that the uninsured are getting surgical care on a compassionate basis.  Few would provide a screening mammogram or yearly colonoscopy to someone uninsured, however, and you would get those with Medicaid.

On the whole, I think the debate is healthy and good.  No one is claiming that Medicaid is perfect, or that we should all just get Medicaid.  There is always room for improvement.  I also don’t necessarily think that Avik is arguing that we should just dump all the Medicaid people on the street, which is what I think some (not Austin) are implying he is saying.

And I’m stopping there.  Were I on the radio, I would be happy to debate this.  But blogging is too asynchronous when I’m joining in so late.  Except for one thing, and here I’m going to take a tiny issue with Avik’s first post.  It was based on (as far as I can tell) a meeting abstract.

I have a long-standing beef with promoting research that is presented in abstract form at scientific meetings.  It makes for great press and lots of splash, but I think it’s a real problem.  So much so that I have refused to participate in media events or press releases about my work unless it has already appeared in the peer-reviewed literature.  Why?  It’s not rigorously reviewed.  Here is the total amount that we are able to know about the methods of the study Avik cites:

Methods: From 2003-2007, 893,658 major surgical operations were evaluated using the Nationwide Inpatient Sample (NIS) database: lung resection, esophagectomy, colectomy, pancreatectomy, gastrectomy, abdominal aortic aneurysm repair, hip replacement, and coronary artery bypass. Patients were stratified by primary payer status: Medicare (n=491,829), Medicaid (n=40,259), Private Insurance (n=337,535), and Uninsured (n=24,035). Multivariate regression models were applied to assess outcomes.

That’s it.  Was it a good study?  Valid?  How can you tell?

This ticked me off so much as a fellow that I actually studied it.  Specifically, we looked at abstracts presented at the Pediatric Academic Societies meeting, which is the largest pediatric research meeting.  You can read the full paper, but here’s the abstract:

OBJECTIVE: The validity of research presented at scientific meetings continues to be a concern. Presentations are chosen on the basis of submitted abstracts, which may not contain sufficient information to assess the validity of the research. The objective of this study was to determine 1) the proportion of abstracts presented at the annual Pediatric Academic Society (PAS) meeting that were ultimately published in peer reviewed journals; 2) whether the presentation format of abstracts at the meeting predicts subsequent full publication; and whether the presentation format was related to 3) the time to full publication or 4) the impact factor of the journal in which research is subsequently published.

METHODS: We assembled a list of all abstracts submitted to the PAS meetings in general pediatrics categories in 1998 and 1999, using both CD-ROM and journal publications. In each year, we chose up to 80 abstracts from each presentation format (“publish only,” “poster,” “poster symposium,” “platform presentation”). We chose either 1) all abstracts in each format or 2) when there were >80 abstracts, a random selection of 80 of them. We assessed each selected abstract for subsequent full publication by searching Medline in March 2003; if published, then we recorded the journal, month, and year of publication. We used logistic and linear regression to determine whether publication, time to publication, and the journal’s impact factor were associated with the abstract’s presentation format.

RESULTS: Overall, 44.6% of abstracts presented at the PAS meeting achieved subsequent full publication within 4 to 5 years. There were significant differences between the rates of subsequent full publication of abstracts submitted but not chosen for presentation at the meeting (22.2%) and those that were chosen for presentation in poster sessions (40.0%), poster symposia (44.1%), and platform presentations (53.8%). There were no meaningful differences between the presentation formats in their mean time to publication and their mean journal impact factor.

CONCLUSIONS: PAS meeting attendees and the press should be cautious when interpreting the presentation format of an abstract as a predictor of either its subsequent publication in a peer-reviewed journal or the impact factor of the journal in which it will appear.

I know that can be overwhelming, so here’s the gist.  We looked at a sample of all abstracts sent in to the meeting, and whether they were ever published in peer-reviewed journals.  The first thing I always remind people is that 87% of abstracts that were sent in were presented.  That’s a lot; very few were refused.  So I wouldn’t necessarily assume that just because an abstract is presented, it’s totally valid.  Second, less than 45% of the research presented was published in a peer-reviewed journal in the next four to five years.  So over half of what was presented at the meeting never was “really” published.
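The comparison above can be sketched as a quick back-of-the-envelope check.  The per-format publication rates come straight from the abstract; the ranking and labeling below are just illustration, not part of the study’s analysis (which used logistic and linear regression):

```python
# Subsequent full-publication rates by presentation format, as
# reported in the abstract above.
rates = {
    "publish only (not presented)": 0.222,
    "poster session": 0.400,
    "poster symposium": 0.441,
    "platform presentation": 0.538,
}
overall = 0.446  # overall publication rate within 4-5 years

# Rank the formats and note where each sits relative to the overall rate.
for fmt, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    side = "above" if rate > overall else "below"
    print(f"{fmt:<30} {rate:.1%} ({side} the overall {overall:.1%})")
```

The point the numbers make on their own: even the most selective format, platform presentations, is the only one that beats the overall rate, and even there nearly half of the work never reached a peer-reviewed journal.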

I’m not saying the results of the study Avik discussed aren’t valid.  I’m saying I can’t tell.  And neither can you, without more information.  The peer review for a meeting just isn’t the same as for full publication.  You have less time, different criteria, and almost nothing by which to judge the work.  Ideally, meetings would stop publicizing abstracts as if they were full studies, but neither they, nor the press, seem likely to do so.
