Friday, August 30, 2013

Do Physicians Spend Too Much Time With Computers?

A recent study of work hours of medical interns in the new era of duty hour regulations produced an interesting side finding, which is that modern medical interns spend about 40% of their time at a computer [1]. To some, this prompted concern that computers were drawing medical trainees away from patients and their care.

A finding like this certainly warrants attention. However, I wonder whether many of those expressing concern are asking the wrong question. The proper question is not whether this is too much time at a computer, but rather whether this amount of time compromises the interns' care of their patients or their learning experience.

Implicit among those who raise the question of too much time with computers is the assumption that computers are taking physicians away from patients. It is instructive, however, to consider historic data of how much time physicians spend in direct vs. indirect care of patients. It turns out that physicians have historically spent most of their working time in activities other than in the presence of their patients.

Time studies of hospital [2-6] and emergency [7] physicians show physicians spend about 15-38% of their time in direct patient care versus 50-67% of their time in indirect patient care, divided among reviewing results, performing documentation, and engaging in communication. Likewise, studies of outpatient physicians find that 14-39% of work takes place outside the exam room [8-9]. In addition, work related to patients when they are not even present at the hospital or office consumes 15-23% of the physician work day [9-11].

Therefore, this new study does not necessarily indicate that computers are drawing physicians away from patients. It is difficult to compare the proportions of these interns' time devoted to direct and indirect care with those of other physicians who have completed their training. However, it is worth noting that the four-fold ratio of indirect to direct care is not too far off what was documented for practicing physicians in the other studies. Interns have different time demands anyway, not only working longer hours but also devoting more time to educational activities.

Furthermore, we also have to consider the premise that there is good reason for spending more time in front of computers, as some evidence supports the notion that there may be value to patients, as well as to clinician education, in having access to knowledge in unprecedented ways that previous generations of physicians did not have [12-13]. There is no question that we must pay more attention to physician workflow with computers so that physicians are not unduly wasting time, especially time that could be better spent with patients. But we must also consider the benefits of computers, and try to determine the most appropriate amount of time for physicians to spend using them during their working hours.

References

1. Block, L, Habicht, R, et al. (2013). In the wake of the 2003 and 2011 duty hours regulations, how do internal medicine interns spend their time? Journal of General Internal Medicine. 28: 1042-1047.
2. Ammenwerth, E and Spötl, HP (2009). The time needed for clinical documentation versus direct patient care. A work-sampling analysis of physicians' activities. Methods of Information in Medicine. 48: 84-91.
3. Tipping, MD, Forth, VE, et al. (2010). Systematic review of time studies evaluating physicians in the hospital setting. Journal of Hospital Medicine. 5: 353-359.
4. Tipping, MD, Forth, VE, et al. (2010). Where did the day go?--a time-motion study of hospitalists. Journal of Hospital Medicine. 5: 323-328.
5. Kim, CS, Lovejoy, W, et al. (2010). Hospitalist time usage and cyclicality: opportunities to improve efficiency. Journal of Hospital Medicine. 5: 329-334.
6. Yousefi, V (2011). How Canadian hospitalists spend their time - a work-sampling study within a hospital medicine program in Ontario. Journal of Clinical Outcomes Management. 18: 159-164.
7. Chisholm, CD, Weaver, CS, et al. (2011). A task analysis of emergency physician activities in academic and community settings. Annals of Emergency Medicine. 18: 117-122.
8. Gilchrist, V, McCord, G, et al. (2005). Physician activities during time out of the examination room. Annals of Family Medicine. 3: 494-499.
9. Gottschalk, A and Flocke, SA (2005). Time spent in face-to-face patient care and work outside the examination room. Annals of Family Medicine. 3: 488-493.
10. Farber, J, Siu, A, et al. (2007). How much time do physicians spend providing care outside of office visits? Annals of Internal Medicine. 147: 693-698.
11. Chen, MA, Hollenberg, JP, et al. (2010). Patient care outside of office visits: a primary care physician time study. Journal of General Internal Medicine. 26: 58-63.
12. Buntin, MB, Burke, MF, et al. (2011). The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Affairs. 30: 464-471.
13. McCoy, AB, Wright, A, et al. (2013). State of the art in clinical informatics: evidence and examples. Yearbook of Medical Informatics. 8: 13-19.

Sunday, August 18, 2013

ACGME Releases Draft Clinical Informatics Fellowship Program Requirements For Public Comment

As with all medical subspecialties, the new clinical informatics (CI) subspecialty will need to develop fellowship training programs for those seeking to enter the field. After the first five years of the subspecialty (which starts in 2013), during which the training requirements to be eligible to sit for the certification exam can be met by the "practice pathway" or a "non-traditional fellowship" (i.e., "grandfathering"), starting in 2018 the only way to become certified will be through a fellowship accredited by the Accreditation Council for Graduate Medical Education (ACGME).

On July 29, 2013, the ACGME released a draft program requirements document and opened up a 45-day comment period for public feedback (with comments due September 11, 2013). This posting provides a summary of the 26-page document. In a subsequent post, I will provide the comments that the Oregon Health & Science University (OHSU) biomedical informatics program will submit to ACGME.

As with most training requirements documents, there is boilerplate (required of all specialties, in bold text style) and specialty-specific text (plain text style).

All CI programs will need to be administratively integrated with one of six specialties: Anesthesiology, Emergency Medicine, Medical Genetics, Pathology, Pediatrics, or Preventive Medicine. This does not mean that a program needs to be focused in one of these specialties; it only means that it must be administered by one of them. CI programs will not have their own residency review committees (RRCs), but instead will be reviewed by RRCs from these six specialties. Physicians from other specialties can enroll in any of these programs.

Programs will be required to be of 24 months duration, with the fellow having completed the program within 48 months.

There must be a single program director who is board-certified in CI or a subspecialty acceptable to the RRC. There are substantial administrative responsibilities for the director. He or she must also have five years' experience working in CI. There must be two additional faculty members, with the three faculty collectively devoting at least two FTE to administration, supervision, and teaching. There must also be a program coordinator to provide administrative support.

In addition to resources for education, the program must have a "clinical information system" that contains health and wellness data, includes clinical decision support, and is accessible in all (inpatient and outpatient) healthcare settings.

The program must of course have an educational program with clear competency-based goals that are distributed to faculty and fellows at least annually. There must be regularly scheduled didactic sessions. The document is not specific as to the content of the educational program, but of course the content must at a minimum prepare the student to pass the CI subspecialty board exam.

The educational competencies that the program must follow are based on the six ACGME core competencies, with some additional learning objectives specific to CI. The ACGME competencies and some of the CI extensions include:
  1. Patient care and procedural skills – leverage information and communication technology across the dimensions of healthcare to improve processes and outcomes
  2. Medical knowledge – demonstrate knowledge of informatics theory and practice
  3. Practice-based learning and improvement – develop skills and habits for self-evaluation and life-long learning
  4. Interpersonal and communication skills – communicate effectively, including serving as a liaison between information technology professionals, administrators, and clinicians
  5. Professionalism – demonstrate all aspects of professionalism, including the ability to recognize the causes and consequences of security breaches and to show sensitivity to the impact of information system changes
  6. Systems-based practice – in addition to understanding the operations of the healthcare system, be able to recognize and disclose the role of oneself and systems in medical error as well as identify and improve the impact of systems on clinical care
Programs must be evaluated at the level of the fellow, the faculty, and the program as a whole, via:
  • Clinical Competency Committee of three program faculty to evaluate fellows semi-annually
  • Faculty evaluations to be done at least annually
  • Program Evaluation Committee of two faculty and one fellow for ongoing evaluation of program
In addition, the document states standard requirements for duty hours, supervision, moonlighting, mandatory time free of duty, and maximum duty period length. Clinical work may be performed in the fellow’s primary specialty.

(Thanks to Ben Munger of University of Arizona for reviewing this summary and providing feedback. All errors, however, are my responsibility.)

Thursday, August 8, 2013

We Can Learn About the Difficulties of Healthcare Reform from the Health Problems of Former Presidents

One of the mantras of those who oppose healthcare reform is that it will deny people needed care. Programs that require measurement of healthcare quality or aim to discourage overuse of care are viewed by some as efforts to deny Americans their rightful access to healthcare. Consumers want "choice" to get the care they believe they need, and much of the healthcare system is well-configured with the financial incentive to meet that need.

This week, former President George W. Bush underwent a coronary stent procedure to open up a 70% blockage in one of his coronary arteries. The details of his symptoms are unclear but, as noted by an article in Forbes magazine [1], if President Bush was not having cardiac symptoms (e.g., chest pain), then there is no scientific evidence that the stenting procedure he underwent will prevent a future heart attack or prolong his life compared to just using optimal medical therapy (e.g., treatment of hypertension, hyperlipidemia, etc.) [2].

President Bush is not the only former President to have had possibly suboptimal healthcare for heart disease, which is still the top killer of Americans. President Bill Clinton also had heart problems, although he was acutely symptomatic and required urgent treatment. President Clinton was in suburban New York at the time and went to the nearest emergency department. This hospital had a referral arrangement with Columbia-Presbyterian Medical Center (CPMC) in New York City, where the former President was transferred. It turns out that CPMC, as great an academic medical center as it is, had poor performance on a number of quality measures in the New York State Cardiac Surgery Reporting System, a system whose data has been shown to be associated with beneficial clinical outcomes [3]. Even worse, the surgeon who operated on the former President had a worse-than-average rate of complications.

President Clinton did suffer a complication, and we cannot know for sure whether the complication was a result of the poorer quality care provided by his hospital or surgeon. But as noted in an article in Slate [4], this does raise questions as to the limits of consumer-driven healthcare. If two former Presidents, who presumably have more access to resources and information than anyone else on the planet, cannot make optimal healthcare decisions, can we expect the average consumer to do so? Of course, cardiac disease is one of those conditions for which we have more studies and more quality data than almost any other, and it gets worse from there.

It is unfortunate that a combination of politics and financial self-interest has created a climate of equating any attempt to rein in unnecessary healthcare with "denying" someone care. Focus groups of consumers show there is widespread skepticism, based on misunderstanding, of terms like "quality guidelines" and "evidence-based care" [5,6]. Efforts to have Medicare reimburse physicians for consultation about end-of-life care became "death panels" [7], the Politifact lie of the year for 2009. Efforts to be more appropriately evidence-based about the use of mammography in younger women were viewed as evidence of government malfeasance, when in reality more effective mammography would save the government money if it led to improved treatment outcomes for breast cancer [8].

I certainly support patient engagement in healthcare decisions. I applaud the "Choosing Wisely" initiative of leading medical societies to highlight care that is ineffective or outright dangerous [9]. But I remain, like many, frustrated that our political landscape and healthcare financing system impede a forthright discussion of the facts.

References

1. Husten, L (2013). Did George W. Bush Really Need A Stent? Forbes, August 6, 2013. http://www.forbes.com/sites/larryhusten/2013/08/06/questions-about-president-george-w-bushs-stent/
2. Stergiopoulos, K and Brown, DL (2012). Initial coronary stent implantation with medical therapy vs. medical therapy alone for stable coronary artery disease: meta-analysis of randomized controlled trials. Archives of Internal Medicine. 172: 312-319.
3. Jha, AK and Epstein, AM (2006). The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Affairs. 25: 844-855.
4. Sanghavi, D (2009). Talk to the Invisible Hand - The promises and perils of treating patients more like consumers. Slate, September 28, 2009. http://www.slate.com/articles/news_and_politics/prescriptions/2009/09/talk_to_the_invisible_hand.html
5. Carman, KL, Maurer, M, et al. (2010). Evidence that consumers are skeptical about evidence-based health care. Health Affairs. 29: 1400-1406.
6. Ross, M, Igus, T, et al. (2009). From our lips to whose ears? Consumer reaction to our current health care dialect. Permanente Journal. 13(1): 8-16.
7. Nyhan, B, Reifler, J, et al. (2013). The hazards of correcting myths about health care reform. Medical Care. 51: 127-132.
8. Quanstrum, KH and Hayward, RA (2010). Lessons from the mammography wars. New England Journal of Medicine. 363: 1076-1079.
9. Cassel, CK and Guest, JA (2012). Choosing wisely: helping physicians and patients make smart decisions about their care. Journal of the American Medical Association. 307: 1801-1802.