Tuesday, October 17, 2017

The Still-Incomplete Answering of Questions About Physician Time With Computers

Another couple of studies have been published documenting the amount of time physicians spend with computers in primary care [1] and ophthalmology [2] clinics. Clearly these and other recent studies [3,4] show that physicians spend too much time with the electronic health record (EHR), especially when phrases like “pajama time” enter into the vernacular to refer to documentation that must take place after work at home because it could not be completed during the day.

But one aspect of these studies that has always concerned me is that there is no measure of the appropriate amount of time for physicians to spend outside the presence of the patient. This includes tasks like reviewing data that inform current decisions as well as entering data that other team members caring for the patient will use to inform their own decision-making. While some dispute the value of our current approaches to measuring the quality of care delivered [5], I believe that most physicians accept there should be some measure of accountability for their decisions, especially given the high cost of care. This means that some time and effort must be devoted by physicians to measuring and improving the quality of care that they deliver.

The newest time-motion study from primary care once again shows how much of the physician's day the EHR consumes [1]. In this study, that time was found to be 5.9 hours of an 11.4-hour workday, of which 1.4 hours came after hours. But if we look at the tasks on which this time was spent (Table 3 of the paper), we cannot deny that just about all of them are important to overall patient care, even if too much time is spent on them. Do we not want physicians to have some time for reviewing results, following up with patients, looking at their larger practice, and the like?
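As a quick back-of-the-envelope check on what those figures imply, here is the arithmetic using only the numbers quoted above (illustrative only, nothing beyond the paper's reported totals):

```python
# Figures quoted above from Arndt et al. (hours)
ehr_time, workday, after_hours = 5.9, 11.4, 1.4

print(f"EHR share of the workday: {ehr_time / workday:.0%}")           # ~52%
print(f"After-hours share of EHR time: {after_hours / ehr_time:.0%}")  # ~24%
```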

I have noted in the past that physicians have always spent a good deal of time not in the presence of patients. I have cited studies of this that pre-date the computer era, but someone recently pointed me to an even older study, from 1973 [6]. In this study of 103 physicians in a general medicine clinic, the physicians were found to spend 37.8% of their time charting, 5.3% consulting, 1.7% in other activities, and the remaining 55.2% with the patient. So even in the 1970s, ambulatory physicians spent only slightly more than half of their time in the presence of patients. As one who started his medical training in that era, I can certainly remember time spent trying to decipher unreadable handwriting as well as trying to track down paper charts and other missing information. I also remember caring for patients with no information except what the patient could recollect.

Clearly we have a great deal of work to do to make our current EHRs better, especially in streamlining both data entry and retrieval. We also need to be careful not to equate measures like clicks and screens with performance, as a study from our institution found that those who efficiently navigated the most information in the record achieved the best results in a simulation task [7]. What we really need are studies that measure the time taken for information-related activities in physician practice and determine which activities are most important to optimal patient care. Further research must also be done to optimize usability and workflow, including determining when other members of the team can contribute to the overall efficiency of the care process.

References

1. Arndt, BG, Beasley, JW, et al. (2017). Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Annals of Family Medicine. 15: 419-426.
2. Read-Brown, S, Hribar, MR, et al. (2017). Time requirements for electronic health record use in an academic ophthalmology center. JAMA Ophthalmology. Epub ahead of print.
3. Sinsky, C, Colligan, L, et al. (2016). Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of Internal Medicine. 165: 753-760.
4. Tai-Seale, M, Olson, CW, et al. (2017). Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Affairs. 36: 655-662.
5. Marcotte, BJ, Fildes, AG, et al. (2017). U.S. Health Care Reform Can’t Wait for Quality Measures to Be Perfect. Harvard Business Review, October 4, 2017. https://hbr.org/2017/10/u-s-health-care-reform-cant-wait-for-quality-measures-to-be-perfect.
6. Mamlin, JJ and Baker, DH (1973). Combined time-motion and work sampling study in a general medicine clinic. Medical Care. 11: 449-456.
7. March, CA, Steiger, D, et al. (2013). Use of simulation to assess electronic health record safety in the intensive care unit: a pilot study. BMJ Open. 3: e002549. http://bmjopen.bmj.com/content/3/4/e002549.long.

Tuesday, October 10, 2017

The Resurgence and Limitations of Artificial Intelligence in Medicine

I came of age in the biomedical informatics world in the late 1980s, which was near the end of the first era of artificial intelligence (AI). A good deal of work in what we called medical informatics at that time focused on developing “expert systems” that would aim to mimic, and perhaps someday replace, the cognition of physicians and others in healthcare.

But it was not to be, as excessive hype, stoked by misguided fears about losing out to Japan, led to the dreaded “AI winter.” Fortunately, I had chosen to pursue research in information retrieval (search), which of course blossomed in the 1990s with the advent of the World Wide Web. The “decision support” aspect of AI did not go away, but rather was replaced by focused decision support that aimed to augment the cognition of physicians rather than replace it.

In recent years, it seemed that the term AI had almost disappeared from the vernacular. My only use of it came in my teaching, where I consider understanding the history of the informatics field essential to learning it.

But now the term is seeing a resurgence in use [1]. Furthermore, modern AI systems take a different approach: rather than trying to represent the world symbolically and create algorithms that operate on those representations, AI has reemerged through the convergence of large amounts of real-world data, increases in the storage and computational capabilities of hardware, and new computational methods, especially machine learning.

This has given rise to a new generation of applications that again try to outperform human experts in medical diagnosis and treatment recommendations. Most of these successful applications employ machine learning, sometimes so-called “deep learning” (a minimal sketch of such a classifier follows the list below), and include:
  • Diagnosing skin lesions – keratinocyte carcinomas vs. benign seborrheic keratoses and malignant melanomas vs. benign nevi [2]
  • Classifying metastatic breast cancer on pathology slide images [3]
  • Predicting longevity from CT imaging [4]
  • Predicting cardiovascular risk factors from retinal fundus photographs [5]
  • Detecting arrhythmias comparable to cardiologists [6]
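To make the “deep learning” behind these examples a bit more concrete, here is a minimal sketch of a convolutional image classifier in PyTorch, in the general spirit of the skin-lesion work. It is illustrative only and assumes PyTorch is installed; the published systems used far larger pretrained networks (e.g., Inception-v3 in the skin cancer study) trained on very large collections of labeled clinical images.

```python
# Minimal convolutional classifier sketch (illustrative only, not the
# architecture of any cited study): two conv/pool stages feeding a linear
# layer that scores two classes, e.g., benign vs. malignant lesion.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB image in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, 2)     # two-class output

    def forward(self, x):                 # x: (batch, 3, 224, 224)
        x = self.features(x)              # -> (batch, 32, 56, 56)
        return self.classifier(x.flatten(1))

model = LesionClassifier()
logits = model(torch.randn(1, 3, 224, 224))  # one dummy 224x224 image
print(logits)  # unnormalized scores for the two classes
```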
Unfortunately, the hype is building back up too, perhaps exemplified by the IBM Watson system [7]. I recently came across an interesting article by MIT Emeritus Professor Rodney Brooks that put a nice perspective on it and stimulated some of my own thinking [8].

From my perspective, the most interesting part of Brooks’s piece concerns “performance vs. competence.” He warns that we must not confuse performance on a single task, such as making the diagnosis from an image, with the larger task of competence, such as being a physician. As he states, “People hear that some robot or some AI system has performed some task. They then generalize from that performance to a competence that a person performing the same task could be expected to have. And they apply that generalization to the robot or AI system.”

I have no doubt that algorithmic accomplishments like the above medical examples will be used by physicians in the future, just as they now use automated interpretation of EKGs and other tests that comes, in part, from earlier AI work. But I have a hard time believing that the practice of medicine will evolve to patients submitting pictures or blood samples to computers to obtain an automated diagnosis and treatment plan. It will be a long time before computers can replace the larger perspective that an experienced physician brings to a patient’s condition, to say nothing of the emotional and other support that goes along with the context of the diagnosis and its treatment. Indeed, the doctors of Star Trek are augmented by automated tools but are, in the end, still compassionate individuals who diagnose and treat patients.

Somewhat tongue in cheek, I won’t say that machines replacing physicians is impossible, since a quote in a different part of the article, attributed to Arthur C. Clarke, is aimed at people like myself: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” As someone who does not consider himself quite yet to be elderly, but who has worked in the field for several decades, I want to be careful not to say that something is “impossible.”

But on the other hand, while I am certain that we will see growing numbers of tools to improve the practice of medicine based on machine learning and other analysis of data, it is very difficult for me to foresee anything but a continued role for the empathetic physician who puts the findings in context and supports in other ways the patient whose diagnosis and treatment are augmented by AI.

References

1. Stockert, J (2017). Artificial intelligence is coming to medicine — don’t be afraid. STAT, August 18, 2017. https://www.statnews.com/2017/08/18/artificial-intelligence-medicine/.
2. Esteva, A, Kuprel, B, et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature. 542: 115-118.
3. Liu, Y, Gadepalli, K, et al. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv.org: arXiv:1703.02442. https://arxiv.org/abs/1703.02442.
4. Oakden-Rayner, L, Carneiro, G, et al. (2017). Precision radiology: predicting longevity using feature engineering and deep learning methods in a radiomics framework. Scientific Reports. 7: 1648. https://www.nature.com/articles/s41598-017-01931-w.
5. Poplin, R, Varadarajan, AV, et al. (2017). Predicting cardiovascular risk factors from retinal fundus photographs using deep learning. arXiv.org: arXiv:1708.09843. https://arxiv.org/abs/1708.09843.
6. Rajpurkar, P, Hannun, AY, et al. (2017). Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv.org: arXiv:1707.01836. https://arxiv.org/abs/1707.01836.
7. Ross, C and Swetlitz, I (2017). IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close. STAT, September 5, 2017. https://www.statnews.com/2017/09/05/watson-ibm-cancer/.
8. Brooks, R (2017). The Seven Deadly Sins of AI Predictions. MIT Technology Review, October 6, 2017. https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/.

Friday, October 6, 2017

HITECH Retrospective: Glass Half-Full or Half-Empty?

Last month, the New England Journal of Medicine published a pair of Perspective pieces about the Health Information Technology for Economic and Clinical Health (HITECH) Act (both available open access). The first was written by the current and three former Directors of the Office of the National Coordinator for Health IT (ONC) [1]. The second was written by two other national thought leaders who also have a wealth of implementation experience [2]. Both papers discuss accomplishments and challenges, with the Directors’ piece more positive (glass half-full) than that of the outside thought leaders (glass half-empty).

In the first piece, Washington et al. point to the accomplishments of the HITECH era, in which we have finally seen digitization of the healthcare industry, one of the last major industries to digitize. The funding and other support provided by the HITECH Act have led to near-universal adoption of electronic health records (EHRs) in hospitals and substantial uptake in physician offices. They also point to a substantial body of evidence that supports the functionality required under the “meaningful use” program.

These authors also note the shortcomings of this rapid adoption, which came when not only people but also healthcare organizations and even EHR systems themselves were not ready for rapid uptake. They acknowledge that many healthcare providers are frustrated by poor usability and lack of actionable information, which they attribute in part to proprietary standards and information blocking. They advocate moving forward with a push for interoperability, secure and seamless flow of data, engagement of patients, and development of a learning health system.

Halamka and Tripathi, on the other hand, take a somewhat more negative view. While acknowledging the gains in adoption that have occurred under HITECH, they note (my emphasis), “We lost the hearts and minds of clinicians. We overwhelmed them with confusing layers of regulations. We tried to drive cultural change with legislation. We expected interoperability without first building the enabling tools. In a sense, we gave clinicians suboptimal cars, didn’t build roads, and then blamed them for not driving.” They note that the process measures of achieving meaningful use have become an end in themselves, without attention to the larger picture of how to improve the quality, safety, and cost of healthcare. They do point to a path forward, calling for streamlining of requirements to ensure interoperability and a focused set of appropriate quality measures, with EHR certification centered on these as well. They also encourage more market-driven solutions, with government regulation focused on providing incentives and standards for desired outcomes.

Taking more of a glass half-full point of view myself, I wrote in this blog several months ago that EHR adoption has “failed to translate” into the benefits that have been borne out in practical research studies. I noted the success of some institutions, mostly integrated delivery systems, in successfully adopting EHRs, and also the persistence in healthcare of the problems that motivate EHR use, such as suboptimal quality and safety of care while costs continue to rise.

A few other recent pieces have painted a path forward. The trade journal Medical Economics interviewed several physician informatics experts to collate their thoughts on what features a highly useful EHR might have, especially in contrast to the systems that a majority of physicians complain about today [3]. The set of features represents little more than what we expect of all of our computer applications these days, yet their availability in EHRs continues to be elusive:
  • Make systems work together – achieve interoperability of data across systems
  • Make it easier and more intuitive – make systems easier to understand and use; reduce cognitive load
  • Add better analytics – add more capability to use data to coordinate and improve care
  • Support high-tech care delivery – be able to engage patients through video and asynchronous communication
  • Make EHRs smarter – systems anticipate user actions and provide reversible shortcuts
  • Become a virtual assistant – assist the clinician with all aspects of managing the delivery of care
A couple of other recent Perspective pieces in the New England Journal of Medicine provide some additional solutions. Two well-known informatics thought leaders from Boston Children’s Hospital lay out the case for an application programming interface (API) approach to the EHR based on standards and interoperability [4]. Although this piece has a different focus than the previous one, there is no question that the data normalization of FHIR Resources, the flexible interfaces that can be developed using SMART, and the ease of developing it all via SMART on FHIR could make those goals achievable.
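For readers unfamiliar with what the API approach looks like in practice, here is a minimal sketch of reading a standardized FHIR resource over REST in Python. The base URL is a hypothetical placeholder, and a real SMART on FHIR app would first complete an OAuth2 authorization step; this simply illustrates why standardized resources make vendor-neutral apps feasible.

```python
# Minimal sketch of the API approach described above: reading a standardized
# FHIR resource over plain REST. The base URL is a hypothetical placeholder,
# and a real SMART on FHIR app would first complete an OAuth2 authorization
# step before making this call.
import requests

FHIR_BASE = "https://fhir.example.org/baseDstu3"  # hypothetical FHIR server

# FHIR defines a uniform read interaction: GET [base]/[resourceType]/[id]
resp = requests.get(
    f"{FHIR_BASE}/Patient/123",
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
patient = resp.json()

# Because the Patient resource is standardized, these fields mean the same
# thing no matter which vendor's EHR sits behind the server.
print(patient["resourceType"])   # "Patient"
print(patient.get("birthDate"))  # e.g., "1970-01-01"
```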

In the other piece, a well-known leader in primary care medicine calls for delivering us from the current EHR purgatory [5]. His primary solutions focus on reforming the healthcare payment system, moving toward payment for outcomes rather than volume, i.e., value-based care.

I agree with just about all that these authors have to say. While the meaningful use program required some benchmarks to ensure the HITECH incentive money was appropriately spent, we are probably beyond the need to continue requiring large numbers of process measures. We need to focus on the standards and interoperability that will open the door to doing more with the EHR than just documenting care, such as predictive analytics and research. Continuing to reform our payment system is a must, not only for better EHR usage but also to control cost and improve the health of the population.

There is also an important role for clinical informatics professionals and leaders, who must lead the way in righting the problems of the EHR and other information systems in healthcare. I have periodically reached back to a quote of my own after the unveiling of the HITECH Act: “This is a defining moment for the informatics field. Never before has such money and attention been lavished on it. HITECH provides a clear challenge for the field to 'get it right.' It will be interesting to look back on this time in the years ahead and see what worked and did not work. Whatever does happen, it is clear that informatics lives in a HITECH world now.” Informatics does live in this world now, and we must lead the way, not letting perfect get in the way of good, but making EHRs most useful for patients, clinicians, and all other participants in the healthcare system.

References

1. Washington, V, DeSalvo, K, et al. (2017). The HITECH era and the path forward. New England Journal of Medicine. 377: 904-906.
2. Halamka, JD and Tripathi, M (2017). The HITECH era in retrospect. New England Journal of Medicine. 377: 907-909.
3. Pratt, MK (2017). Physicians dream up a better EHR. Medical Economics, May 22, 2017. http://medicaleconomics.modernmedicine.com/medical-economics/news/physicians-dream-better-ehr.
4. Mandl, KD and Kohane, IS (2017). A 21st-century health IT system — creating a real-world information economy. New England Journal of Medicine. 376: 1905-1907.
5. Goroll, AH (2017). Emerging from EHR purgatory — moving from process to outcomes. New England Journal of Medicine. 376: 2004-2006.

Sunday, July 9, 2017

Kudos for the Informatics Professor, Winter/Spring 2017

As always, I have had the ongoing opportunity to publish, speak, and otherwise disseminate information about informatics since my last “kudos” posting last fall.

One accolade I received was election as an inaugural member of the International Academy of Health Sciences Informatics (IAHSI). Informatics leaders from around the world voted to establish the initial membership of 121. I was delighted to be among the inaugural group, which will be inducted during the 16th World Congress on Medical and Health Informatics (Medinfo 2017) in Hangzhou, China in August 2017.

I am also pleased to report a major accomplishment of the Oregon Health & Science University (OHSU) Biomedical Informatics Graduate Program, of which I am Director: renewal of its NIH National Library of Medicine (NLM) Training Grant in Biomedical Informatics & Data Science. The grant will provide $3.8 million to fund PhD and postdoctoral students in the program over the next five years.

During this time I also had the opportunity to publish a chapter in an important new book published by the American Medical Association, which I have already written about (Hersh W, Ehrenfeld J, Clinical Informatics, in Skochelak SE and Hawkins RE (eds.), Health Systems Science, 2017, 105-116).

I also gave a number of talks during this time, including one at the Data Day Health event in Austin, TX on January 15, 2017. The title of my talk was, Big Data Is Not Enough: People and Systems Are Needed to Benefit Health and Biomedicine.

I gave another talk at an interesting conference devoted to the challenges of the electronic health record. The conference, The Patient, the Practitioner, and the Computer, took place in Providence, RI on March 17-19, 2017. The title of my talk was, Failure to Translate: Why Have Evidence-Based EHR Interventions Not Generalized? This talk laid the groundwork for a subsequent posting published in this blog as well as on The Health Care Blog.

Finally, I also had the opportunity to lead a couple of webinars. One was for the H3ABioNet Seminars series of the Pan African Bioinformatics Network for H3Africa, which took place on April 18, 2017 and covered the same topic as the Data Day Health talk described above.

The other webinar, Implementing Clinical Informatics in the MD Curriculum and Beyond, was delivered to the Association of Faculties of Medicine of Canada on June 13, 2017.

Monday, July 3, 2017

Eligibility for the Clinical Informatics Subspecialty: 2017 Update

Some of the most highly viewed posts in this blog have been those on eligibility for the clinical informatics subspecialty for physicians, the first in January 2013 and updates in June 2014 and March 2016. A noteworthy event occurred last November when the "grandfathering" period was extended to 2022.

One of the reasons for these posts has been to use them as a starting point for replying to those who email or otherwise contact me with questions about their own eligibility. After all these years, I still get such emails and inquiries. While the advice in the previous posts is largely still correct, there have been a number of small changes, most notably the extension of the grandfathering period. There are still (only) two boards that qualify physicians for the exam, the American Board of Preventive Medicine (ABPM) and the American Board of Pathology (ABP). ABP handles qualifications for those with Pathology as a primary specialty and ABPM handles those from all other primary specialties. (Kudos to ABPM for finally updating and improving their Web site!)

The official eligibility statement for the subspecialty is essentially unchanged from the beginning of the grandfathering period and is documented on the ABPM and ABP Web sites. As clinical informatics has been designated a subspecialty of all medical specialties, this means that physicians must be board-certified in one of the 23 primary specialties (such as Internal Medicine, Family Medicine, Surgery, Radiology, etc.). Those who have let their primary board specialty lapse or who never had one are not eligible to become board-certified in the subspecialty. They must also have an active and unrestricted medical license in one US state.

For the first ten years of the subspecialty (through 2022), the "practice pathway" or completing a "non-traditional fellowship" (i.e., one not accredited by the Accreditation Council for Graduate Medical Education, or ACGME) will allow physicians to "grandfather" the training requirements, i.e., take the exam without completing a formal ACGME-accredited fellowship. The practice pathway requires that a physician have "practiced" clinical informatics for a minimum of 25% time for three of the last five years. Time spent in formal informatics education is credited at one-half of practice, meaning that a recent master's degree or other educational program should be sufficient to achieve board eligibility. The non-traditional fellowship pathway allows board eligibility by completing a non-ACGME-accredited informatics fellowship, such as one sponsored by the National Library of Medicine, the Veterans Administration, or others. The ABPM Web site implies, but does not explicitly state, that a master's degree program will qualify one via this pathway as well. A number of physicians have achieved board eligibility (and subsequent certification) by completing the Master of Biomedical Informatics program we offer at Oregon Health & Science University (OHSU).
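To illustrate how I read the practice-pathway arithmetic, here is a small sketch; the 25% threshold and the half-credit rule come from the paragraph above, the five yearly figures are hypothetical, and ABPM remains the actual arbiter of eligibility:

```python
# Practice-pathway rule as I read it (illustrative, not authoritative):
# a year qualifies if informatics practice time, plus education time
# credited at one-half, reaches 25%; three of the last five years must qualify.
def year_qualifies(practice_pct, education_pct):
    return practice_pct + education_pct / 2.0 >= 25

# Hypothetical last five years: (% time practicing, % time in education)
years = [(0, 100), (0, 100), (30, 0), (10, 0), (40, 0)]

qualifying = sum(year_qualifies(p, e) for p, e in years)
print(f"{qualifying} qualifying years -> eligible: {qualifying >= 3}")
# 4 qualifying years -> eligible: True
```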

As always, I must provide the disclaimer that ABPM and ABP are the ultimate arbiters of eligibility, and anyone who has questions should contact ABPM or ABP. I only interpret their rules.

One bit of advice I always give to any physician who meets the practice pathway qualifications (or can do so by 2022) is to sit for the exam before the end of the grandfathering period. After that time, the only way to become certified in the subspecialty will be to complete a two-year, on-site, ACGME-accredited fellowship. While we were excited to be the third program nationally to launch a fellowship at OHSU, it will be a challenge for those who are mid-career, with jobs, family, and/or geographical roots, to up and move to become board-certified.

Starting in 2023, the only pathway to board eligibility will be via an ACGME-accredited fellowship. There are now nearly 30 such fellowships, but board certification for physicians not able to pursue them will become much more difficult. There are many categories of individuals for whom getting certified in the subspecialty after the grandfathering period will be a challenge:
  • Those who are mid-career - I have written in the past that the age range of OHSU online informatics students, including physicians, is spread almost evenly across all ages up to 65. Many physicians transition into informatics during the course of their careers, and not necessarily at the start.
  • Those pursuing research training in informatics, such as an NLM fellowship or, in the case of some of our current students, an MD/PhD program (and who will not finish their residency until after the grandfathering period ends) - Why should these individuals need to pursue an ACGME-accredited clinical fellowship to be eligible for the board exam, given their comparable level of informatics training, even if it will be somewhat less clinical?
  • Those who already have had long medical training experiences, such as subspecialists with six or more years of training - Would such individuals want to do two additional years of informatics when, as I recently pointed out, it might be an ideal experience for them to overlay informatics and their subspecialty training?
There will be one other option for physicians who are not eligible for the board exam, which will be the Advanced Health Informatics Certification (AHIC) being developed by AMIA. This certification, according to current plans, will be available to all practitioners of informatics who have master's degrees in both a health profession and informatics, or a PhD in informatics. This will provide a pathway for physicians who are not eligible for board certification. I am looking forward to AMIA releasing its detailed plans for this certification, not only for these physicians but also for other practitioners of informatics.

As I have stated before, I also hold out hope for the ideal situation for physician-informaticians, which in my opinion would be our own specialty or some other certification process. The work of informatics carried out by physicians is unique and not really dependent on their initial clinical specialty (or lack of one at all). I still believe that robust training is required to be an informatician; I just don't believe it needs to be a two-year, in-residence experience. An online master's degree or something equivalent, with a good deal of experiential learning in real-world settings, should be an option. The lack of such options will keep many talented physicians from joining the field. Such training would also be consistent with the 21st-century knowledge workforce, which will involve many career transitions over one's working lifetime.

Friday, May 19, 2017

Failure to Translate: Why Have Evidence-Based EHR Interventions Not Generalized?

The adoption of electronic health records (EHRs) has increased substantially in hospitals and clinician offices, in large part due to the “meaningful use” program of the Health Information Technology for Economic and Clinical Health (HITECH) Act. The motivation for increasing EHR use in the HITECH Act was supported by evidence-based interventions for known significant problems in healthcare. In spite of widespread adoption, EHRs have become a significant burden to physicians in terms of time and dissatisfaction with practice. This raises the question of why EHR interventions have been difficult to generalize across the health care system, despite evidence that they contribute to addressing major challenges in health care.

EHR interventions address known problems in health care of patient safety, quality of care, cost, and accessibility of information. These problems were identified a decade or two ago but still persist. Patient safety problems due to medical errors were brought to light with the publication of the Institute of Medicine report, To Err is Human [1], with recent analyses indicating medical errors are still a problem and may be underestimated [2]. Deficiencies in the quality of medical care delivered were identified almost a decade and a half ago [3] and continue to be a problem [4]. The excess cost of care in the US has been a persistent challenge [5] and continues to the present [6]. A final problem motivating the use of EHRs has been access to patient information that is known to exist but is inaccessible [7], with access now stymied by “information blocking” [8].

These problems motivated initial research on the value of EHRs. One early study found that display of charges during order entry resulted in a 12.7% decrease in total charges and a 0.9-day shorter length of stay [9]. Another study found that computerized provider order entry (CPOE) led to a 55% decrease in nonintercepted serious medication errors, from 10.7 events per 1000 patient-days to 4.86 events, with preventable adverse drug events (ADEs) reduced by 17% [10]. Additional studies of CPOE showed a reduction in redundant laboratory tests [11] and improved prescribing of equally efficacious but less costly medications [12]. Another study found that CPOE increased the use of important “corollary orders” by 25% [13]. Additional studies followed from many institutions and were collated in systematic reviews that built the evidence-based case for EHRs [14-17]. There were some caveats about the evidence base, such as publication bias [18] and the benefits mostly emanating from “health IT leader” institutions that invested both in EHRs and in the personnel and leadership to use them successfully.

Despite the robust evidence base, why have the benefits of EHR adoption failed to generalize now that adoption is widespread? There are several reasons, some of which emanate from well-intentioned circumvention of the EHR for other purposes. For example, both institutions and payers (including the US government) view the EHR as a tool for cost reduction and prioritize its functions accordingly. There is also a desire to use the EHR to collect data for quality measurement - which should be done - but not in ways that add substantial burden to the clinician. Additionally, there are the meaningful use regulations, which were implemented to ensure that the substantial government investment in EHRs led to their use in clinically important ways but are now criticized as being a distraction for clinicians and vendors.

There are also some less nobly intentioned reasons why the value of EHRs has not generalized. One is “volume-based billing,” the connection of billing to the volume of documentation, which leads to pernicious documentation practices [19]. Another is the financial motivation of EHR vendors, who may be selling systems that are burdensome to use or not ready for widespread adoption. Much of the early evidence for the benefits of EHRs came from “home-grown” systems, most of which have been replaced by commercial EHRs. These commercial EHRs do more than just provide clinical functionality; they redesign the delivery of care, sometimes beneficially but other times not. It thus can take a large expenditure on EHR infrastructure before any marginal benefit from a particular clinical function can be achieved, even if the rationale for that function is evidence-based.

Nonetheless, a number of “health IT leader” institutions have sustained successful EHR use and quality of care, such as Kaiser-Permanente [20], Geisinger [21], and the Veterans Health Administration [22]. These institutions are not only integrated delivery systems but also have substantial expertise in clinical informatics. These qualities enable them to prioritize use of IT in the context of patients and practitioners as well as incorporate known best practices from clinical informatics focused on standards, interoperability, usability, workflow, and user engagement.

How, then, do we move forward? We can start by building on the technology foundation, albeit imperfect, that has come about from the HITECH Act. We must focus on translation, aiming to understand how to implement, in diverse settings, the functionality that is strongly supported by the evidence while carrying out further research in areas where the evidence is less clear. As with any clinical intervention, we must pay attention to both beneficial and adverse effects, learning from the growing body of knowledge on safe use of EHRs [23]. We must also train and deploy clinician informatics leaders who provide expertise at the intersection of health care and IT [24].

Finally, we must also reflect on the larger value of IT in health care settings. Approaches to cost containment, quality measurement, and billing via documentation must be reformulated to leverage the EHR while reducing the burden on clinicians. We should focus on issues such as practice and IT system redesign, best practices for the patient-practitioner-computer triad, and practitioner well-being [25]. We must build on value from other uses of EHRs and IT, including patient engagement and support for clinical research. Leadership for these changes must come from leading health care systems, professional associations, academia, and government.

References

1. Kohn LT, Corrigan JM, and Donaldson MS, eds. To Err Is Human: Building a Safer Health System. 2000, National Academies Press: Washington, DC.
2. Classen DC, Resar R, Griffin F, Federico F, Frankel T, Kimmel N, et al., 'Global trigger tool' shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff, 2011. 30: 581-589.
3. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al., The quality of health care delivered to adults in the United States. N Engl J Med, 2003. 348: 2635-2645.
4. Levine DM, Linder JA, and Landon BE, The quality of outpatient care delivered to adults in the United States, 2002 to 2013. JAMA Intern Med, 2016. 176: 1778-1790.
5. Anderson GF, Frogner BK, Johns RA, and Reinhardt UE, Health care spending and use of information technology in OECD countries. Health Aff, 2006. 25: 819-831.
6. Squires D and Anderson C, U.S. Health Care from a Global Perspective: Spending, Use of Services, Prices, and Health in 13 Countries. 2015, The Commonwealth Fund: New York, NY, http://www.commonwealthfund.org/publications/issue-briefs/2015/oct/us-health-care-from-a-global-perspective.
7. Smith PC, Araya-Guerra R, Bublitz C, Parnes B, Dickinson LM, VanVorst R, et al., Missing clinical information during primary care visits. JAMA, 2005. 293: 565-571.
8. Adler-Milstein J and Pfeifer E, Information blocking: is it occurring and what policy strategies can address it? Milbank Q, 2017. 95: 117-135.
9. Tierney WM, Miller ME, Overhage JM, and McDonald CJ, Physician inpatient order writing on microcomputer workstations: effects on resource utilization. JAMA, 1993. 269: 379-383.
10. Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, et al., Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA, 1998. 280: 1311-1316.
11. Bates DW, Kuperman GJ, Rittenberg E, Teich JM, Fiskio J, Ma'luf N, et al., A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med, 1999. 106: 144-150.
12. Teich JM, Merchia PR, Schmiz JL, Kuperman GJ, Spurr CD, and Bates DW, Effects of computerized physician order entry on prescribing practices. Arch Int Med, 2000. 160: 2741-2747.
13. Overhage JM, Tierney WM, Zhou XH, and McDonald CJ, A randomized trial of "corollary orders" to prevent errors of omission. J Am Med Inform Assoc, 1997. 4: 364-375.
14. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al., Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med, 2006. 144: 742-752.
15. Goldzweig CL, Towfigh A, Maglione M, and Shekelle PG, Costs and benefits of health information technology: new trends from the literature. Health Aff, 2009. 28: w282-w293.
16. Buntin MB, Burke MF, Hoaglin MC, and Blumenthal D, The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff, 2011. 30: 464-471.
17. Jones EB and Furukawa MF, Adoption and use of electronic health records among federally qualified health centers grew substantially during 2010-12. Health Aff, 2014. 33: 1254-1261.
18. Vawdrey DK and Hripcsak G, Publication bias in clinical trials of electronic health records. J Biomed Inform, 2013. 46: 139-141.
19. Kuhn T, Basch P, Barr M, and Yackel T, Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med, 2015. 162: 301-303.
20. Liang LL, Connected for Health - Using Electronic Health Records to Transform Care Delivery. 2010, San Francisco, CA: Jossey-Bass.
21. Maeng DD, Davis DE, Tomcavage J, Graf TR, and Procopio KM, Improving patient experience by transforming primary care: evidence from Geisinger's patient-centered medical homes. Pop Health Manag, 2013. 16: 157-163.
22. Longman P, Best Care Anywhere: Why VA Health Care is Better Than Yours. 2007, Sausalito, CA: Polipoint Press.
23. Sittig DF, Ash JS, and Singh H, The SAFER guides: empowering organizations to improve the safety and effectiveness of electronic health records. Am J Manag Care, 2014. 20: 418-423.
24. Detmer DE and Shortliffe EH, Clinical informatics: prospects for a new medical subspecialty. JAMA, 2014. 311: 2067-2068.
25. Adler-Milstein J, Embi PJ, Middleton B, Sarkar IN, and Smith J, Crossing the health IT chasm: considerations and policy recommendations to overcome current challenges and enable value-based care. J Am Med Inform Assoc, 2017: Epub ahead of print.

Monday, May 1, 2017

Navigating OHSU Informatics Education Programs and Content

Not infrequently, I receive emails asking about, or even expressing confusion about, the various informatics educational programs and products of Oregon Health & Science University (OHSU). With a couple of grant-funded curriculum development projects about to end, this is probably a good time for a posting here to help sort things out. Before I do that, I must give a plug to US News & World Report, which recently plugged informatics as a graduate health degree that expands both knowledge and career opportunities.

OHSU has a number of educational programs in biomedical informatics. The core of all these programs is the Biomedical Informatics Graduate Program, which provides master's and PhD degrees in two tracks, health and clinical informatics (HCI) and bioinformatics and computational biomedicine (BCB). The HCI track also offers a Graduate Certificate that is a subset of the master's program, and these two programs are available in a distance learning format.

OHSU also offers two fellowship programs. One is a long-standing research-oriented fellowship program for PhD and postdoctoral students funded by the National Library of Medicine. The postdoctoral option also includes a master's degree. More recently, a clinically oriented fellowship for physicians has been launched. This fellowship is accredited by the Accreditation Council for Graduate Medical Education (ACGME) and allows sitting for the clinical informatics subspecialty board exam. The clinical informatics fellowship also provides the Graduate Certificate, with an option to pursue the master's degree.

OHSU was also the original participant in the AMIA 10x10 (“ten by ten”) program. The OHSU 10x10 course is a repackaging of the introductory course from the HCI track of the graduate program, and those completing it can take an optional final exam to receive academic credit from OHSU.

The OHSU biomedical informatics program has also participated in the development of a number of public repositories of educational materials funded by US federal grants. OHSU was funded to participate in both the original development and the subsequent update of the Office of the National Coordinator for Health IT (ONC) curriculum. Development of the original curriculum stopped when funding ended in 2013, with the archive freely available on the American Medical Informatics Association (AMIA) Web site. The update has been expanded to 24 components, each of which is about a college course in size. It has been under development since 2015 and will be made publicly available on the ONC Web site (HealthIT.gov) this summer.

All of the grantees of the ONC update project have also been required to offer short-term training to 1000 incumbent healthcare professionals. The OHSU offering has focused on healthcare data analytics and has also provided continuing medical education (CME) credit for all physicians and Maintenance of Certification (MOC)-II credit for physicians certified in the clinical informatics subspecialty. The free courses offered as part of the ONC grant will wrap up at the end of May. We will likely offer the course again in the future for a fee.

OHSU has also developed open educational resources (OERs) and data skills courses funded by two grants under the Big Data to Knowledge (BD2K) initiative of the National Institutes of Health (NIH). About 20 modules have been developed on various topics in biomedical science. The materials from this project are currently housed on a Web site that will transition to a permanent archive on GitHub when funding for the project ends later this year.

The long-term maintenance of repository materials is uncertain at this time. We are hopeful that resources to keep them up to date will be found, and OHSU will certainly continue to use them in its own educational programs.

Sunday, April 16, 2017

Participating in the March for Science

I plan to participate in the March for Science in Portland on April 22nd. I did not come to this decision lightly. This was different, for example, from my decision to participate in the Women’s March in January. That was a very easy decision to make based on my political views.

But science is more than just politics to me. It is of course my livelihood, as I am a faculty member at Oregon Health & Science University (OHSU) and one whose work is supported by public funds. Science is also, however, the dispassionate pursuit - to the best of human ability - of discerning truth. As such, I do not want to see science subverted for political or other aims. I also want to be careful that others do not subvert the message of the march itself, which in my view is to inform people about the value of public support and taxpayer funding of science.

I am pleased that many organizations have reached similar decisions. The American Association for the Advancement of Science (AAAS) has endorsed the march, and I agree with their statement that the march is a "nonpartisan set of activities that aim to promote science education and the use of scientific evidence to inform policy." I am pleased that OHSU supports the march as well.

In the end, my concerns about the real threats to science outweighed my worries about science being subverted by politics. I consider the threats to science from the current political leadership of the US to be significant. I do not consider science to be a partisan subject. I cannot look at climate change, gun violence, or immunization-preventable diseases and state that research about them is driven by an ideological or political agenda. Yes, science is always full of disagreement and is never truly “settled,” but there are bounds of truth, and there is always a need to probe further even what we believe to be true. One of the beauties of science and its dispassionate search for answers is that it is self-correcting. So when science gets something wrong, there is a good likelihood that it will be corrected by further research.

Of course, I also recognize that the public purse to fund science, or anything else that the government funds, is not unlimited. That is why we have a political process to debate and enact appropriate amounts of taxation and public spending. It is also important to remember that basic research funded by the government is not research that would be funded by private industry. In fact, industry well knows it benefits from the basic research that enables companies to develop profitable products.

I also believe that the scientific enterprise in the US is a very efficient and effective allocation of tax dollars. The National Institutes of Health (NIH) budget of about $30 billion is about 1% of overall federal spending. Federal research grants not only fund scientific research but also education and training for the next generations of scientists, clinicians, and others. NIH funding is mostly awarded through highly competitive funding opportunities that often have success rates of only 10-15%. Despite what detractors say, the life of writing grant proposals is not a cushy one.

Even beyond the research itself, the money spent on scientific research brings money back to communities. When a faculty member like myself is awarded a grant, that money not only advances research and education, it creates jobs for people in the local community. In turn, all of those who are funded by the grant turn around and spend money in grocery stores, restaurants, and other local businesses. And of course the reality is that if this money did not come to my institution and to state and local governments, it would end up in other states. OHSU commissioned a study about five years ago showing a multiplier effect, with money spent on the institution having an impact back on the local economy.

Although this march is about science generally, I hope that some other points specific to biomedical research come across. As pointed out by OHSU leadership, an abrupt cut in funding, such as the 18% cut to NIH proposed by the Trump Administration, will have an outsized impact due to the fact that most NIH grants are multi-year awards. This means that only a portion of a given year’s funding goes to new projects, so for the first year of the big cuts, very few new grants could be awarded. Given how competitive the environment for funding already is, we stand to lose the momentum of both established and emerging scientists.

I also hope that another point comes out: the misguided plan to, in essence, eliminate the Agency for Healthcare Research & Quality (AHRQ). While it would ostensibly be folded into NIH (AHRQ currently exists within the Department of Health and Human Services but outside NIH), the claim that its research is duplicated by other NIH entities is simply not true. AHRQ performs critical and novel research in under-researched areas of health and healthcare, such as patient safety, healthcare quality, and evidence-based medicine. As with other basic research, industry may benefit from and even develop products based on this research, but the research itself is too far removed from their product cycle for them to want to fund it. As this is not the first time that efforts have been made to de-fund AHRQ, I have written about the value of AHRQ before.

I look forward to participating in the March for Science and advocating for the benefits of scientific research and its funding by the federal government. I hope the outpouring of support will educate people about the downside of neglecting basic scientific research and the importance of training new scientists.

Sunday, February 19, 2017

Big Change, Little Change: OHSU Biomedical Informatics Graduate Program Renames Tracks

The Oregon Health & Science University (OHSU) Biomedical Informatics Graduate Program is renaming the two tracks of its program. While the changes to the names of the tracks are small, they reflect the big changes in the field and evolving content of the curriculum.

Since 2006, the program has had two “tracks,” which have been called Clinical Informatics (CI) and Bioinformatics & Computational Biology (BCB). These two pathways through the program have been called “tracks” because they represent two different foci within the larger field of biomedical informatics, which is the discipline that acquires, organizes, and uses data, information, and knowledge to advance health-related sciences. Historically, the differences between the tracks represented their informatics focus, in particular people, populations, and healthcare (clinical informatics) vs. cellular and molecular biology, genomics, and imaging (bioinformatics).

In recent years, however, these distinctions have blurred as “omics” science has worked its way into clinical medicine. At the same time, health, healthcare, and public health have become much more data-driven, due in no small part to the large-scale adoption of electronic health records. As such, the two tracks have come to represent overlapping but still distinct foci, differing mostly in their depth of quantitative methods (deep vs. applied) but also in coverage of other topics (e.g., system implementation, especially in complex health environments; usability; and clinical data quality and standards).

The program believes that both tracks share a set of common high-level competencies that reflect the essential knowledge and skills of individuals who work in biomedical informatics. The curriculum organizes these competencies into “domains,” which are groups of required and elective courses that comprise the core curriculum of each track. To reflect the evolution of the program, the program has renamed the BCB track to Bioinformatics and Computational Biomedicine (still abbreviated BCB) and the CI track to Health and Clinical Informatics (now abbreviated HCI). The table below lists the common competencies and the names of the domains for each track. Each of the domains contains required courses, individual competency courses (where students are required to select a certain number of courses from a larger list, formerly called “k of n” courses), and elective courses.


The program will continue the overall structure of the curriculum, with a “knowledge base” that represents the core curriculum of the master’s degree and the base curriculum for advanced study in the PhD program. A thesis or capstone is added to the knowledge base to qualify for the MS or MBI degree, respectively (the latter in the HCI track only). Additional courses are required for the PhD, ultimately culminating in a dissertation.

The materials and Web site for the program will be updated quickly to reflect the new names. The program will also be evolving course content as well as introducing new courses to reflect the foci of the new tracks. The program still fundamentally aims to train future researchers and leaders in the field of biomedical informatics.

Wednesday, February 1, 2017

A New Textbook on Health Systems Science

Many aspects of academic medicine, from the structure of the medical school curriculum to the organization of departments in Schools of Medicine, are neatly segregated into two buckets: basic science and clinical science. In the jargon of medical schools and education, basic science refers to the basic biomedical sciences that have traditionally been taught in the first two years of medical school, such as anatomy, physiology, biochemistry, and pharmacology. While plenty of clinical material has migrated into the first two years of medical school over the years, such as learning to interact professionally with patients and perform a physical examination, the main focus of the first half of medical school has historically been on basic science, culminating in the United States Medical Licensing Examination (USMLE) Step 1 exam.

Once students finish their basic science years, they move on to the clinical sciences, where they begin rotations, also called clerkships or clinical experiences. They usually first rotate through the core medical specialties, i.e., internal medicine, surgery, pediatrics, obstetrics/gynecology, and psychiatry. This is then followed by rotations in other specialties and subspecialties, ultimately leading to graduation and the start of their residency training.

This division of medical education goes beyond just the medical school curriculum. The organizational structure in most medical schools is to group academic departments into basic science and clinical departments. These two types of departments usually have different funding models. Basic science departments are usually funded by base budgets for teaching and grants for research, with an expectation that just about all faculty have research grant funding. Clinical departments have base budgets and research programs as well, but they perform another activity, which is clinical care that provides practice opportunities (and revenues) for faculty and learning experiences for students, residents, and fellows. In many clinical departments in medical schools, research activity is modest and may be partially subsidized by the margins from clinical revenues.

The focus on these two groups of sciences takes the perspective of the physician taking care of a single patient, i.e., applying the best biomedical science through the lens of a specific clinical specialty. However, despite its primacy, there is more to the practice of medicine than taking care of single patients. Physicians and other clinicians work in a healthcare system that has other concerns, such as continually increasing costs, worries about patient safety, and questions about the quality of care delivered. As such, 21st-century clinicians must be competent in more than the diagnosis and treatment of disease in individual patients. This has led to the emergence of the notion of a “third science” of medicine, which focuses on how to optimally provide healthcare for patients and populations. While some describe this as “healthcare delivery science” (my preference) or “implementation science,” the emerging name, as given to a textbook in this area, is now “health systems science.”

The textbook is published by the American Medical Association (AMA), which has been supporting innovation in medical education through its Accelerating Change in Education (ACE) consortium, funded by grants to medical schools [1]. OHSU was one of the original grantees in this program to establish “medical schools of the future.” I have been pleased that one outcome of this program has been the expansion of instruction in clinical informatics for medical students, which I consider to be an essential competency for 21st century physicians [2].


The titles of the chapters of the new textbook describe the important topics covered by health systems science:
  1. Health Systems Science in Medical Education
  2. What Is Health Systems Science? Building an Integrated Vision
  3. The Health Care Delivery System
  4. Value in Health Care
  5. Patient Safety
  6. Quality Improvement
  7. Principles of Teamwork and Team Science
  8. Leadership in Health Care
  9. Clinical Informatics
  10. Population Health
  11. Socio-Ecologic Determinants of Health
  12. Health Care Policy and Economics
  13. Application of Foundational Skills to Health Systems Science
  14. The Use of Assessment to Support Learning and Improvement in Health Systems Science
  15. The Future of Health Systems Science
I am delighted to be the lead author of one of the chapters, not surprisingly the one on clinical informatics [3]. I hope this chapter will introduce many new generations of medical and other health professions students to the informatics field and its role in healthcare delivery. Of course, informatics plays many roles beyond healthcare delivery, such as informing the care of individual patients and facilitating all types of research, but the effective use of data and informatics is a key aspect of health systems science.

I hope that this new textbook will lead the way in emphasizing the importance of health systems science in the work of physicians and other healthcare professionals. Clinicians have long known that diagnosing and treating disease, while the centerpiece of medical practice, cannot be carried out in a vacuum outside the realm of the patient’s and larger health system’s context. The care delivered to those individual patients will be better if the clinician has the perspective of that larger system.

References

1. Skochelak, SE, Hawkins, RE, et al., Eds. (2017). Health Systems Science. New York, NY, Elsevier.
2. Hersh, WR, Gorman, PN, et al. (2014). Beyond information retrieval and EHR use: competencies in clinical informatics for medical education. Advances in Medical Education and Practice. 5: 205-212. http://www.dovepress.com/beyond-information-retrieval-and-electronic-health-record-use-competen-peer-reviewed-article-AMEP.
3. Hersh, W and Ehrenfeld, J (2017). Clinical Informatics. In: Skochelak, SE, Hawkins, RE, Lawson, L, et al., Eds. Health Systems Science. New York, NY, Elsevier: 105-116.

Sunday, January 22, 2017

Response to Request for Information (RFI): Strategic Plan for the National Library of Medicine, National Institutes of Health

Under the leadership of its new Director, Patricia Brennan, PhD, RN, the National Library of Medicine (NLM) is undertaking a strategic planning process to develop goals and priorities for the NLM going forward. This process builds on a Request for Information (RFI) in 2015 from the NLM Working Group of the Advisory Committee to the National Institutes of Health (NIH) Director (ACD) to obtain input for a report on a vision for the future of NLM in the context of NLM’s leadership transition and emerging NIH data science priorities. The report was released in 2015. I posted to this blog both the comments that I submitted for the report and an overview of the report after it was published.

The new RFI asks for comments on four themes:
  1. Role of NLM in advancing data science, open science, and biomedical informatics
  2. Role of NLM in advancing biomedical discovery and translational science
  3. Role of NLM in supporting the public’s health: clinical systems, public health systems and services, and personal health
  4. Role of NLM in building collections to support discovery and health in the 21st century
For each theme, respondents are asked to:
  1. Identify what you consider an audacious goal in this area – a challenge that may be daunting but would represent a huge leap forward were it to be achieved
  2. The most important thing NLM does in this area, from your perspective
  3. Research areas that are most critical for NLM to conduct or support
  4. Other comments, suggestions, or considerations, keeping in mind that the aim is to build the NLM of the future
In the remainder of this post, I will provide the comments I submitted to the RFI. I chose to limit my comments to the first of the four themes because the role of NLM is to advance the other themes – discovery, translation, and the public’s health – in the context of the first theme – namely the field of biomedical informatics, and data/open science within it.

a. Identify what you consider an audacious goal in this area – a challenge that may be daunting but would represent a huge leap forward were it to be achieved. Include input on the barriers to and benefits of achieving the goal.

I have chosen to focus my comments on the first of the four themes because the role of NLM is to advance the other themes – discovery, translation, and the public’s health – by advancing the first theme – namely the field of biomedical informatics, and data/open science within it. Therefore, the most audacious goal for all of NLM is to build and sustain the infrastructure of biomedical informatics, i.e., the people, technology, and resources to advance discovery, translation, and the public’s health.

Biomedical informatics must leverage both achievements that are new, such as digital and networking technologies, and goals that are enduring, such as improving individual health, healthcare, public health, and research. The NLM must promote, educate about, and fund biomedical informatics and related disciplines at the level they deserve in relation to the larger biomedical research enterprise. While research in domain-specific areas (e.g., cancer, cardiovascular, mental health) is important, biomedical informatics can provide fundamental tools to advance science in all domain-specific areas. To achieve this, we still need basic research in biomedical informatics itself, improving our knowledge and tools in many areas, including but not limited to human-computer interaction, natural language understanding, standards and interoperability, data quality, the intersection of people and organizational issues with information technology, and workflow analysis.

b. The most important thing NLM does in this area, from your perspective.

Although there are many institutes within NIH (e.g., NCI, NHLBI, and the Fogarty International Center) and other entities outside of NIH (e.g., AHRQ and PCORI) that fund research in informatics-related areas, NLM is the only entity that funds basic research in biomedical informatics. Most of the other institutes and entities that fund informatics support projects that are highly applied and/or domain-focused. These projects are important, but basic informatics research is also key to improving discovery, translation, and the public’s health.

The NLM is also unique in developing emerging technologies, some of which we cannot foresee now. When I was an NLM informatics postdoctoral fellow in the late 1980s, I could not have imagined the emergence of the World Wide Web, the wireless ubiquitous Internet, modern mobile devices, or the widespread adoption of electronic health records that we now have. There are likely new technologies coming down the road that few if any of us can predict that will have major impacts on health and healthcare. It is critical that the NLM and the research it supports enable these technologies to be put to optimal usage.

c. Research areas that are most critical for NLM to conduct or support.

Although it is critical for NLM to support research in biomedical informatics as applied to all areas of individual and public health and of healthcare and research, it is nearly unique in funding basic research in clinical informatics. A good deal of informatics research in the other NIH institutes is focused in basic science, e.g., genomics, bioinformatics, and computational biology. AHRQ and PCORI support clinical informatics research, but it is highly applied. Only NLM funds critical basic research in clinical informatics, and this function is vitally important as we strive to use informatics to achieve the triple aim of better health, improved healthcare, and reduced costs.

d. Other comments, suggestions, or considerations, keeping in mind that the aim is to build the NLM of the future.

Another critical function of NLM that has provided value and should be further augmented is its training programs for those who aspire to careers in informatics research. I count myself among many whose NLM fellowship training led to a successful career as a researcher, educator, and academician generally. NLM training grants have also provided support for my university to educate the next generation of informatics researchers who have gone on to become successful researchers and other leaders in the field.

A final problem that I would like to see addressed is the name itself, "National Library of Medicine." This name does not connote all of what NLM does. Yes, the NLM is a world-renowned biomedical library, and that function is critically important to continue. But NLM also provides cutting-edge research and training in informatics, and an ideal change for NLM would be a new name, something like the "National Biomedical and Health Informatics Institute," of which a robust and innovative National Library of Medicine would be a vital part.

I look forward to seeing other input to the new NLM strategic planning process and the resulting strategic plan that will set priorities going forward for this great public resource that has benefitted patients, the healthcare system, and students, faculty, and others who have worked in biomedical informatics to advance human health.

Friday, January 20, 2017

What is the Value of Those Who Create and Disseminate Knowledge?

There is an old adage, “Those who can’t do, teach.” (And Woody Allen’s further, “Those who can’t teach, teach gym.”) My usual retort is a quote from Aristotle, “Those that know, do. Those that understand, teach.”

But we seem to be entering an era where an individual’s worth is related mostly to his or her wealth. In addition, there are plenty of people, many of the same mind-set, who are highly critical of academia, in particular of people whose livelihood involves creating and/or disseminating knowledge.

I am not uncritical of some aspects of the academic world in which I work, but I am even more aghast at those who believe it to be misguided or unnecessary.

In essence, my job involves the creation and dissemination of knowledge. This takes a certain skill set and collection of talents, just like any other knowledge-oriented job. I believe that this work is important to society and worthy of its investment, even though the lion’s share of the funding of my teaching work comes from learners who pay tuition.

My job is hardly stress-free. Academia is like most pursuits in life, where a certain amount of stress and competition is good, leading to productivity and innovation. But there are times when the stress and competition become counterproductive.

I owe a lot to subsidized public academia that has enabled my professional success in life. I attended public schools for my entire education, from kindergarten through medical school. When I started college at the University of Illinois in 1976, tuition was $293 per semester. Not per course or per credit, but for all of the courses I took that term. Even medical school, also at the University of Illinois, was relatively inexpensive for me, with tuition around $3000 per year when I started in 1980. I am not against students having some “skin in the game” in higher education, but it must be within the means of anyone who wants to pursue it. By the same token, I believe that we in academia need to be accountable in providing a skill set that enables individuals to succeed in their chosen careers.

I am extremely gratified to have an academic job that I mostly enjoy going to each day. While most higher education faculty positions involve a combination of research, teaching, and service, I have found my greatest passion in teaching. I particularly enjoy, and have been told by others that I have, a knack for taking bodies of knowledge and distilling out the big themes and most salient facts. I do also enjoy research and building on the synergy of the two that characterizes optimal higher education. I make a good salary as a department chair at a public medical school. I could certainly make more money in other pursuits, but I have had plenty to live comfortably, save for retirement, send my children to college, and handle unexpected expenses.

I don’t begrudge rich people their wealth, especially those who earned it from modest beginnings and/or by producing things that truly benefit society. But wealth is hardly the only measure of a person’s contributions and value to society, and there must always be a role for those who create and disseminate knowledge.

Thursday, January 12, 2017

What is the Right Approach to Sharing Clinical Research Data?

While many people and organizations have long called for data from randomized clinical trials (RCTs) and other clinical research to be shared with other researchers for re-analysis and other re-use, the impetus for it accelerated about a year ago with two publications. One was a call by the International Committee of Medical Journal Editors (ICMJE) for de-identified data from RCTs to be shared as a condition of publication [1]. The other was the publication of an editorial in the New England Journal of Medicine wondering whether those who do secondary analysis of such data were “research parasites” [2]. The latter set off a flurry of debate across the spectrum, e.g., [3], from those who argued that primary researchers labored hard to devise experiments and collect their data, thus having claim to control over it, to those who argued that since most research is government-funded, the taxpayers deserve to have access to that data. (Some of those in the latter group proudly adopted the “research parasite” tag.)

Many groups and initiatives have advocated for the potential value of wider re-use of data from clinical research. The cancer genomics community has long seen the value of a data commons to facilitate sharing among researchers [4]. Recent US federal research initiatives, such as the Precision Medicine Initiative [5] and the 21st Century Cures program [6], envision an important role for large repositories of data to accompany patients in cutting-edge research. There are a number of large-scale efforts in clinical data collection that are beginning to accumulate substantial amounts of data, such as the National Patient-Centered Clinical Research Network (PCORnet) and the Observational Health Data Sciences and Informatics (OHDSI) initiative.

As with many contentious debates, there are valid points on both sides. The case for requiring publication of data is strong. As most research is taxpayer-funded, it only seems fair that those who paid are entitled to all the data for which they paid. Likewise, all of the subjects were real people who potentially took risks to participate in the research, and their data should be used for discovery of knowledge to the fullest extent possible. And finally, new discoveries may emerge from re-analysis of data. This was actually the case that prompted the Longo “research parasites” editorial, which was praising the “right way” to do secondary analysis, including working with the original researchers. The paper that the editorial described had discovered that the lack of expression of a gene (CDX2) was associated with benefit from adjuvant chemotherapy [7].

Some researchers, however, are pushing back. They argue that those who carry out the work of designing, implementing, and evaluating experiments certainly have some exclusive rights to the data generated by their work. Some also question whether the cost is a good expenditure of limited research dollars, especially since the demand for such data sets may be modest and the benefit is not clear. One group of 282 researchers in 33 countries, the International Consortium of Investigators for Fairness in Trial Data Sharing, notes that there are risks, such as misleading or inaccurate analyses as well as efforts aimed at discrediting or undermining the original research [8]. They also express concern about the costs, given that there are over 27,000 RCTs performed each year. As such, this group calls for an embargo on reuse of data for two years plus another half-year for each year of the length of the RCT. Even those who support data sharing point out the requirement for proper curation, wide availability to all researchers, and appropriate credit to and involvement of those who originally obtained the data [9].
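Stated arithmetically (my restatement of the consortium’s proposal, assuming the half-year increments apply for each full year of trial duration): embargo length in years = 2 + (0.5 × trial length in years). By this formula, the data from a 4-year RCT would be embargoed for 2 + (0.5 × 4) = 4 years after publication.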

There are a number of challenges to more widespread dissemination of RCT data for re-use. A number of pharmaceutical companies have begun making such data available over the last few years. Their experience has shown that the costs are not insignificant (estimated to be about $30,000-$50,000 per RCT) and that a scientific review process is essential [10]. Another analysis found that the time to re-analyze data sets can be long, and so far the number of publications has been small [11]. An additional study found that identifiable data sets were explicitly visible for only 12% of all clinical research funded by the National Institutes of Health in 2011 [12]. This means that from 2011 alone, there are possibly more than 200,000 data sets that could be made publicly available, indicating some type of prioritization might be required.

There are also a number of informatics-related issues to be addressed. These not only include adherence to standards and interoperability [13], but also attention to workflows, integration with other data, such as that from electronic health records (EHRs), and consumer/patient engagement [14]. Clearly the trialists who generate the data must be given incentives for their data to be re-used [15]. My own work assessing the caveats of re-using EHR data is somewhat applicable here too, in that even RCT data may not have the breadth of data or cover sufficient periods of time for additional analyses [16].

There is definitely great potential for re-use of RCT and other clinical research data to advance research and, ultimately, the health and clinical care of the population. However, it must be done in ways that represent an appropriate use of resources and yield analyses that truly advance research, clinical care, and individual health.

References
1. Taichman, DB, Backus, J, et al. (2016). Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. New England Journal of Medicine. 374: 384-386.
2. Longo, DL and Drazen, JM (2016). Data sharing. New England Journal of Medicine. 374: 276-277.
3. Berger, B, Gaasterland, T, et al. (2016). ISCB’s initial reaction to The New England Journal of Medicine Editorial on data sharing. PLoS Computational Biology. 12(3): e1004816. http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004816.
4. Grossman, RL, Heath, AP, et al. (2016). Toward a shared vision for cancer genomic data. New England Journal of Medicine. 375: 1109-1112.
5. Collins, FS and Varmus, H (2015). A new initiative on precision medicine. New England Journal of Medicine. 372: 793-795.
6. Kesselheim, AS and Avorn, J (2017). New "21st Century Cures" legislation: speed and ease vs science. Journal of the American Medical Association. Epub ahead of print.
7. Dalerba, P, Sahoo, D, et al. (2016). CDX2 as a prognostic biomarker in stage II and stage III colon cancer. New England Journal of Medicine. 374: 211-222.
8. Anonymous (2016). Toward fairness in data sharing. New England Journal of Medicine. 375: 405-407.
9. Merson, L, Gaye, O, et al. (2016). Avoiding data dumpsters — toward equitable and useful data sharing. New England Journal of Medicine. 374: 2414-2415.
10. Rockhold, F, Nisen, P, et al. (2016). Data sharing at a crossroads. New England Journal of Medicine. 375: 1115-1117.
11. Strom, BL, Buyse, ME, et al. (2016). Data sharing — is the juice worth the squeeze? New England Journal of Medicine. 375: 1608-1609.
12. Read, KB, Sheehan, JR, et al. (2015). Sizing the problem of improving discovery and access to NIH-funded data: a preliminary study. PLoS ONE. 10(7): e0132735. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0132735.
13. Kush, R and Goldman, M (2016). Fostering responsible data sharing through standards. New England Journal of Medicine. 374: 2163-2165.
14. Tenenbaum, JD, Avillach, P, et al. (2016). An informatics research agenda to support precision medicine: seven key areas. Journal of the American Medical Informatics Association. 23: 791-795.
15. Lo, B and DeMets, DL (2016). Incentives for clinical trialists to share data. New England Journal of Medicine. 375: 1112-1115.
16. Hersh, WR, Weiner, MG, et al. (2013). Caveats for the use of operational electronic health record data in comparative effectiveness research. Medical Care. 51(Suppl 3): S30-S37.

Friday, December 30, 2016

A Different Annual Reflection For This Past Year

Every year since the inception of this blog, my last posting of the year has been a reflection looking back over the year that is ending. This year’s reflection marks the completion of eight years of this blog, and writing this year’s posting feels different. This is no doubt because this blog has been very much tied into the key events of informatics over the last decade, in particular the Health Information Technology for Economic and Clinical Health (HITECH) Act and other actions emanating from the Presidency of Barack Obama. This has been a period of activist government with respect to our field, and now the US electorate (at least according to the rules of the Electoral College) has chosen a different path going forward.

Fortunately, the need for informatics is not going away. Even if the Affordable Care Act is repealed, the underlying problems in healthcare that led to its passage are still a challenge. Healthcare in the US is still the most fragmented, expensive, and inefficient in the world. This does not mean I would want to get seriously ill anywhere else, but I still believe there is an ethical imperative to provide basic healthcare to all citizens in the least costly manner. Medicine is supposed to be a calling for physicians, and not just a job. Although I no longer care for patients directly, I view my work as a physician-informatician as supporting the delivery of more universal and efficient care by meeting the data, information, and knowledge needs of healthcare delivery and patients.

Informatics also supports other aspects of health that will continue to be important even if reform of the US healthcare delivery system takes different directions. Informatics should support the health of the population through public health. It can support expansion of our knowledge and best practices by enhancing basic, clinical, and translational research. It can extend the reach of healthcare through telehealth and telemedicine. And because the US is still a prosperous nation to whom many look for leadership, we can share our knowledge and tools for better health and healthcare with our fellow planetary citizens around the world, especially clinical and informatics professionals.

As for the blog itself, it continues to thrive. I am always gratified when people tell me they find it a valuable source of information, especially for key topics in the application of informatics as well as for issues facing people seeking to start or advance careers in the field. The number of page views continues to increase, and in this last month, the total barreled through the 400,000 mark for the 267 posts (including this one) I have made over the eight years. I have no plans to change my approach to the blog any time soon.

There is no question that for people who work in academia, research, and health IT, there is uncertainty about the future. Nonetheless, I am grateful that I have a loving family, wonderful colleagues, and a great many other friends who bring happiness and stability to my life.

Wednesday, December 28, 2016

Benchmarks to Assess the New President

According to the rules of US elections, Donald Trump won the Presidency and Republicans control the Senate and House of Representatives. I respect that.

This does not mean, however, that Trump and his party have any sort of mandate. Not only did Trump lose the popular vote by about 2.8 million votes (48.2%-46.2%), but he won the three large states that decided the election (Pennsylvania, Michigan, and Wisconsin) by a combined total of only 77,000 votes; had those flipped, the election would have swung to Hillary Clinton. And while there was a narrow overall majority of votes for House Republican candidates this year, in recent years there has been a majority of votes for Democratic candidates despite a hefty majority of seats filled by Republicans, owing to gerrymandering. There were also more popular votes for Democratic Senate candidates this year, although that tally was somewhat anomalous because the California Senate race was a run-off between two Democratic candidates.

There is no question that this election was tilted to Mr. Trump by the growing number of mostly white working-class people who have been left behind by both economic and social changes in our society. His election was also aided to an unknown degree by Russian hacking, fake news, and the questionable decision by the FBI to re-open its investigation of the email issue in the last weeks of the campaign. This was a candidate who set new records for fact-checkers disputing his statements and who had a large following of people who believed those falsehoods.

As such, the outcome of this election is anything but a mandate for Donald Trump. Yes, he did obtain more electoral votes than Hillary Clinton, but his victory was extremely narrow, and he and other Republicans need to be careful of overreach. This is especially true since it is not clear that Mr. Trump really stands for the views of the people he is installing in his political leadership. (It is often not clear what he stands for at all, since his governing philosophy is neither detailed nor consistent.)

But the new Republican majority may find it harder than they expect to improve upon the economic situation they have been handed. The US economy certainly still has a number of problems, especially income inequality and technology that is changing the nature of work, particularly manual work. By most measures, however, the US economy is actually doing well. We finish the year, and President Obama’s second term, with strong economic growth (Gross Domestic Product [GDP] up at a 3.5% annual rate last quarter and positive for most of his second term), low unemployment (currently 4.6%, nearly full employment), low inflation, and a booming stock market (Dow Jones Industrial Average closing in on 20,000). Gas prices are low and the proportion of people lacking health insurance is lower than it has been in decades.

I believe an important task is to hold President Trump accountable. We will want to see how he adheres to his conflicting campaign pledges and the results of those policies when they are implemented. This includes promises to massively slash taxes, increase defense and infrastructure spending, make no cuts to Medicare or Social Security, build a border wall and deport 11 million people, renegotiate trade deals and implement tariffs if necessary, and come up with "something better" as the Affordable Care Act is repealed. While I disagree with many of these actions, it will be important to see whether Mr. Trump carries them out and, if he does, what their impact is.

Even though a good deal of what Mr. Trump says bothers many of us, I believe it will be more important to look at his actions. I hope he will especially be held accountable by those who are not conservative ideologues, such as workers who have been displaced from coal-mining and manufacturing jobs and those who do not believe that the new health insurance they have received through the Affordable Care Act will be taken away. I also hope the impact of his policies on the environment, including climate change, will be objectively measured. And, of course, I hope for an objective assessment of a foreign policy administered via Twitter.

While I believe Mr. Trump should be judged more for his actions and their outcomes, I don't think he should be let off the hook for his words either. This includes all the vitriol he spread throughout President Obama's years in office, from stoking the fires of the birther movement to making false statements on the economy. Despite attempts to "unify" the electorate after a divisive election, we cannot forget Mr. Trump's insults and lies about individual people and groups, from women to Muslims to Mexicans. I still shake my head in amazement when people are asked not to take everything Trump said during the campaign literally, to accept that it is legitimate to enter some sort of "post-truth" era, or to dismiss a good proportion of his statements as mere campaign rhetoric.

In the end, a President is not responsible for everything that happens on his or her watch. But a narcissistic individual who takes credit for things that go right, even when that credit is not deserved, should also be held to objective measures of performance. While Mr. Trump has mastered the neutering of the press through social media and other means, I hope that responsible journalism will rise to the task and objectively report the impact of the words and policies that emanate from his Presidency and his political party.

Thursday, December 8, 2016

Coping With Adversarial Information Retrieval in Modern Times

When I first chose my area of research focus in my postdoctoral fellowship in biomedical informatics in the late 1980s, I was intrigued by information retrieval (IR; also known as search). While most in informatics were still focused on artificial intelligence and expert systems, I was fascinated by the notion that computers could provide information in response to users entering text. At that time, of course, there were only modest amounts of information to retrieve. The main source was bibliographic databases such as MEDLINE. While the full text of journals and even some textbooks was starting to become available, it was mostly text and not figures or images.

The world of search started to change with the advent of the World Wide Web in the early 1990s. I had actually been skeptical that the Web could even deliver more than text in real-time, given how slow the Internet was at that time. This was also a time when my colleagues at Oregon Health & Science University (OHSU) started putting on continuing medical education (CME) courses for physicians about the growing amount of information available (including via CD-ROM drives). But when we taught about searching the Web, we presented many caveats, especially because there was no control over the quality of information [1].

A related development around this same time was the growth of spam email [2]. In the 1980s and even into the early 1990s, the only real users of Internet email were academics and techies. But as the Web and underlying Internet spread to broader populations, so did spam email, especially because it was so easy to reach massive numbers of people.

These developments gave rise to the notion of “adversarial” IR, something that was initially difficult to fathom for those of us trying to develop the most effective methods to provide access to the highest-quality information available [3]. But as content emerged that we hoped users would not retrieve, an additional focus developed in IR: finding ways to avoid delivering the worst information to users.

One advance that improved the ability of Web searching to retrieve high-quality material was Google and its PageRank algorithm [4]. A major change pioneered by Google was to rank results not by measures of similarity between the words in the query and the page, at the time considered our best approach, but by how many other pages pointed to them. While not perfect, the number of links to a page is indeed associated with its quality; e.g., more pages point to those from the National Library of Medicine or the Mayo Clinic than to a less credible site.
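To make the intuition concrete, here is a minimal sketch of the PageRank idea in Python. This is not Google's production implementation; the power-iteration approach follows the original paper [4], but the toy link graph, function name, and 0.85 damping factor are illustrative assumptions on my part.

  # A page's score approximates the chance that a "random surfer" lands on
  # it, so pages with many in-links from high-scoring pages rank higher.
  def pagerank(links, damping=0.85, iterations=50):
      """links: dict mapping each page to the list of pages it links to."""
      pages = set(links) | {p for targets in links.values() for p in targets}
      rank = {p: 1.0 / len(pages) for p in pages}
      for _ in range(iterations):
          # Every page keeps a small baseline score (the "teleport" term).
          new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
          for page, targets in links.items():
              if targets:  # distribute this page's rank across its out-links
                  share = damping * rank[page] / len(targets)
                  for target in targets:
                      new_rank[target] += share
          rank = new_rank  # (dangling pages simply leak rank in this sketch)
      return rank

  # Toy graph: two sites link to "nlm", while only one links to "blog".
  toy_web = {"siteA": ["nlm"], "siteB": ["nlm", "blog"],
             "nlm": [], "blog": []}
  print(pagerank(toy_web))  # "nlm" scores highest: it has the most in-links

The key design point is that the score depends on the link structure of the whole Web rather than on a page's own text, which is much harder for a page author to manipulate directly, though, as noted next, SEO found ways.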

Of course, this situation resulted in a number of other consequences, not the least of which was the emergence of search engine optimization (SEO), enabling people to work against PageRank and related algorithms [5]. It also set off a tit-for-tat battle, with search engine companies hiring armies of engineers to figure out how people were trying to game their systems [6]. In more recent years, the emergence of new information streams, most notably the Facebook newsfeed, has provided new opportunities for manipulation and led to the proliferation of “fake news” that has been blamed for influencing the recent US presidential election [7].

While technology will play some role in solving the adversarial IR problem, it will not succeed by itself. Clever programmers and others will likely always find ways to exploit approaches to limiting the spread of false or incorrect information. The sheer volume of such information makes human intervention an unlikely solution, and of course one person’s high-quality information is another person’s trash heap.

The main way to address the problem, however, is through education. It is part of the basic modern information literacy everyone must have in the 21st century. Just as I have argued that statistics should be taught in high school if not earlier, so should modern information literacy, including as it relates to health. While there will always be shades of gray in terms of information quality, people can and should be taught how to recognize that which is flagrantly false.

I hope we will learn from fake news, newer variants of spam email such as phishing, and other risks of the Internet era, and that we will train society to better understand our new information ecosystem and how to benefit from its value while minimizing its risks.

References

1. Hersh, WR, Gorman, PN, et al. (1998). Applicability and quality of information for answering clinical questions on the Web. Journal of the American Medical Association. 280: 1307-1308.
2. Goodman, J, Cormack, GV, et al. (2007). Spam and the ongoing battle for the inbox. Communications of the ACM. 50(2): 25-33.
3. Castillo, C and Davison, BD (2011). Adversarial Web Search. Delft, Netherlands, now Publishers.
4. Brin, S and Page, L (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems. 30: 107-117. http://infolab.stanford.edu/pub/papers/google.pdf.
5. Anonymous (2015). The Beginner's Guide to SEO. Seattle, WA, SEOmoz. http://moz.com/beginners-guide-to-seo.
6. Singhal, A (2004). Challenges in Running a Commercial Web Search Engine. Mountain View, CA, Google. http://www.research.ibm.com/haifa/Workshops/searchandcollaboration2004/papers/haifa.pdf.
7. Davis, W (2016). Fake Or Real? How To Self-Check The News And Get The Facts. Washington, DC, National Public Radio. http://www.npr.org/sections/alltechconsidered/2016/12/05/503581220/fake-or-real-how-to-self-check-the-news-and-get-the-facts.