Maintaining Order in the Midst of Chaos

HP Lovecraft's Azathoth at the center of Ultimate Chaos.  "Azathoth". Licensed under CC BY-SA 3.0 via Wikimedia Commons.

There are few jobs more chaotic than that of physician, at least based on my own experience. Yes, there is a schedule of sorts: hospital rounds, procedures, office patients. Unfortunately things rarely go as planned. There is a particularly sick patient on rounds who needs a temporary pacing wire placed. There are more consults than expected. The procedure that was planned to take up to 2 hours takes 4 hours because of unexpected difficulties. Office patients are double booked. And then there are the phone calls. Referring doctors wanting advice or asking if a particularly tough patient can be seen quickly in the office. Nurses calling to clarify orders or to tell me about a patient who isn’t doing well. Calls from Medicare or insurance company minions asking why a particular patient was still in the hospital and hadn’t been discharged yet. Other non-patient-care duties take up precious time. There are hospital staff requirements to take infantile online courses on Hazmat or Fire Safety. There are recurring CME (continuing medical education) and new MOC (maintenance of certification) requirements. Finally, believe it or not, doctors usually have a family life too. They have the same school concerts, hockey games, and sick kids that other working parents deal with.

As there are only 24 hours in a day, the net effect of all this running around was that I was perennially late for everything: late in the office, late for procedures, late, late, late.  I myself hate going to an appointment and waiting.  Most of my patients were understanding and good-natured about it, which only made me feel more guilty about being late.  But there didn’t seem to be much that I could do about it.

When things got really busy, interruptions would themselves have interruptions. For example, while writing my patient documentation in the office on my computer, my medical assistant would come in to talk to me about a different patient. While talking to her, a phone call would come in. I would take that call, then go back to the conversation with my MA, then finally back to the patient documentation — at least in theory, assuming I hadn’t forgotten where I was. This interruption process was so common that I began to analyze it — being the geek that I am — in computer terms. Computers also have “interrupts.”  A computer will be processing some task, say sorting a list, when you press a key on the keyboard. This generates an interrupt.  The current state of the task is stopped and pushed onto a certain area of memory called the “stack.” The keystroke is then processed, after which the original task is “popped” off the stack and resumed. Interrupts can also have interrupts with the result that multiple tasks are pushed onto the stack in backwards order (last in — first out).  It works for computers, but unfortunately human memory is fallible, so despite my analysis of the situation, I still often lost track of what I was doing when interrupted multiple times.  Utter Chaos!
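For the similarly geeky, the push-and-pop behavior described above can be sketched in a few lines of Python (a toy illustration only; the task names are invented, and no such program ever ran in my office):

```python
# A toy model of interrupt handling: each interruption pushes the current
# task onto a stack; finishing the interruption pops the previous task
# back off (last in, first out).

tasks = []  # the "stack" of suspended tasks

def interrupt(current_task, new_task):
    """Suspend current_task and start handling new_task."""
    tasks.append(current_task)  # push the interrupted task
    return new_task

def resume():
    """Finish the current task and pop the most recently suspended one."""
    return tasks.pop()          # last in, first out

task = "patient documentation"
task = interrupt(task, "talk to medical assistant")
task = interrupt(task, "phone call")

# Unwind in reverse order: the phone call finishes first, then the
# conversation with the MA, and finally the documentation resumes.
task = resume()  # back to "talk to medical assistant"
task = resume()  # back to "patient documentation"
```

The computer never forgets what is on its stack; that is exactly where the analogy to human memory breaks down.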

Organization is the antithesis of Chaos. Like many people overwhelmed by disorder, I read a lot about the principles of organization. One book that I read in 2008 and that I highly recommend is David Allen’s Getting Things Done (GTD). Even if you don’t implement his entire system of organization, which is actually fairly complex, it would be hard to come away from this book without some useful tips. A fundamental idea of GTD is to write things down. The whole GTD system is centered on having a “trusted system” in which to enter tasks so you don’t have to remember them yourself. This trusted system could be a notebook, index cards, scraps of paper, or, more high-tech, computer programs or apps designed to record notes. By writing everything down you can spend time actually doing tasks rather than worrying about what you are forgetting to do.

There is a lot more to the GTD system than just this and I encourage you to read the book. But even if you don’t implement the whole system, just getting things written down is a mind-lightening experience, almost zen-like. In the context of working as a doctor, I used a decidedly low-tech approach to implement a trusted system. I would have a piece of paper with me all day long — usually my hospital rounding list. I would use this to check off the patients I rounded on, adding diagnoses and billing level codes in tiny print. I would write down new patients on the list, including new consults and admissions, as well as patients I received calls about. I would write down little todo tasks, such as checking a troponin level or an electrocardiogram, adding a little box that I could check when I completed the task. I could even handle nested interruptions with the list, jotting down a brief note about what I was doing at the time of each interruption so that I could resume where I left off. At the end of the day everything on the list should have been checked or crossed off, and I could discard it. Obviously my todo list often grew beyond one sheet of paper, in which case I would staple a blank sheet to the original. I realize that many physicians use such a system anyway, and this system is only in the most sketchy sense an implementation of the GTD system. Yet it upholds the spirit of GTD, which is to write your tasks down, with frequent reviews and updates.

Since I retired, I have had fewer tasks to organize and more time to develop more elaborate methods of organization. In the hectic world of medicine, nothing was faster or more effective than just writing things down with pen and paper. Nowadays, I gravitate more towards digital forms of organization. I don’t have just one program or app that I use for this. For ephemeral, unimportant lists (like a shopping list) I like simple list-making apps, such as Wunderlist. For entering notes or clipping webpages, I find Evernote useful. As mentioned above, I am a longstanding computer geek and programmer. Ultimately the best organizational tool I have found is something called Org Mode, which runs in the old-fashioned programmers’ text editor Emacs (I use that editor to write almost everything, including these posts). Unfortunately Emacs has a very steep learning curve, so I can’t recommend it (unless you too want to write computer programs). There are many other apps and tools to choose from nowadays to implement any organizational system imaginable. So there are no excuses. Life today is very complex and chaotic. Everyone should work out their own organizational system and use it. With such a system, even in the field of medicine, order can come out of chaos!
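For the curious, here is a tiny, made-up example of what a task list looks like in Org Mode (the TODO/DONE keywords, checkboxes, and date syntax are standard Org conventions; the tasks themselves are invented for illustration):

```
* Tasks
** DONE Renew medical license
** TODO Write next blog post
   SCHEDULED: <2014-07-15 Tue>
** TODO Errands [1/3]
   - [X] Pick up groceries
   - [ ] Finish CME module
   - [ ] Back up laptop
```

It is all plain text, which is part of the appeal: the same file works anywhere, and Emacs handles the checking-off, scheduling, and agenda views.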

Let the Sunshine In

Yesterday I received an email from Medtronic. It was an early-release version of the Sunshine Act data that they had sent to the government. The Sunshine Act, passed in 2010 but implemented in 2013, mandates the collection and publication of data on payments to physicians in the form of food, travel, or other goods. This data will be made available online to the public in September of this year. Medtronic, a major manufacturer of pacemakers and implantable defibrillators, was nice enough to send me the data on the money they had spent on me. Below is a copy of the report.

Mann_III__David_E LN4CVKS

Perhaps as a foreshadowing of what could go wrong with such a database, there is a $90.38 charge to Nancy’s Haute Affairs (I looked it up; it is a restaurant, not an escort service) in Pensacola, Florida. But I’ve never been to Pensacola in my life, so this charge is wrong. The rest of it appears to be the lunch and breakfast stuff the Medtronic reps thoughtfully provided to our office, or to the break room at the various cath labs I worked at. In looking at these charges, it is important to remember that I did not ask Medtronic to go out and order some Panera. Medtronic brought it in, and once it’s there it is difficult to resist snatching a bagel, even though each bagel snatched results in another database entry under the Sunshine Act. Given the natural tendency of hungry doctors (I almost never ate lunch at work) to partake of “free” food lying around, I think the more interesting question is not how much each doctor consumed, but rather how much a company such as Medtronic spent on doctors in an attempt to curry favor. I’m not sure if the Sunshine Act involves publication of that data, but it should.

There is no doubt that drug and device companies do try to target doctors in order to increase sales of their products, and money going from these companies to doctors is the major means of influence.  Doctors sign up to be on speaking panels for drug or device companies, even though they are not particularly expert on the specific drugs or devices they are talking about.  They receive a professional set of slides from the company and a nice stipend.  Other docs, particularly those in academic medicine, serve on advisory boards for companies, again resulting in a nice stipend as well as travel and lodging to exotic parts of the world.  Although there is a difference between the appearance of a conflict of interest and an actual conflict of interest, perhaps this distinction will appear a bit too fine when the actual dollar amounts these doctors receive are published.  The question becomes: how much money received is too much?  Any amount at all? More than $1000? More than $10,000? Arguments on what’s reasonable aside, there’s no doubt that some doctors are susceptible to this kind of influence, and others will just take the bagel and ignore where it came from.

When I was practicing as an electrophysiologist I felt I was in the latter category. The last thing I thought about when deciding what kind of device to put in was where my last bagel came from. My colleagues in electrophysiology, I feel, were similarly immune to this kind of influence. I can’t say the same about all my referring physicians. Most didn’t care what brand of device I put in, but there were some exceptions. Some would call me with a referral of a patient who needed a pacemaker or implantable defibrillator and at the end of the presentation would close by saying: “and please put in a [insert specific brand name here] device.” Yes, the device companies wine and dine the referring physicians who don’t actually put in the device, so that they will pressure the implanting physician to use their specific devices. Admittedly some of these non-implanting referring physicians do device follow-up in their office, which usually involves a rep from the company actually coming to the office and doing the device interrogations (see my earlier post on this topic). These referring doctors will say they have a preference to follow a certain brand of device, which usually means they get along well with the particular rep from that company who comes to their office and does their work for them. I always found this practice particularly annoying. I, as the implanting physician, should decide what device to put in, based on my judgment of what’s best for the patient. I’m sure the referring physician would not like it if I told him or her what brand of stent to put in my patient.

So it would be naive to deny that there is any influence peddling going on between drug and device companies and physicians. Sure it probably pales in comparison to what goes on between lobbyists and politicians in Washington, but don’t hold your breath for Congress to shine the sunlight on their own activities. And based on the preliminary report I received, I’m sure there are going to be a lot of unhappy physicians when the final reports are released to the public in September.


A Stroll Down (Random Access) Memory Lane

Ye Olde Computer.  GE-635 at Kiewit Computing Center, Dartmouth College, circa 1969.

My lifetime has spanned many of the important developments in the Age of Computers. Back in 1969 when I entered college, I was a frequent visitor to the Kiewit Computing Center, the lair of a GE-635 computer that filled several rooms. Students had access to the computer via noisy teletypes and a multiuser operating system known as Dartmouth Time Sharing. We wrote simple programs in BASIC, a language created by two of the Dartmouth professors, John Kemeny and Tom Kurtz.  In 1969 even the hoary old operating system Unix was still a year or two in the future. There have been huge changes in computers since then. The smart phone I carry in my pocket today is light-years more powerful than that huge old-time computer.  It has been an interesting journey from those distant days to the present.

With the 1980s came the personal computer. Microcomputers they were called then, to distinguish them from the previous generation of minicomputers (which were about the size of a refrigerator). The Apple II was a breakthrough system, followed by the more business-oriented IBM PC. There were other systems from various companies, some of which don’t exist anymore. Many of the systems were incompatible with each other, so special versions of software were required for each system. Microsoft’s MS-DOS, modeled on an earlier disk operating system called CP/M, won the operating system battle, and eventually all PCs were pretty much interchangeable, running MS-DOS. Apple was the outlier, hanging on to a small market share after abandoning the Apple II and Steve Jobs. The Macintosh, incorporating a graphical user interface (GUI) that was ahead of its time, was the inspiration for Microsoft Windows, and through the 90s the GUI became dominant. This was also the era of the rise of the Internet and the Dotcoms. Microsoft put Internet Explorer in Windows, making it difficult to install other browsers, leading to Internet browser pioneer Netscape going out of business and anti-trust suits against Microsoft. Desktop PCs were dominant. Laptops were fairly primitive and clunky. Microsoft was at the height of its hegemony.

Then along came the millennium, and with the iPod, Apple, now back under the direction of Jobs, made a complete turnaround. Since then we have seen a revolution in computing with the introduction of mobile computing: smartphones and tablets. This is disruptive technology at its finest. The playing field and the rules of the game have changed since the 1990s, when Microsoft was dominant. Apple is a major player, as is Google. Apple has succeeded because of tight integration and control of both hardware and software. Google went the route of web-based applications and computing in the cloud. Microsoft, the least nimble of the three, has struggled. Giving Windows a face-lift every few years and expecting everyone to upgrade to the new version doesn’t cut it anymore. More and more people are using their phones and tablets as their primary computing devices, platforms that for the most part are not running Microsoft software. Microsoft is putting all their eggs in the basket that predicts that laptops and tablets are going to converge into a single device. I’m not sure they are wrong. Laptop sales have fallen. But I personally still see tablets as devices to consume content (like reading eBooks and email, and browsing the web), whereas for creation of content (writing blogs like this one, or programming) a laptop is far easier to use. So I end up using both. Apple seems to realize that at least for now both devices play a role, and so they have two operating systems tailored for the two classes of device. Yet their upcoming versions of Mac OS and iOS also show signs of convergence. Clearly having one device to do both jobs would be nice; I just can’t envision what this device would look like.

So competition is back in the computing business, which is good. There are all sorts of directions computing can go at this point. There are a lot of choices. There have been a lot of changes. App stores with small, free or inexpensive apps compete with the old paradigm of expensive, bloated, monolithic software programs. It seemed for a while that web-based apps would dominate. These are apps that run in a browser and so are platform-independent. Good idea, especially for developers who only need to write the code once. But despite being a good idea, this is not what consumers want on their smart phones and tablets. They want native apps on each platform. So the developer (I include myself here) is forced to write two versions of each app: one in Objective-C (and soon in Apple’s new Swift language) for iOS, and one in Java for Android. Oh well, such is life.

Obviously all these changes have affected health care as well. The Internet of Things — the linking together of smart devices — shows great potential for application to health care. Not only can we monitor our individual activities with devices such as FitBit, but we also have the potential to link together all those “machines that go ping” in the hospital. The hemodynamics monitors, the ventilators, the ECG machines, and so on could be all accessible by smart phone or tablet. Integration of health care technology and patient data is certainly feasible, but, like everything else in health care, innovation is bogged down by over-regulation and the vested interests of powerful players who certainly don’t welcome competition. I hope this situation eventually improves so that health care too can take advantage of the cutting edge of the technological revolution we are experiencing today.


Futurama Revisited

GM Futurama exhibit 1964 New York World's Fair

Fifty years ago my parents took me to the World’s Fair in New York. The year was 1964. I was twelve years old. It was a turbulent time in American history. The prior fall John F. Kennedy had been assassinated, initiating a long period of turmoil for the United States. But it was still the era of America’s post-war technological greatness. The country was gearing up to fulfill Kennedy’s vision of a manned flight to the moon before the end of the decade. Products were still made in America, and we used the phrase “made in Japan” as a joke to mean something cheap and junky. People had savings accounts, and there were no credit cards. At the same time, racial discrimination and segregation were widespread. There was cringe-worthy sexism present, as anyone can tell by watching movies or TV shows from that era. There was no Medicare. US poverty levels were at an all-time high. Lyndon Johnson and Congress went on to address some of these issues with the Civil Rights Act and the Social Security Amendments of 1965, which created Medicare and Medicaid. Johnson declared the War on Poverty in 1964 and poverty levels did fall. At the same time an undeclared war in southeast Asia was to cast a large shadow over his legacy and over the lives of boys turning 18 through the next decade.

Nevertheless it was a beautiful warm summer day when we visited the Fair. I remember the day well. Having devoured the Tom Swift, Jr. books and then the science fiction of the three grandmasters, Asimov, Clarke, and Heinlein, I was filled with boundless optimism about the future of technology. The Fair was crowded with Americans who didn’t look much like Americans of today. Neatly dressed. Thin. I was old enough to notice the pretty teenage girls who were just a few years older than I, working summer jobs at the fair. I remember riding up the elevator in one of the saucer-like observation towers (you know them, they play a prominent role in the movie “Men in Black”) and shyly eyeing the cute girl seated on a stool operating the elevator controls. Yes, for you younger readers, elevators used to be manually operated. The fair made a lot of predictions, but I don’t think automatic elevators were among them.

The General Motors pavilion was aptly named Futurama. There is a YouTube video showing what it was like. I waited expectantly in the heat in a long line that stretched around the rectangular concrete windowless building. Inside we sat on cushioned chairs that automatically moved through the exhibit. There were vistas of a technologically rich future. Spacecraft exploring the moon. Scientists controlling the weather from a station in Antarctica. And in the environmentally naive outlook of that era, large machines cutting down rain forests to build roads to deliver “goods and prosperity.”

This exhibit was a highlight of the fair. Afterwards we went to the General Electric pavilion where we witnessed a demonstration of nuclear fusion (was it real? I honestly don’t know, and the Internet is vague about it). There was a loud bang and a bright light.  All very impressive, especially at my young age.

There have been a number of recent articles (e.g. here, here, and here) about the Fair and about which predictions it got right and which it got wrong. Curiously there weren’t any predictions about medical science that I remember. Maybe I wasn’t paying attention. I think I wanted to be an astronaut back then. Pacemakers were brand new, and digitalis and quinidine were staples for treatment of abnormal heart rhythms. The huge advances in medicine that were to come between then and now could not even be imagined.

I remember there was some stuff about computers, but at the time a single computer with less memory and processing power than my cell phone filled a large room. And yet it’s amazing that that level of computing power was able to get us to the moon. The thought that everyone would carry their own personal computer/communicator in their pocket was pretty far-fetched. A few years later in Star Trek Captain Kirk would use something that looked like a flip-phone, but gosh, no capacitive touch screen! It did have a neat ring tone however.

The networking together of the world’s computers (aka the Internet) was certainly not predicted. Having the knowledge of the world a few mouse clicks away is probably the most significant advance of the last 20 years or so. It has altered our lives, I believe mostly for the good (except when I read YouTube comments), in a fashion unimaginable 50 years ago. I’m disappointed that the exploration of space didn’t turn out as predicted. Where are our moon colonies, or our base on Mars? But I’m happy with the way the Information Age has turned out, and I wouldn’t trade my ability to spend an evening browsing Gigliola Cinquetti videos on YouTube for anything.

The social changes that have occurred since then have been significant and generally for the good. Communism has been marginalized and the threat of nuclear war diminished. Religious fundamentalism remains a thorn in the side of humanity, as it has always been. Certainly there is still sexism and racism and we have further to go in correcting social injustice. But if I had told my dad back in the 60s that the United States would elect a black president, I’m sure he would have said something like “That’ll be the day!”

Doctors On Call

Beep beep beep

Taking call is the worst thing about being a doctor. There. I said it. But wait! What about medical malpractice lawsuits? What about dealing with patients’ suffering or dying, either from their illness or, far worse, from decisions you made or procedures you performed? Surely these are far worse events than being on call?

Granted. However, these awful events are part of the battle that we signed up for when we made the decision to become doctors. The soldier goes into battle with the attitude that he or she will do everything possible to avoid getting shot or killed, while at the same time realizing these are distinct possibilities. So too doctors leap into the fray with a positive attitude, while similarly realizing that, inevitably, there will one day occur a bad outcome with its attendant soul-crushing consequences. These bad outcome events, similar to earthquakes, occur randomly (stochastically is the term the geologists use). If you live in California you usually don’t spend every waking minute of your day worrying about “the Big One.” So doctors don’t spend all their time worrying about bad outcomes.

I did however spend an inordinate amount of my time worrying about being on call when  I was a practicing cardiologist working for a hospital-owned healthcare system. My life was divided into two phases. Phase one occurred between call nights and was spent worrying about the next call night that was coming up. Phase two occurred when actually on call, and was worse than phase one. The only saving grace of being in phase two was that phase one was coming up soon, which was a relief. In fact, the day after call (especially after a weekend on call) I always had a sense of relative euphoria because call was over, at least until the next time.

What made call miserable? There were many elements. There were the routine calls to reconcile medication orders for newly admitted patients. Mind-numbing but easy. There were calls for clarification of orders that were already perfectly clear. There were the dreaded calls to the Emergency Room, almost always implying a new admission. There were pages for new consults, sometimes with the words “see today” appended, even though it was the middle of the night, and after talking with the nurse I still hadn’t a clue why the consult was deemed urgent. There were the routine admissions for chest pain in the middle of the night for which I would give garbled, sleepy orders, which a helpful nurse would translate into reality, at least until it was required that we enter these orders into our EPIC EHR (electronic horrible record) system directly, removing that last human barrier between sleep-deprived confusion and the patient. Finally there always seemed to be at least one “problem” patient, who was doing worse and worse despite multiple phone orders, resulting in an inevitable visit to the hospital at 3 in the morning.

My practice provided coverage to all the hospitals in Louisville, split between 2 and then 3 doctors on call (the coverage scheme kept evolving as our healthcare group absorbed more and more practices into its fold). Also covering were the cardiac interventionalists, whose on-call nights had fewer phone calls, but unfortunately each call proved significant in that it usually led to a rapid trip to the hospital to perform a coronary intervention on a deathly ill patient suffering an acute myocardial infarction (heart attack). My call nights in contrast were characterized by many phone calls (anywhere from 20 to 40 per night) punctuated by occasional trips into the hospital. Although I tried to sleep when I could, I was only intermittently successful, and the sleep achieved was a mixture of sleep phases never intended by nature.

As time went on call got worse. With more practices absorbed, more doctors were added to the call pool, but the number of patients covered also increased. The net result was that although the number of calls to be handled on a given night did decrease, the call frequency (about one weeknight a week, and one weekend every 3 or 4 weeks) never really did. So with time the dread of being on call only worsened.

Perhaps it is not widely known that doctors are not paid to be on call. This stems from the masochistic, self-flagellant tradition of medicine. In fact, if one looks across the generations of physicians, the older generation always looks down on the younger generation of doctors, feeling they have it too easy, saying things like, “if you think you have it bad, when I was training I was on call every other night,” and so forth. Just looking at my own generation, I recall that at Methodist Hospital in Houston, where I was a cardiology fellow in the early 1980s, the surgical resident in the post-cardiac-surgical ICU (this was during the heyday of Michael DeBakey) was on call for 2 months straight! He never left the unit for 2 months. They sent a barber in to cut his hair. I remember seeing him shuffling around the unit from time to time at all hours, looking like a zombie. But I’m sure his elders thought he had it easy (“in my day, we were on call for 6 months straight”). Nowadays house staff associations have brought about reforms, so that call for today’s house staff actually is easier — uh oh, there I go, proving my point.

Anyway, doctors don’t get paid overtime, or any additional time for being on call. Oh sure doctors make good salaries, and it’s always said that somehow being on call is factored into their salaries. Right. Try that with nurses, cath lab technicians, even your local plumber and see how far it gets you. But doctors do tend to just suck it up and take call, because they have a duty to their patients and there does not seem to be any other system to cover a medical practice 24/7.

But I did hate being on call more than anything, and I am happy to be free of that responsibility. My only advice to my still-working colleagues is that, when the hospital systems that own you start cutting your pay, point out to them all the back hours of overtime they still owe you.

Cloning the Doctor of the Future

Osler contemplating.
Osler contemplating.

There are now so many rules and regulations in medicine that it is difficult for doctors to express any individuality. Like the burgers at McDonalds, which are constructed in such a way that they taste the same regardless of your locale, doctors are expected to behave similarly when confronted with similar circumstances. Or at least that is how the proponents of algorithmic medicine see it. In addition, electronic health record systems create uniformity by enforcing work flows and utilizing pre-defined order sets and note templates. Everyone is supposed to write the same notes, order the same tests, do the same procedures, and make the same decisions. There is little room any more for Art in the Art of Medicine. The Standard of Care has become a razor’s edge. Minor deviations from the straight and narrow path result in a plunge into the Abyss. And the Abyss is full of nastiness ranging from medical malpractice suits to accusations of fraud by the Centers for Medicare and Medicaid Services (CMS).

It wasn’t always like this.  At the start of the 20th century we had Osler’s sound advice on the four components of the physical exam: inspection, palpation, auscultation, and, especially,  contemplation. Now the most important part of the History and Physical seems to be the 14 point review of systems. Physicians used to develop patient care plans by combining the knowledge gained from reading the medical literature with the knowledge gained by their personal experience.  Interpretation of the medical literature was tempered by the realization that study results are only generalizable in a limited way to individual patients. The best doctors were smart and experienced.  Today only evidence-based medicine is deemed acceptable.  Theoretically evidence-based medicine can be practiced by a computer or a robot.  It is deterministic medicine, devoid of the human element.  Its practitioners are interchangeable, indistinguishable units, replaceable and expendable — cogs in the broken clockwork that is the Healthcare System.

But patients are complex and evidence-based medical guidelines overly simplistic and mechanical.  Patients are not mathematical entities that can be manipulated by algorithms.  Patients have their own lives, their own priorities, and their own needs that don’t necessarily mesh well with a set of how-to-take-care-of-disease-X instructions. And the algorithms themselves are suspect.  A large number, if not most, of the evidence-based recommendations have an evidence level of “C”, meaning they are based on “authority” (the authors’ intuition?) and not on randomized controlled trials. Despite these flaws, guidelines are not just guidelines anymore.  The Google dictionary defines “guideline” as “a general rule, principle, or piece of advice.”  Unfortunately guidelines generated for doctors have turned into rigid laws. Failing to follow these so-called guidelines can result in a charge of Medicare fraud.

The American Board of Internal Medicine (ABIM) seems particularly interested in reining in doctors. Their Choosing Wisely® program attempts to define medical tests and procedures that waste money or are unhelpful.  As the program’s creator, the ABIM is very interested in seeing it implemented.  They want CMS to enforce the Choosing Wisely recommendations with financial penalties. The ABIM, which already has a stranglehold on physicians with its maintenance of certification (MOC) program, also proposes that perfect scores be achieved on questions related to costs and redundant care as a requirement for board certification. Besides the obvious conflict of interest here, is knowing the cheapest treatment really the most important thing a doctor should know?

With so many meddling busybodies trying to micromanage the practice of medicine, is there any room left for individuality?  Shouldn’t doctors be able to choose the tools and techniques that best suit them individually to achieve their ends without outside interference? Shouldn’t the results of medicine be more important than the process of medicine?  Too many third-parties like the ABIM, the insurance companies, and CMS have made it their business to tell doctors how to do their jobs.  In the Ideal Medical Universe (the IMU — an imaginary alternative-history universe in which doctors have power over their destiny) we doctors would tell them all to back off.  Then we could collectively take a deep breath and spend some time in contemplation.  Contemplation as to how best to regain control of our profession.

Medical Documentation Should Not be Tied to Billing

Happy EHR users

The idea of starting over with computerized Electronic Health Record (EHR) systems and doing them right as mentioned in my previous post has struck a resonant chord. Unfortunately designing an EHR that works may be a fantasy, due to one huge hurdle that would have to be overcome first. But it is fun to imagine an alternative universe where EHR systems were patient-centric instead of being designed to maximize patient billing. Patients ought to be central to the design of EHR systems, just as they should be the focal point of the entire healthcare system. A patient-centric EHR would also be a much easier system to use for physicians than the current billing-centric disaster we are dealing with.

The hurdle mentioned above is the tying of billing to documentation. Like the tying of health insurance to employment in the United States, this is an ill-conceived marriage. Tying billing and documentation together stems from an attempt to make the billing process as granular as possible, to the point that documenting an extra few points in the review of systems results in increased billing. This system has created a cottage industry of coding specialists, but does not seem to have any other real advantages.  There are plenty of downsides.  Documentation becomes a surrogate for the actual work done by the physician. And since the default assumption appears to be that if you did not document something you did not do it, physicians are constantly concerned with whether they are documenting correctly.  Incorrect documentation can lead to over-billing or under-billing and even charges of criminal fraud. The rules for determining proper billing levels are complex and open to interpretation. Like the IRS tax code, medical coding is a huge mess.

A major issue with this system, apart from the neuroses it imposes on physicians trying to bill correctly, is the bloat in documentation that occurs. Current EHRs allow cut and paste and carrying forward of information from previous notes. It is easy to have a template with boiler-plate text inserted about discussion of risks and complications, even if such a thorough discussion never occurred. A few clicks and a complete review of systems appears in the chart, whether or not it was done. The result is a very detailed note, billable at a high level, that may not properly reflect anything about the actual interaction between the physician and patient. The note is large, but the signal-to-noise ratio is small.

All this stems from the present conjunction of billing and documentation necessitated by these very granular billing rules. If billing were not tied to documentation, then its only purpose would be to record information useful for the treatment of the patient. A much shorter note would suffice. The review of systems would not be repeatedly documented by every specialist who sees the patient. Nor would the family or social history, which presumably does not change very rapidly over time. If the physical exam has not changed, it would be ok to write “no change in physical exam.” There is no need to embellish such a statement, other than the current incentive to provide physical exam points for coders to calculate billing.

How could billing be decoupled from documentation? Make it less granular. Instead of 5 office E/M follow-up visit levels, just have one. Sure, some visits are longer than others. But it would probably all even out over time and the savings in the cost of documentation and coding would be worth it. Same with hospital visits. One level for new visits, one for follow-up. Procedures also shouldn’t be coded with such complexity. A catheter ablation would have one code, regardless of what was done during the ablation. This may strike some as unfair. You wouldn’t get extra credit for an unusually long and difficult ablation, but you also would get more than you really deserved for a nice short, easy procedure. Again, the simplification of coding would, in my opinion, outweigh the disadvantages of this system. Think of this in the same way as some have approached simplifying the IRS tax code. A simple graduated tax, with no complicated exemptions or credits, would probably in the long run bring in more money, even if the tax rates were lower, because it would be less costly to apply and it would be harder to game the system.
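A flat fee schedule like this is simple enough to put into code, which is rather the point. Here is a minimal sketch in Python; the encounter types and dollar amounts are invented for illustration and are not actual Medicare figures:

```python
# Hypothetical flat fee schedule: one code per encounter type,
# regardless of visit length or procedure complexity.
FLAT_FEES = {
    "office_new": 150,       # all new office visits bill the same
    "office_followup": 75,   # one follow-up level instead of five
    "hospital_new": 200,
    "hospital_followup": 100,
    "ablation": 5000,        # one code whether the case is easy or hard
}

def bill(encounters):
    """Total charges for a list of encounter-type strings."""
    return sum(FLAT_FEES[e] for e in encounters)

# A week of mixed encounters totals up with no coding specialist needed.
week = ["office_followup", "office_followup", "hospital_new", "ablation"]
print(bill(week))  # 75 + 75 + 200 + 5000 = 5350
```

Note that the entire “coding” process reduces to a dictionary lookup; there is nothing left for a documentation audit to second-guess.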

Once documentation is uncoupled from billing, it would only need to indicate that you made a visit or did a procedure to satisfy billing requirements. After that, documentation could resume its proper place, recording brief notes about patient progress, changes in history and physical exam, lab tests, diagnosis and treatment. Designing a useful EHR around such a paradigm would be simple. Notes could be handwritten, dictated, or typed on a mobile tablet. Patient information should be in a universal data format, accessible to any involved physician via the Internet. Cloud-based recording of drug and pharmacy data should also be universally available through the EHR interface to doctors, nurses, patients, and pharmacies. Billing would be simple. If you wrote a note on a certain day, you would be credited for a hospital visit, or office visit, or procedure.
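As a thought experiment, such a universal patient record could be as simple as a plain JSON document: the clinically useful fields, plus the encounter type, which by itself determines the bill. The field names and identifier below are made up for illustration and are not any real standard (a real system would presumably build on something like HL7):

```python
import json

# Hypothetical vendor-neutral note record. Billing is reduced to the
# encounter type itself; everything else is clinical content.
note = {
    "patient_id": "MRN-000123",          # invented identifier
    "date": "2014-06-01",
    "encounter": "office_followup",      # this alone determines the charge
    "note": "Stable on current meds. No change in physical exam.",
    "meds_reconciled": True,
}

# Serialized as plain JSON, readable by any EHR, pharmacy, or patient.
wire_format = json.dumps(note, sort_keys=True)
print(wire_format)
```

A short note like this is the whole record of the visit. Compare that with the multipage mega-note and you can see where the signal-to-noise ratio went.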

I will leave fleshing out the details as an exercise for the reader. If we could somehow loose (and I do mean loose here, grammar nit-pickers) medical documentation from the bonds of billing, a well-designed EHR would be a joy to use.

How to Build a Better EHR

Did McCoy's tricorder have POE?

A lot has been written about how awful Electronic Health Record (EHR) systems are. They are overwrought, overengineered, dreadfully dull Baroque systems with awkward user interfaces that look like they were designed in the early 1990s. They make it too easy to cut and paste data to meet billing level requirements, documenting patient care that never happened and creating multipage mega-notes, full of words signifying exactly nothing. They have multitudes of unnecessary, meaningless, oops, meaningful use buttons that must be clicked because the government says so. They have data formats that are incompatible with other EHR systems. Doctors fumble around trying to enter orders using electronic physician order entry (POE). There is terrible user support. And so on. At the end of the day there is decreased productivity, doctors are unhappy, and patients are unhappy. Big Brother in the form of the hospital and the state have more Big Data to look at, but certainly there doesn’t seem to be much benefit to patient care. The major benefit is to the companies that make these proprietary closed-source EHR systems. They get obscenely rich.

But surely there can be benefits to EHR systems? What about the ease of access to the patient’s chart? No more waiting for the chart to come up from Medical Records. In fact, no more Medical Records department at all! Aren’t we saving health care dollars by cutting out those jobs, as well as medical transcriptionist jobs and unit secretary jobs? Surely paper charts were worse?

Doctors should not turn away from information technology. After all, we use all sorts of sophisticated computer technology every day, from the internals of the ultrasound machine, to the software running an MRI scanner, to the recording system used in electrophysiology procedures. There is a role for technology in our record keeping as well.

The problems with current EHR systems are manifold. They are hack jobs, with nightmarish interfaces that obviously were never user tested. They are overly ambitious, trying to do all things and thus doing nothing well. They are ridiculous. I mean, having doctors enter orders directly into a computer — seriously? EHR companies have no incentive to improve their user interfaces, because government mandates require that they be used no matter how awful they are. Those who don’t adopt these systems are penalized by loss of Medicare dollars.

I think it is an interesting thought experiment to consider how EHR systems would have been designed if they had been allowed to evolve naturally, without the frenzied, poorly thought out incentives that exist in the real world. Imagine a world where physicians, the primary users of these systems, drove development and adoption of these systems. Imagine that there were no mandates or penalties from the government to adopt these systems. If a system was developed that improved physician workflow, it would be adopted. Nothing that slowed productivity, as the current EHR systems do, would ever be bought by a practice if the physicians made the call. Imagine EHR companies visiting practices, analyzing work flows, seeing areas that could be improved by computers, and recognizing areas that wouldn’t, at least with current technology. Imagine EHR companies testing their user interfaces using doctors from a spectrum of computer experience, as major software companies like Apple and Google do. Imagine them competing with each other not on how many modules they can provide, but on how few keystrokes or mouse clicks their system used to do the same work as another system. Imagine no government mandates for meaningful use, no dummy buttons that say “Click Me” but otherwise do nothing.

Think about how you would design a system. Certainly it is useful to have old records available online and we would want to keep that. The problem is how to get them there. Having physicians enter data is probably the least efficient way. Dictation and handwriting are still the fastest data entry methods. If Dragon is good enough (I’m not convinced it is) use it, or keep your transcriptionists around. They are very nice people who need jobs anyway. If handwriting recognition is good enough (I don’t think it is yet) use that, otherwise just store the written notes as pictures and be satisfied. In the ideal world, rather than force physicians to become typists and data entry specialists, we would wait until computer artificial intelligence was developed enough to allow the physicians to continue to do things the old way, with the computer processing the doctors’ notes transparently. If the technology isn’t there yet, develop it, but don’t push it on us prematurely.

Medical records primarily should exist to document important information about patients. They should not primarily be a means to ensure maximum billing of patients. If we eliminate that aspect, EHRs become much simpler. I would envision a small tablet that the MD carries everywhere with him or her. Keep the old workflow. Pull up patient records on the tablet. Write notes on the tablet in handwriting, or dictate into it. The tablet transcribes the input and files it appropriately. Need to give patient orders? Select from some templates or write them in. If the software is not good enough to transcribe written orders on a tablet, hire some unit secretaries to do this like they used to. Let them learn the intricacies of computerized order entry, and let the doctor deal with the intricacies of making diagnoses, doing procedures, and looking patients in the eye and grasping their hands when they are ailing — things that doctors do best! Minimize the interactions with the computer and maximize the interactions with the patients!

A good EHR system can simplify drug reconciliation, pull in drug data from patient pharmacies, and automatically identify patients who are being “overprescribed” pain meds. The system can look up recent relevant medical articles, can show appropriate medical guidelines, and can provide sophisticated medical calculators. There are so many good things computers can do for medicine. They’ve gotten an awfully bad rap from the current iteration of EHR systems. I think the technology exists or can exist to do all these good things, but there is no incentive if we remain satisfied with the status quo. The current systems don’t do any of these things. They just get in the way.
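To make one of these features concrete, here is a sketch of how an EHR might flag “overprescribed” pain meds from pooled pharmacy data. The drug list, the threshold, and the data format are all invented for illustration; a real system would use actual formularies and clinically validated limits:

```python
# Illustrative check: flag patients whose opioid fills, pooled across
# all pharmacies, exceed a monthly threshold. All values are made up.
OPIOIDS = {"oxycodone", "hydrocodone", "morphine"}
MONTHLY_LIMIT = 90  # tablets per 30 days; an assumed policy number

def flag_overprescribed(fills):
    """fills: list of (patient_id, drug, quantity) from pooled pharmacy data."""
    totals = {}
    for patient, drug, qty in fills:
        if drug in OPIOIDS:
            totals[patient] = totals.get(patient, 0) + qty
    return {p for p, total in totals.items() if total > MONTHLY_LIMIT}

fills = [
    ("A", "oxycodone", 60), ("A", "hydrocodone", 60),  # 120 total: flagged
    ("B", "morphine", 30),                              # under the limit
    ("C", "lisinopril", 90),                            # not an opioid
]
print(flag_overprescribed(fills))  # {'A'}
```

The point is not the particular threshold but that the check only works if pharmacy data is pooled across providers — exactly the kind of cross-institution plumbing that current siloed systems make hard.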

If we lived in an ideal world it would be time to chuck the lot and start over.

Man of Bronze

James Bama's rendition of Doc Savage

Unless you are an initiate, it is difficult to explain the appeal of literature from the era of the pulp magazines. In fact most literary high-brows would insist on putting that word literature into quotes when referring to the pulps. The heyday of the pulps was in the 1930s and 40s. Afterwards they quickly disappeared, replaced by comic books and paperback novels. During their golden era, coinciding with the Great Depression and World War II, they were a major source of entertainment for the people who had to suffer through those bitter times. The novels and stories printed in magazines featuring larger-than-life heroes like The Shadow, The Spider, and Doc Savage were churned out by a relatively small number of authors, who sometimes submitted works to competing publishers by hiding behind multiple pseudonyms. These writers worked under tight deadlines and produced hundreds of thousands of words each month. Under such stressful writing conditions, one does not produce masterpieces. Much of what was published back then is forgettable and forgotten. But some, despite blemishes and warts, lives on.

I wasn’t alive back then (I’m not that old), but was a teen of just the right vulnerable age back in the 1960s when Bantam Books started reprinting the Doc Savage tales, starting with The Man of Bronze in 1964. There is no doubt that the James Bama cover played a big role in my decision to purchase that paperback, and the multitude of reprints that followed. The 1960s were extraordinary years for the rediscovery of adventure and fantastic literature that otherwise might have been forgotten. Nearly all of Edgar Rice Burroughs’s works were reprinted by Ace and Ballantine Books. Tolkien was published in paperback in three thick volumes by Ace (violating copyright), and then republished again (with an intro by the good professor himself) legitimately by Ballantine Books. There appeared Mervyn Peake’s masterful Gormenghast trilogy. Lin Carter was reprinting fantasy by James Branch Cabell and others. You get the idea. It was a great time to be a teenager.

And so I read the adventures of Doc Savage and his 5 aides, plus his spunky and somewhat troublesome female cousin Pat Savage. Doc was not Superman (though he did have a Fortress of Solitude in the Arctic before Superman copied the idea). Doc was human, but trained from birth to become an expert in all fields of knowledge. On top of that he was physically in top-notch condition. His father had, obviously without his consent, submitted him to this training in order to prepare him for a lifetime of fighting crime and evildoers. Doc had his headquarters on a top floor of the Empire State Building in New York. His 5 aides were there to help him, but also provided some comic relief, especially the homely chemist Monk Mayfair and dapper lawyer Ham Brooks. Each adventure (initially they were published monthly, then less frequently) pitted Doc against some master villain, mad scientist or monster. The names of the sagas are particularly evocative. Some examples: The Land of Terror, The Sargasso Ogre, The Thousand-Headed Man, The Annihilist, The Motion Menace.

Doc was conceived by a group of editors at Street and Smith Publications and first appeared in 1933. The author who wrote the majority of the tales and whose name is forever associated with Doc was Lester Dent. Dent used a formula to write the novels, which basically involved getting the hero in as much trouble as possible and then throwing in as many plot twists as possible. In general it works. There are some clunkers (mostly written by the “ghost” writers that Dent hired when he didn’t have time himself) but some of it is amazingly well-paced and well written, for example, the posthumously published The Red Spider in which Doc deals with the communist Soviet Union. As Philip Jose Farmer pointed out in his study of the Doc Savage books, Doc Savage, An Apocalyptic Life, it takes a reading of all of the books (181 original, plus newer ones mentioned below) to flesh out fully the character of Doc. Having read them all, I believe this is true. Doc starts out in the first books as somewhat flat, wooden, and invulnerable. He is not only a perfect physical specimen, but he is aided by various contraptions (such as anesthetic gas pellets) that he carries in a utility vest (much like Batman’s utility belt, clearly based on Doc’s vest), and, truth to tell, he has a good share of luck going for him that keeps him alive from adventure to adventure. As the years go on, and Doc enters the years of the Second World War, his resources seem to dry up somewhat, he becomes less dependent on gadgets, but also becomes more human and more vulnerable. He mentions his unusual upbringing and admits that it has affected him in a negative manner. He knows he is not normal. One wonders what his true feelings are towards his father, who arranged such an abnormal upbringing.

Around 1990 Bantam finished republishing the original Docs, and Philip Jose Farmer wrote a new one, Escape From Loki, published in 1991. Will Murray then took up the mantle. Starting with Python Isle in 1991 he wrote and published 7 more sagas. There followed a hiatus until a few years ago he resumed the series with The Desert Demons in 2011. He has written 8 of these new Wild Adventures of Doc Savage, the most recent as of May 2014 being The War Makers. This includes one cross-over novel with the King Kong universe, Skull Island.

Murray has studied Doc for years, and was acquainted with Dent’s widow, Norma. He is the authorized heir to the Kenneth Robeson name (the pseudonymous house name for Dent and the other Doc writers). His Writings in Bronze is a thick book of essays about Doc and his writers. If you love this stuff like I do, this is required reading, as well as works like the various attempts to fit Doc’s adventures into a chronology by Rick Lai and Jeff Deischer.

Murray does a great job emulating the style of Dent and the other writers from the 1930s and 40s. He makes no attempt to update Doc to the modern era. He is particularly good at coming up with quaint 1930s idioms that no one uses anymore. He emulates Dent’s habit of sometimes starting a sentence with a verb, which makes the action seem to rush a little faster. Instead of “There came a loud explosion,” he would write “Came a loud explosion.” He dutifully pushes all the buttons and rings all the bells when describing Doc and his aides, using phrases that are in all the books, but which anyone who has read every book already knows by heart. Things like Monk being so ugly that women are attracted more to him than to the sartorially splendid Ham, or Renny’s fists being like gallon buckets of flesh and bone, and so forth. All this is comforting when reading these new adventures. Clearly these are the same Doc and company that we know so well.

Murray is basing his new stories on unpublished outlines written by Dent. I am always a little curious about how much creative license is involved here. I remember the so-called “collaborations” between August Derleth and H.P. Lovecraft, in which Derleth would create a 100,000 word novel based on two words Lovecraft had written at the bottom of an envelope (if that). But Dent often farmed out his work to other writers, and did so by writing outlines that the ghost writers fleshed out. So I think Murray is doing nothing more than what the other writers of Doc did. I feel these are legitimate additions to the canon.

The only criticism I have is that sometimes Murray’s writing style is “too good” compared with the original. One of the charms of the original Dent works is the sense of the haste with which these novels were written. There are the occasional grammatical and punctuation errors, or plot inconsistencies. Sometimes these spoil the stories somewhat, but sometimes they add to the feeling of very fast pace that is present, enhancing the excitement. Murray is a very good writer and he does polish his work, something the original pulp writers didn’t have the luxury to do. I am not complaining. The particular, peculiar circumstances that led to the pulps are long gone. Murray has taken Doc and his crew to new places, has introduced new and interesting opponents, and generally has done his utmost to keep the ride going. And this is something I, as a long-time fan, really appreciate.

If you want to check out some of the original Doc Savage novels, or the new ones by Will Murray, or other pulp heroes like The Spider, and a whole lot more, go to www.radioarchives.com. The All-New Wild Adventures of Doc Savage are available in various formats at adventuresinbronze.com.

Stay away from the 1975 Man of Bronze Movie, however.

The Magic of Medtner

Nikolai Medtner

When I was in college in the late 1960s, early 70s, electronic and avant-garde music was all the rage, at least in my circles. Honestly, everyone else listened to Rock, but I was fascinated by what is ineptly named “Classical Music.” In the 20th century, a century of the utmost human drama and scientific progress, there was a notion that music constantly needed to evolve. The problem was that most of the possible harmonic evolution in so-called “tonal” music had already occurred in the 19th century. Starting with Beethoven, who bridged the classical and romantic periods, developed by Chopin and Liszt, and culminating with Wagner, the master musical manipulator of emotion, pretty much every possible harmony that made any kind of sense in a tonal system had already been written by 1900. In the 20th century there then were two kinds of musicians: those who took the 19th century harmonic palette and wrote works with it, and those who decided to test the limits of music by going off into new directions. Composers like Rachmaninoff, Ravel, Debussy, Prokofiev, and Bartok are examples of the first type of composer, and Schoenberg, Stravinsky, Berg, Webern, Ives, Boulez, Carter and Stockhausen are examples of the second type of composer. Some will object that the two classes aren’t really separate, that composers like Prokofiev and Bartok were harmonically and rhythmically audacious, and composers like Stravinsky and Schoenberg wrote tonal works early on. The distinction I would like to make, though, is that the first group of composers never abandoned tonality, while the second group were more radically experimental and wrote truly atonal music.

In the 20th century the composers of the former group were often looked down upon, whereas the second group, the experimenters, were the darlings of the musical world. This attitude led to curious concert programming, with 19th century staples sharing the concert stage with the latest aleatoric piece by John Cage. Nevertheless the listening public always preferred the more tonal pieces, and today the 20th century experimenters in music are the ones neglected, whereas many fine 20th century tonal composers are being rediscovered.

I was very enthusiastic about avant-garde music in college. I liked electronic and musique-concrete experimenters like Edgard Varese and Karlheinz Stockhausen. I wrote electronic music in the Bregman Electronic Music Studio at Dartmouth under the tutelage of composer Jon Appleton.  (Don’t believe me?  Check out the Baker Library Catalog.)  I corresponded with Elliott Carter who was nice and patient enough to write me back a long letter about modern music. My piano teacher was Milton Babbitt’s uncle. Babbitt wrote serial music that serialized not just tones, but dynamics, timing, and so forth. Listening to it brings to mind the quote that Mark Twain was fond of using about Wagner’s music: “It’s better than it sounds.” My piano teacher though had also met Rachmaninoff, and, despite the experimentation of my college years (doesn’t everyone experiment in college?) I never lost my love for tonal music, especially as written by the great Russians: Rachmaninoff, Scriabin (who remained tonal but did develop some unconventional harmonies), and of course the subject of this post, Nikolai Medtner.

Medtner lived from 1880 to 1951. He was a friend of Rachmaninoff. The two of them corresponded, comparing notes while composing their (Medtner’s second and Rachmaninoff’s fourth) piano concertos. Superficially the two composers have a similar harmonic style. Medtner, perhaps not quite as melodically gifted as Rachmaninoff (was anyone?), exceeds his colleague in the complexity of his counterpoint and depth of his compositions. Harmonically he used the complete palette of romanticism, but did not forge new ground. He does have a very distinctive and recognizable style, much like Rachmaninoff has. (I once heard on the radio a piece I had never heard before, and knew it had to be Rachmaninoff from the style, and it was.) Like Chopin, Medtner wrote almost exclusively for the piano. He wrote 14 piano sonatas, many other pieces for solo piano (including the Skazki or Fairy Tales), a few chamber works, and three piano concertos. Medtner recorded some of his works on 78 rpm records at the end of his life. He was a virtuoso pianist on the order of Rachmaninoff (one has to be to play most of his works). Unlike some composers, his works stand up to repeated listenings; in fact, it takes multiple listenings to get the most out of them.

My friend on YouTube who goes by the handle itchy2345 has recorded many of the solo pieces by Medtner. She plays beautifully. Here is an example, the Fairy Tale, opus 20, no 1:

You would do well to explore the other Medtner works she has recorded. Medtner is now well represented on YouTube and on recordings. The 3 piano concertos are amazing. The best is the second, particularly the first movement. The way the themes kind of melt into each other at the end of the movement is typical of Medtner. As always, Marc-Andre Hamelin is great at this sort of music:

Unfortunately Medtner died without much recognition (that old story). Fortunately he is now finally getting his due. He is the sort of composer whose music is fascinating and grows on you. If you like classical music of the romantic period, give him a listen.