Category Archives: Medicine

Hacking the QTc

Long QT, torsade de pointes

The QT interval—a measure of the duration of the overlapping action potentials from two billion ventricular muscle cells—has fascinated physiologists since the dawn of electrocardiography. Too long or too short, it can be a harbinger of ventricular arrhythmias and sudden death. Sensitive to electrolytes, drugs, and autonomic tone, susceptible to congenital ionic channel mutations, difficult to measure (which lead? where does it end? what about the U wave?), and markedly varying with heart rate—the QT interval is clinically important and, at the same time, elusive. To distill the essence of the QT interval and separate out the volatile heart rate dependent components, the corrected QT interval (QTc) was devised. Succumbing like everything else to automation, the QTc has become just another number printed in the upper left corner of a digital electrocardiogram, along with the PR and QRS intervals, the QRS axis, and the patient ID. Lulled into complacency by its automatic generation via algorithm (despite the lurking disquiet engendered by the knowledge that the very same algorithm occasionally reads normal sinus rhythm during complete heart block), few bother to ask: Where does that number come from? What formula was used to derive it? Is the corrected interval actually correct?

For those who care about such questions, the QT can be manually measured and the QTc calculated. Most use the hoary Bazett formula dating from 1920, relating the QT to the square root of the cardiac cycle length. Some are aware of a few other formulas: Fridericia, Hodges, or Framingham. There are many online and native app QTc calculators–in fact my apps EP Mobile and EP Calipers have built-in calculators for all four formulas. There seems to be little need for yet another QTc calculator app.  Nevertheless I have written one, EP QTc, and I should explain how that came about.

Formulas, formulæ

There are more QT corrective or predictive formulas in the medical literature than you might imagine—at least 40.   Rabkin et al. collected 31 of these formulas and worked out a standard nomenclature and classification scheme. Rabkin does not actually give the mathematical equations involved. In fact, nowhere are these formulas collected in a single source.  And what good are formulas if you can’t apply them?  On a whim I thought it would be interesting to write an app that would calculate the QTc using not just one or four formulas, but all the formulas given by Rabkin. The app would also provide details about each formula and statistics and graphs of the results.  I wasn’t sure who would be interested in such an app (probably no one), but at the same time I saw it as a simple project that might make QTc calculating more fun while putting this mass of QT correction literature into perspective. It turns out, it wasn’t such an easy matter.

Paywalls galore

Starting at the beginning, I looked up Bazett’s original article published in 1920. The only online source for the Bazett article is the Wiley Online Library. The site says the article was first published on October 27, 2006. No, the article is from 1920, and this is a reprint of the original. According to US copyright law, anything published before 1923 is in the public domain. I’m sorry, but reprinting an article that is in the public domain does not restart the copyright clock. Nevertheless, the only way to get a digital copy of this historically important article is to pay an extortion fee of $38 to the wily racketeers at Wiley, who have managed to kidnap this article and hold it hostage for almost a century.

What was true of Bazett was also true of the vast majority of the articles I was seeking. The QT correction literature, like most science, is locked up behind paywalls. Lacking institutional access and repelled by the idea of shelling out vast quantities of cash for papers, many of which were in the public domain, I faced a major obstacle. Fortunately I enlisted some colleagues with digital library access to help liberate these publications, and I eventually managed to get nearly all the primary sources for the different QT formulas. Beyond these paywalls, there were other lesser hurdles to leap over, but we’ll get to them later. In the meantime, you may be asking…

What’s wrong with Bazett?

Almost every QTc calculator uses the Bazett formula. Why not? It’s simple and can be solved with any device (slide rule or something more advanced perhaps) that does square roots. It was the first QTc formula developed. So why did 30 or more other investigators, dissatisfied with Bazett, feel the need to develop their own formulas? What’s wrong with classic Bazett?

Reading the original Bazett article is interesting (though still not worth $38). We travel back to a simpler time when the ECG was relatively new, and the only leads were I, II, and III. Bazett was interested in the dependence of the duration of mechanical systole on heart rate, and, lo, this particular interval on the ECG, the QT, seemed like a good surrogate to study this. Professor Bazett was able to gather a grand total of 39 healthy subjects, 20 men and 19 women, aged 14 to 53 (though one subject’s age is listed merely as “Boy”), and measure their heart rates and QT intervals. In some cases individual values were given, in others averages of several values were used. Several subjects were not his own; their data were borrowed from Dr. Thomas Lewis. From this small selection of messy data points Bazett came up with what is still considered the gold standard QTc formula used today:

QTc = QT/√RR.

QTc or QTp?

Well, not exactly. Bazett and most of the early investigators did not create QTc formulas, i.e. formulas intended to give an idealized QT interval independent of heart rate. Bazett and his colleagues were interested in predicting what the QT interval should be at different heart rates. This is the QTp, the predicted QT interval. Bazett’s published formula was:

QT = K·√RR, where K = 0.37 for men and 0.40 for women (units in seconds)

Similarly the Fridericia formula, also published in 1920, was:

QT = 8.22·∛RR, with QT and RR in units of 0.01 sec

Yes, you read that right. The units are hundredths of a second. Ugh.

As it turns out, one can mathematically convert any QTp formula to a QTc formula, given the assumption that the QTc is independent of heart rate and that the QTc equals the QTp at a heart rate of 60. The process is left as an exercise for the reader :). Later authors took the Bazett, Fridericia and many other QTp formulas and converted them to clinically more useful QTc formulas.
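To make the conversion concrete, here is a minimal Python sketch (my own illustration, not code from the EP QTc app or the QTc library; the function names are invented) of the Bazett and Fridericia QTp formulas and their power-law QTc counterparts, with all intervals normalized to seconds:

```python
import math

def qtp_bazett(rr, sex="male"):
    """Bazett's predictive formula: QT = K * sqrt(RR), QT and RR in sec."""
    k = 0.37 if sex == "male" else 0.40
    return k * math.sqrt(rr)

def qtp_fridericia(rr):
    """Fridericia's QTp. Original: QT = 8.22 * cbrt(RR) with QT and RR
    in units of 0.01 sec; rescaled here so rr is in seconds."""
    return 8.22 * 0.01 * (rr / 0.01) ** (1.0 / 3.0)  # ~0.3816 * cbrt(rr)

def qtc_power(qt, rr, a):
    """QTc for a power-law QTp (QT = b * RR^a), assuming the QTc is rate
    independent and equals the QTp at RR = 1 sec (heart rate 60)."""
    return qt / rr ** a

qt, rr = 0.36, 0.80  # measured QT 360 msec at heart rate 75 bpm
print(round(qtc_power(qt, rr, 0.5), 3))        # Bazett: QT / sqrt(RR)
print(round(qtc_power(qt, rr, 1.0 / 3.0), 3))  # Fridericia: QT / cbrt(RR)
```

At a heart rate of 60 (RR = 1 sec) both corrections leave the measured QT unchanged, which is the whole point of the exercise.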

In search of a better Bazett

No one was able to reproduce Bazett’s results. Many authors found that Bazett’s QTc formula tended to overcorrect the measured QT interval at high heart rates, and undercorrect it at low heart rates (e.g. see here). Certainly with such a low N and primitive methodology, Bazett may have mischaracterized the QT vs RR curve. Perhaps the exponent in the formula is not 0.5, or perhaps relating the QT to a power of the RR is not even the right kind of function to use.  The disturbing fact is that each group of investigators who has studied the relationship between the QT interval and heart rate has come up with a different formula.

Linear, power, logarithmic, exponential—oh my!

In reviewing the QT papers, including some studies using tens of thousands of patients, it is remarkable how inconsistent the findings are with regard to the shape of the QT vs RR curve. Some authors find a straight line, with a linear function underpinning the relationship. Others find curvature at either end of the heart rate spectrum. The resultant equations are sometimes logarithmic or exponential.

Rabkin uses a classification scheme, which I adopted in the EP QTc app:

Classification | QTp               | QTc
linear         | QT = b + a*RR     | QTc = QT + a*(1 - RR)
rational       | QT = b + a/RR     | QTc = QT + a*(1/RR - 1)
power          | QT = b*RR^a       | QTc = QT/RR^a
logarithmic    | QT = b + a*ln(RR) | QTc = QT - a*ln(RR)
exponential    | QT = b + a*e^-RR  | QTc = QT + a*(e^-RR - 1/e)

(* = multiplication, ^ = raised to the power.  Table modified from Malik et al.)

This table also shows how each QTp formula can be converted to a QTc formula. Any QTp formula can be converted to a QTc formula, so theoretically there are as many QTc formulas as QTp formulas. Rabkin lists many more QTp formulas than QTc formulas. Evidently in many cases the conversion has not been considered worth the effort.
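As a sketch of how these conversions might be coded, here is one generic Python function covering all five classes in the table (my own illustration, not the QTc library's API; the coefficients a and b vary from study to study):

```python
import math

def qtc_from_class(kind, qt, rr, a):
    """Convert a measured QT (sec) at cycle length rr (sec) to a QTc,
    per the table's conversion rules. All five classes assume the QTc
    equals the measured QT at RR = 1 sec (heart rate 60)."""
    if kind == "linear":
        return qt + a * (1 - rr)
    if kind == "rational":
        return qt + a * (1 / rr - 1)
    if kind == "power":
        return qt / rr ** a
    if kind == "logarithmic":
        return qt - a * math.log(rr)
    if kind == "exponential":
        return qt + a * (math.exp(-rr) - 1 / math.e)
    raise ValueError("unknown classification: " + kind)

# Sanity check: at RR = 1 sec every class returns the measured QT unchanged.
for kind in ("linear", "rational", "power", "logarithmic", "exponential"):
    assert abs(qtc_from_class(kind, 0.40, 1.0, 0.5) - 0.40) < 1e-12
```

Note that the intercept b drops out of every conversion; only the rate-dependent coefficient a survives.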

Typos and unit confusion

Back to the vicissitudes of creating the EP QTc app.  The tale of woe continues with multitudes of typographical errors in the sources and inconsistency of units in the formulas. Typos include mistranscribing formulas in secondary sources (e.g. reading 7 instead of 1 in a tiny exponent), rounding errors, and just plain poor proofreading. I will not mention specific sources, but these types of mistakes seem to be common in the medical literature.  Sure glad we’re paying those publishers all that money for quality control.

As to unit confusion, we already alluded to the use of 0.01 sec as the base unit in the Fridericia formula. Various authors use heart rate as opposed to cycle length in their formulas.  They are inversely related and the use of different terms makes it hard to compare formulas to each other.   Adding to the confusion is that formulas almost invariably use an RR interval measured in seconds, but then sometimes in the same formula require a QT in milliseconds.   Sometimes the units used for the dependent variables aren’t made clear.   Most authors also don’t seem to realize that the results of non-linear QTc formulas aren’t really in units of sec or msec. For example, Bazett QTc units are sec/√sec, i.e. √sec (or worse, msec/√sec).  To be fair, I sidestep this issue in the EP QTc app as well.  To my mind this unit confusion just emphasizes what an artificial thing a QTc is.
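A small normalization layer takes some of the sting out of this. Here is a hypothetical helper (invented for illustration, not part of the QTc library) that accepts either heart rate or cycle length, in seconds or milliseconds, and always hands back an RR interval in seconds:

```python
def rr_in_sec(value, is_heart_rate=False, in_msec=False):
    """Normalize any common input to an RR interval in seconds."""
    if is_heart_rate:
        return 60.0 / value          # beats per minute -> seconds
    return value / 1000.0 if in_msec else value

# All three describe the same cardiac cycle (heart rate 75 bpm):
assert abs(rr_in_sec(75, is_heart_rate=True) - 0.8) < 1e-12
assert abs(rr_in_sec(800, in_msec=True) - 0.8) < 1e-12
assert abs(rr_in_sec(0.8) - 0.8) < 1e-12
```

Once every formula is fed the same normalized units, the 0.01 sec quirks of 1920 stay safely quarantined inside each formula's implementation.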

Nomenclature

Having obtained sources for all the formulas mentioned in Rabkin (and a few more), I applied Rabkin’s proposed nomenclature. This consists of a 6 letter code for each formula: the first 3 letters QTc or QTp, and the last three based on the first author’s last name. Thus Bazett’s QTc formula is QTcBZT. The Framingham study QTc formula, less well known by its first author (Sagie) is QTcFRM. There are some inconsistencies in the nomenclature which I have tried to correct. For example, Kligfield’s formula is given as QTpKLN in Rabkin, since Kligfield is misspelled as Klingfield. Oh well.

Sex and age

Some formulas differ depending on the sex or age of the subject, or both. The QT interval tends to increase with age and is longer in adult women. So some formulas require entering the age and/or sex. These formulas will simply refuse to give a result if these parameters aren’t present.

A tough question is how to apply QT formulas to subjects who don’t match the study population. I excluded formulas that were derived only from children. The study populations are predominantly adults, though a few also included children. Some studies enrolled only men. Is it reasonable to apply a formula derived from data from only men to a woman? In the EP QTc app I avoid such issues and leave it up to the user to deal with this question.

What is normal?

Here is another Pandora’s Box. Just as there are many QTc formulas, there are many papers dealing with establishing the normal QTc. Given syndromes of sudden death related to short QT intervals, both boundaries of normal need to be considered. I have gathered these papers together along with their QT interval cutoffs. These are often sex-specific, and sometimes gradations of abnormality are assigned, e.g. borderline and abnormal, or mildly, moderately, or severely prolonged. In the app the user can select from among these published criteria to define whether a result is normal or not.  In practical clinical use, the QTc interval is only one component in the risk scales needed to establish the diagnosis of long or short QT syndrome.

What about QTp intervals?

By definition a QTp interval is normal. Rabkin proposes that, since QTp formulas were derived from multiple different populations, QT intervals outside the range of all defined QTp intervals may be considered abnormal. I have implemented this algorithm in the EP QTc app. One objection to this approach is that QTp formulas (with some exceptions) give mean values for normal QT intervals.  Thus one would expect the range of normal QT intervals to be somewhat larger than the range of all possible QTp intervals. One should probably take this into account when interpreting the QT vs QTp interval statistics and graphs.
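In code the idea might look like the following sketch. The two QTp formulas here are simplified stand-ins (men's coefficients, units of seconds), and the cutoff logic is just my reading of Rabkin's proposal, not the EP QTc app's actual implementation:

```python
import math

def qtp_bazett(rr):
    return 0.37 * math.sqrt(rr)          # QT in sec, men

def qtp_fridericia(rr):
    return 0.3816 * rr ** (1.0 / 3.0)    # QT in sec, rescaled to seconds

def qt_outside_qtp_range(qt, rr, formulas):
    """True if the measured QT falls outside the range of all QTp
    predictions at this cycle length, i.e. possibly abnormal."""
    predictions = [f(rr) for f in formulas]
    return qt < min(predictions) or qt > max(predictions)

formulas = [qtp_bazett, qtp_fridericia]
print(qt_outside_qtp_range(0.50, 1.0, formulas))   # above every QTp
print(qt_outside_qtp_range(0.375, 1.0, formulas))  # inside the range
```

With 30-odd QTp formulas in the list instead of two, the range widens accordingly, which is exactly the objection raised above: the range of mean values understates the range of normal.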

QT library and EP QTc app

All of the data on QTc and QTp formulas have been incorporated into a QTc library. This library is open source and free to use. It can be used with any iOS or macOS project. The library includes functions that make it easy to calculate the QTc or QTp by any formula, using any input (RR or heart rate, sec or msec). In addition, information such as references and DOI links, notes, equations, and study populations can be easily accessed. For technical use of the QTc library see the README.

The EP QTc app was originally intended just as a demo app for the QTc library, but it has numerous features making it useful in its own right. Use it to calculate the QTc and QTp using 33 formulas. Graph and do statistics on the results. Copy the results to spreadsheet programs. Options are available to change precision, sort the results, and apply different QTc cutoffs from the literature. The source code is on GitHub, and I hope the app will soon be on the Apple App Store.

Finale

I’m not sure who will use the EP QTc app. Maybe no one. It is certainly overkill. If you just want an occasional Bazett QTc it may not be worth it. If you want to explore this minor corner of the literature further, it may interest you. At worst, you can at least impress your friends when you tell them the QTpMRR for your patient.

Some screenshots

Main calculator screen
QTc graph
Statistics screen
QTc results screen
Details screen
QTc limits screen
QT, QTp vs heart rate
Preferences screen
QTp graph

Cutting Down on Coffee

Not coffee

This morning as I write this, there is on my desk a steaming hot cup of fake coffee. The ingredients are roasted barley, roasted malt barley, roasted chicory, and roasted rye. This is the sort of stuff people drank as a coffee substitute during wartime rationing. It smells odd. It is hot and black and looks like coffee. It tastes kind of meh: not bad, not good. It has a Depression-era vibe.

As someone whose very life energy used to be fueled by coffee, the transition from coffee to not-coffee was difficult. I drank at least 5 or 6 cups per day. When I was working as a physician I depended on it to keep going. I usually took it black, never added sugar, and completely eschewed Starbucks’ overwrought concoctions. I loved simple espresso-based drinks, particularly Americanos, but, like a true addict, any bottom-of-the-pot leftover coffee would do the trick. But then I was forced to go cold turkey.

I was having some epigastric pains. The doctor told me to cut out coffee and spicy foods (that’s another saga). So I did.

The day after I quit coffee was filled with headaches and fatigue. The next day was a little better. By the third day I felt fine.

After quitting coffee and a course of omeprazole, my stomach felt better. I also felt pretty good energy-wise sans caffeine. So I cautiously reintroduced some coffee into my life.

I don’t drink it every day. When I do drink it I limit myself to one or two cups. Afterwards I feel a distinct “high” that I hadn’t really appreciated when I was a chronic imbiber. In the past I drank coffee just to feel normal. Doubtless I had built up a tolerance to it. If I didn’t drink it I felt bad.

Now when I don’t drink it I feel normal. When I do drink it I feel a burst of energy. But I don’t need to feel that way all the time. So most of the time I am drinking a coffee substitute or an herbal tea rather than coffee. It works for me.

Your mileage may vary.

The Death of Dr. Shock

Dr. Shock
By Source (WP:NFCC#4), Fair use, https://en.wikipedia.org/w/index.php?curid=38480846

The call came from one of my attendings at night during my cardiology fellowship. It had a touch of the black humor that medical persons don’t like to admit bubbles up to the surface from time to time.

“You know Dr. Shock, the guy on TV? He’s being transferred. He’s having a big infarct and is in cardiogenic shock.”

I was at home. I quickly pulled myself together and got into my car to drive to the hospital. During the drive I reflected on the call.

Of course I knew who Dr. Shock was. He was a staple on local Philadelphia UHF television. Back in the 1960s and 70s, before cable TV with its hundreds of channels, there was just broadcast TV. In Philadelphia I still remember the channels: 3 (NBC), 6 (ABC), 10 (CBS), and 12 (PBS). However, beyond this VHF set of channels there was also UHF TV. Instead of the usual rabbit-ears antenna, these channels used a circular antenna. They also tended to be fuzzy and staticky. The shows were low budget and local, but well worth watching after school as a kid growing up in the Philadelphia suburbs. Local TV personality Wee Willie Webber introduced me to Ultraman and 8th Man on his show. Sally Starr presented Popeye cartoons and Three Stooges shorts. Dr. Shock hosted Horror Theater while prancing around in a Dracula get-up and presented old black and white monster movies. He was a funny, silly host, defusing the scariness of the movies in a tongue-in-cheek manner that later hosts, like Elvira, Mistress of the Dark, and Joel and Mike in Mystery Science Theater 3000, would come to perfect. So, yeah, I certainly knew who Dr. Shock was.

When I saw him in the hospital, I myself was shocked. This was a young looking man. Without his makeup, he didn’t at all resemble TV’s Dr. Shock. I found out his real name was Joseph Zawislak. He was just 42 years old. He was in the CCU with a big MI and low blood pressure. He shook my hand and was polite, dignified, and deferential. “Do what you can, Doc.” I had been directed by my attending to place a Swan-Ganz catheter.

This was 1979. I was a first year cardiology fellow. There wasn’t a whole lot we could do for someone in cardiogenic shock from a big myocardial infarction back then. It was the dawn of the thrombolytic and angioplasty age and those treatments were not readily available. Infarct size limitation was all the rage, using nitrates, balloon pumps, and various magic potions. Practically speaking though, a large infarct with cardiogenic shock was usually a death sentence.

So it was that poor Dr. Shock arrested that night and couldn’t be resuscitated. Now, almost 40 years later, after so many forgotten patient interactions, I still remember him and that night clearly.

George Mines and the Impermanence of Knowledge

George Mines

It was a chilly Fall morning in Montreal. A Saturday, the campus of McGill University was quiet. Students, not much different in 1914 from those of today, were sleeping off their Friday night activities. A cleaning woman entered the Physiology Laboratory to dust the glassware and wash the floors. As she turned a corner she was startled to see a young dark-haired man, sitting in a chair. She recognized Professor Mines, the handsome English scientist whom she had often seen working in the laboratory at odd hours. He appeared to be sleeping. His shirt was open and a strange apparatus was strapped to his chest. Rubber tubing stretched from this apparatus to a table filled with equipment next to him. A smoked paper drum rotated slowly. The needle of the drum was motionless, then suddenly jumped. Startled, she let out a little gasp. “Professor, Professor,” she called out. “Are you alright?” She noted he looked very pale, deathly so. She touched his hand. It was cold.

She ran to get help. The police took Professor George Mines to the hospital. There he briefly regained consciousness, but not long enough for him to explain what had happened. He died later that day. He was 29 years old. During his brief life, he used animal models to describe the physiology of reentry in the heart. He worked out the mechanism of supraventricular tachycardia in Wolff-Parkinson-White syndrome long before that syndrome was described. He used a telegraph key to deliver timed electrical shocks to rabbit hearts, inducing ventricular fibrillation, which he described without the benefit of an electrocardiogram. He was thus the first to report the existence of the ventricular vulnerable period. Despite all this amazing work, much of what he discovered was little noted at the time, until “rediscovered” by later researchers.

It seems likely that he was the first to induce arrhythmias in a human, long before the field of clinical cardiac electrophysiology. Unfortunately that human was himself, and the result was his own death.

The published papers of George Mines are fascinating to read. His equipment, primitive by today’s standards, was more than compensated for by his remarkable ingenuity and keen powers of observation and reasoning. He described the relationships between conduction velocity and refractoriness in reentry and the existence of an excitable gap, and deduced the reentrant nature of ventricular fibrillation. In one memorable experiment he cut fibrillating tissue into larger and larger loops until he was left with just one circulating wavefront. Amazing stuff! What more would he have accomplished had his life not been cut short?

Back in the days before the Internet, I used to keep photocopies of medical articles in a file cabinet (actually several large file cabinets). In those days of academia I enjoyed going to the stacks of the medical library and randomly reading articles from old bound journals, some dating back to the 19th century.  I learned a lot.  One thing I learned was that science has a problem with collective amnesia.  Discoveries are often forgotten or ignored, only to be rediscovered years later.

Nowadays everything is online. Or is it? Recently I wanted to look up Bazett’s original article on correcting the QT interval for heart rate. It was published in Heart in 1920 (Bazett HC. (1920). “An analysis of the time-relations of electrocardiograms”. Heart (7): 353–370.) These old volumes of Heart have not been digitized and are not online. Such a famous article though is surely reprinted? Indeed it is, on the Wiley Online Library site. I can get a copy of the PDF for $38. Absurd! An article from 1920 costs $38!

Here we see the bitrot of science, the impermanence of knowledge. On the one hand, modern scientific research is largely hidden behind a paywall, so that the poor (in the financial sense) reader must rely on abstracts, news reports, online sites such as Medscape, and presentations at medical meetings to keep up-to-date, instead of a careful reading of research methods and results. On the other hand, our precious scientific heritage, the published papers of previous generations, remains largely undigitized, residing in the dusty stacks of libraries, increasingly ignored by newer generations to whom nothing matters if it is not online. There are some exceptions. The Journal of Physiology has digitized all of its content back to Volume 1 from 1878. But most publishers haven’t bothered doing this.

At least half of early films have been lost. Early TV archives, like those of Doctor Who, were routinely destroyed or taped over, resulting in the permanent loss of those shows. The situation is not so dire with old scientific research. The libraries will remain for a long time, and paper has a good half-life. But the beautiful work of George Mines and those like him, the true pioneers of medicine, will remain largely obscure to future generations unless that work is available online.

Perhaps some portion of the $38 for a PDF copy of a 1920 article could go to that cause.

The Smartphone is an Essential Medical Instrument

The storage capacity of the human mind is amazing. One estimate of the size of the brain’s “RAM” is as high as 2.5 petabytes (2.5 million gigabytes). The number is based on the total number of neurons in the brain and the total number of possible connections per neuron. I suspect it is an overestimate, given the vagaries and innate inefficiency of biological systems. Nevertheless the true figure is undoubtedly impressive. But not infinite.

There are well-documented feats of human memory and calculating prowess. Ancient Greeks could memorize and recite the epic poems of Homer. Indeed this was how the Iliad and the Odyssey were passed down for generations before the Greeks acquired writing. Savants can quickly compute cube roots of long integers or recite pi to over 20,000 decimal places. Musical prodigies like Mozart or geniuses like Einstein impress us with the capabilities of their brains. Yet for the average person who has trouble memorizing a shopping list, these stellar examples of mental fortitude provide little solace. The old myth that we are only using 10% of our brain capacity has been debunked. So unless you’re willing to believe the combination kelp-Ginkgo-biloba-blueberry supplement you heard about on the radio is really going to work, you are pretty well stuck with the brain and memory capacity you have right now. At least until things get worse as you get older.

While the brain’s capacity may increase due to evolutionary forces over the next few thousand years (or not, see the movie Idiocracy), the amount of information that it is required to hold is not constrained by such a slow process. According to one source, there are now over 50 million scientific publications, with about 2.5 million new articles published each year. There is a 4-5% increase in the number of publishing scientists per year. No one can absorb all this. The days of the “Renaissance Man” who could quote Bulwer-Lytton while relating the latest experimental data from Maxwell and then play a Bach fugue while giving a dissertation on Baroque counterpoint are long gone. So what’s a 21st century scientist (or physician) to do?

One thing we should not do is to attempt to memorize everything. It is important to off-load as much information from our brains as possible. Our brains need to be more like an index than a database. We need to know what information we are looking for and where to find it. Information that we use all the time is automatically memorized and we don’t have to look it up. But a lot of information that we don’t use frequently is better off external to our brains. As long as it is easily retrievable, it will be available. Better to look something up that we are unsure about, such as a drug dose, than hazard a guess and be wrong.

Fortunately we live in an era when we can implement this strategy very easily. We carry smartphones that are constantly connected to the Internet. All the data we need is at our fingertips and incredibly easy to look up. Similarly we can store data on these devices for later retrieval. This constant availability of information makes life easier for doctors and undoubtedly makes for better patient care because of decreased mistakes due to memory errors.

There are those who would argue that relying on these devices is a crutch, and any good doctor wouldn’t need them. What would happen if a doctor’s plane crash landed on some remote island, where there were no charging ports? How could that doctor function?

I think it’s time to put aside such nay-saying and embrace our digital assistants. These devices are our tools, as essential to modern medicine as ultrasounds, blood tests, and MRI scanners. Take away any of these tools, and doctors will be limited in what they can do. We should be proud of the impressive technology that allows us to carry powerful computers in our pockets, and we shouldn’t be ashamed to use them.

Notwithstanding the above, medical board certification is still old-school, rooted in that outmoded 19th century Renaissance Man philosophy that doctors should hold everything in their heads. Certainly some medical board questions are practical and test things all doctors should know. But thrown into the mix are a lot of obscure questions about obscure facts that may be difficult to regurgitate during a testing session, but would be easy to look up online in a few seconds in a real-world setting. So, do these tests actually test one’s abilities as a real-world practicing doctor armed with modern information technology or are they just a particularly arcane version of Trivial Pursuit?

I’ll leave the answer to this question as an exercise for the reader.

EHR Copy and Paste Considered Harmful

DRY principle – Don’t Repeat Yourself

How bad are Electronic Health Record (EHR) programs? Let me count the ways. Rather, let me not, as I and many other folks have already done so. Even non-tech savvy doctors (of which there are fewer and fewer) realize something is wrong when they compare their experience using an EHR with virtually every other computer program they come across, such as the apps on their phones. As the click counts required to do simple tasks mount up and repetitive stress injury of the hand sets in, even the most sanguine of medical personnel will eventually realize that something is not quite right. And as EHR companies forbid sharing of screenshots of their user interfaces, you’ll just have to take my word for it that these UIs are, let us say, quaint. Hey EHRs, the 90s called and want their user interfaces back.

In this post I’ll point out just one of the many problems with EHRs: EHRs violate the DRY principle.  The acronym DRY is familiar to computer programmers, but not to most medical people. DRY stands for “Don’t Repeat Yourself.” In computer programming it means don’t write the same code in two or more different places. Code duplication is what some programmers refer to as a code “smell.” There is no reason to duplicate code in a computer program. A single block of code can be called from multiple procedures.  There is no reason for each procedure to have its own copy of this code block.   Code duplication leads to code bloat and code rot, where two procedures supposed to do the same thing get out of sync with each other because of changes in one copy of the duplicated code and not in the other.
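For the non-programmers, here is a toy example of DRY in practice (the names and the clinical context are invented for illustration): the calculation lives in one shared helper, and every caller uses it instead of keeping its own copy of the math:

```python
def bmi(weight_kg, height_m):
    """One definition of the calculation, used everywhere."""
    return weight_kg / height_m ** 2

# Two different "procedures" call the single helper. If the formula
# ever needs fixing, it gets fixed in exactly one place, and the two
# callers can never drift out of sync with each other.
def screening_note(weight_kg, height_m):
    return "BMI %.1f" % bmi(weight_kg, height_m)

def discharge_summary(weight_kg, height_m):
    return "Discharge BMI %.1f" % bmi(weight_kg, height_m)

print(screening_note(70, 1.75))  # BMI 22.9
```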

Applying the DRY principle to a database requires that every item of data has a single location in the database. Multiple copies of the same data increase the size of the database and invariably cause confusion. Which copy is the original? Which copy is the true copy when copies disagree?

An EHR program is at root a gigantic database. Ideally Patient Smith’s X-ray report from 1/1/2017 is filed away properly in the database and easily retrieved. Same with his blood work, MRI results, etc., etc.

Enter Copy and Paste.

Copy and Paste is evil. Unlike Cut and Paste, Copy and Paste’s close cousin that moves data around without duplication, Copy and Paste is bad, lazy, and sloppy.  Copy and Paste needlessly duplicates data. Copy and Paste violates DRY.

EHR notes are rife with Copy and Paste. X-ray reports are copied and pasted. Blood work too. Even whole notes can be copied and pasted. It is easy to copy and paste a prior progress note and then make a few changes to make it look like it wasn’t copied and pasted. Everyone does it.

Many EHR progress notes fall just short of novel length. Whole cath reports, MRI results, other doctors’ notes, kitchen sinks, and other potpourri are thrown in for good measure. Usually with a bit of skillful detective work one can determine the minor fraction of the note that is original. Usually it is the last line. Something like: “Continue current plans.” These could be the only words actually typed on the keyboard. Everything else is just copied and pasted.

So you get all the downsides of violating DRY: bloated notes, duplication of data, possible inaccuracies, and synchronization problems. The X-ray report may be revised by the radiologist after it is copied and pasted into the note. Nevertheless the unrevised report persists forever, sitting as a big blob of text in the middle of a now inaccurate note. Of course there is some consolation in the fact that no one will ever read the whole note anyway, with the possible exception of a malpractice lawyer.

Why is Copy and Paste so prevalent in EHR notes? It isn’t just laziness. Like the pulp fiction writers of the 30s, doctors are effectively paid by the word, so that the longer the note the better. Longer notes reflect higher levels of care, more intricate thought processes, more — wait a minute! No they don’t. Longer notes reflect mastery of Copy and Paste, something that’s not too difficult to master. Even non-tech docs seem to have no trouble with it. Long notes are a way to justify billing for a higher level of care, i.e. more dollars. Since the Powers That Be Who Control All of Medicine (i.e. not doctors) decided that billing would not be based on what doctors do, but on what doctors write in the chart, it doesn’t take a crystal ball to predict that note bloat, electronically enhanced, would be the inevitable outcome of such a stupid policy.

What are the alternatives to Copy and Paste? The best is the use of hyperlinks, something that you might be familiar with if you ever use something called the World Wide Web. If I want to put a YouTube video on my blog, I don’t copy the video and paste it here, I just provide a link. Similarly, if you want to refer to an X-ray report in a progress note it should be possible to just provide a link to it. Something short and sweet.
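To make the analogy concrete, here is a toy sketch (hypothetical data and function names, not any real EHR’s design) of a note that stores a link to a report by ID instead of pasting in its text:

```python
# Reports live in one place, keyed by ID -- the single source of truth.
reports = {
    "xray-2017-001": "CXR 1/1/2017: No acute disease.",
}

# A note stores only a link (the report ID), never a copy of the text.
note = {
    "text": "Continue current plans.",
    "links": ["xray-2017-001"],
}

def render(note):
    """Resolve links at read time, so revisions show up automatically."""
    body = [note["text"]] + [reports[rid] for rid in note["links"]]
    return "\n".join(body)

print(render(note))

# If the radiologist revises the report, every note that links to it
# reflects the revision -- there is no stale pasted copy anywhere.
reports["xray-2017-001"] = "CXR 1/1/2017 (revised): Small left effusion."
print(render(note))
```

Because the note holds a reference rather than a copy, the revised report is what every reader sees from then on.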

Of course the example note I referred to above would be reduced in length to just a few links and the sentence “Continue current plans.” This will hardly satisfy the coders and billing agents and whoever else is snooping around the EHR trying to find ways not to pay anyone (i.e. insurance companies). Nevertheless these shorter notes would be much easier to digest and might even encourage a doctor to elaborate a bit more, in his or her own words, on the history, physical, diagnosis, and plans. Unlinking billing and documentation would go a long way towards making EHR notes more manageable and informative. No one seems too keen on doing this, however. Documentation as a proxy for care is just one of many broken pillars of the Byzantine edifice known as the American Health Care System.

[note: the title refers to a famous (in computer circles) 1968 letter by Edsger Dijkstra entitled “Go To Statement Considered Harmful.” It has inspired tons of computer articles with similar titles, including this one.]

Do No Harm

Cardiac neuroses are often iatrogenic in origin. A well-meaning but careless comment by a physician can change a person’s sense of well-being in an instant. The effect can be permanent and devastating. Many clinicians who complain about overly anxious patients don’t appreciate their own role in the genesis of this problem. Our words matter. They can reverse the good we do with our medications and procedures.

If you are a heart rhythm doctor, the scenario is familiar. Your patient (we’ll assume a male for the sake of pronoun economy) has premature ventricular complexes (PVCs). Not a lot of them, but he feels every one. They are intolerable. There is no underlying structural heart disease. These are benign PVCs. The treatment options are not good. Drugs have side-effects that range from annoying to life-threatening pro-arrhythmia. Catheter ablation offers the possibility of “cure,” but is not a sure thing and has its own set of risks. The PVCs aren’t very frequent and perhaps will disappear with sedation during the procedure. Even if they don’t and they can be mapped, how far should they be pursued? What if they are epicardial in origin? Should we really consider placing a catheter directly into the pericardial sac and ablate near a coronary artery to treat benign PVCs?

Reassurance is the best treatment. You tell the patient that these PVCs are benign. You say that many people have PVCs, even more frequent than the patient has, and that most people aren’t even aware they have them. You tell your patient that there is no underlying heart disease, that these PVCs will not shorten his life, and that the treatments are likely to have side-effects or unwarranted risks. But it doesn’t matter to the patient. His palpitations are incapacitating. He can’t do his job when they come on. He has read a lot about PVCs and has seen several doctors before coming to you, the arrhythmia expert. He wants something done.

You stall. You ask the patient to try a different beta-blocker than the ones he has tried already that haven’t worked. You say you need to get some of his medical records from his other doctors. You want to review his Holter monitors. You need to make sure there is only one PVC focus if ablation is being considered as a treatment option. Mostly you are uncomfortable recommending an aggressive approach and want to put off making a decision.

Six weeks later the patient is back in your office. The new beta-blocker didn’t work. Surprise, surprise. He has read the information you gave him about ablation and wants to try it. He is desperate. He is willing to take the risk.

You look at the patient. He is in his mid 30s. He is an executive, type-A personality. You have seen his type before. But you are curious about something.

“When was the first time you found out about your PVCs?” you ask.

The story comes out. It was about 5 years ago. One of his friends at work had gone and gotten an “executive physical” that was being offered by one of the cardiology groups in town. It was a nice deal. There was a physical exam, they checked your cholesterol, and you ran on a treadmill for a few minutes. Afterwards there was orange juice and bagels. So he signed up for it.

During the treadmill test the technician seemed a little nervous. Before the test got very far, the technician stopped it. You have an irregular heart beat, he was informed.

This was news to your patient, who had always assumed his heart was just fine. But the technician told him that he should refrain from any strenuous activity and needed to see one of their specialists about the irregular heart beat. In the meantime, a 24 hour Holter monitor was put on and he was sent home.

The monitor was turned in the next day, and he waited nervously for the result. That night, he was awakened from sleep by a phone call. The doctor on-call had gotten a call from the monitoring service. The Holter monitor had shown a critical result. During sleep, your patient had had 3 PVCs in a row. The monitoring service deemed this ventricular tachycardia and dutifully informed the on-call doctor of this “critical” result. The doctor was obliged to call the patient, whom he didn’t know. Not knowing if this was a patient with end-stage cardiomyopathy and ejection fraction of 10% or someone with a perfectly normal heart, the doctor on-call felt it was the better part of valor to assume the worst.

“You are having runs of ventricular tachycardia on your monitor,” he told your patient. “This is a life-threatening emergency. Your heart could stop and you could have cardiac arrest. You need to call 911 and get to the hospital ASAP.”

After hanging up, the on-call doctor rolled over in his bed and went back to sleep, knowing he had done his job, making sure a patient with a potentially life-threatening problem would take it seriously and get to the emergency room. But for your patient, life had changed forever. Even after a full workup that showed no structural heart disease, he couldn’t get it out of his head that his heart rhythm was unsteady. His heart was unreliable. He could die at any time. He had never paid attention to his heart beat before, but now he could feel the irregularity, the strong beats that told him he was having more PVCs. They were driving him crazy. Crazy to the point that he would consider having a doctor insert a catheter into his heart and burn away some of his heart muscle to get rid of them.

This story is not an exaggeration. I have seen something like this happen many times, with patients who have generally benign conditions like PVCs or supraventricular tachycardia, or somewhat more serious problems like atrial fibrillation. Patients with heart conditions are worried that what they have will kill them. They know about heart attacks and cardiac arrest, but they are not as well-informed about lesser cardiac conditions that are not life-threatening. Apparently some doctors are equally poorly informed, or just think they are doing their duty by scaring the hell out of patients in order to get them to do their bidding, whether it is to go to the emergency room or take some medicine or do some procedure. The problem is magnified by the disappearance of long-term patient-physician relationships. Patients are at the mercy of the on-call schedule, and rarely get good advice when they are called with the result of some lab test in the middle of the night by a doctor who doesn’t know them.

What to do? Be careful what you say to patients, especially those you don’t know well. Think about how you would react if you were told the same thing. Don’t use your authority as a physician to bully a patient into doing what you think is “the right thing.”

Choose your words carefully.

Thoughts on Mark Josephson

I’m sure there will be plenty of tributes to Dr. Josephson in the next few days from his colleagues who knew him well and those who didn’t know him personally but learned so much from his books and articles. I fall somewhere in the middle. I wasn’t one of his students at Penn who learned from him directly. I did meet him several times. I did work for years at the University of Colorado with Alden Harken, the surgeon with whom Dr. Josephson developed the “Pennsylvania Peel” — endocardial resection, the first surgical treatment for ventricular tachycardia. Oh, and I did live in the same apartment Mark used to live in during my cardiology fellowship in Philadelphia in the 1970s. More on that later.

Mark Josephson may represent somewhat of a dying breed in academia. In the great academic triad of clinical care, research, and teaching, the last element, teaching, which makes the least money for institutions, is emphasized less and less. Dr. Josephson excelled as a teacher. A lucky few were able to experience his teaching skills first-hand. A far greater number learned from his writing, in particular from his magnum opus, Clinical Cardiac Electrophysiology. Originally a relatively small but densely written book in a red binding, subsequent editions were more massive, filled with page after page of painstakingly labeled intracardiac recordings and clear-cut explanations of obscure electrophysiologic phenomena. I cut my teeth on this book, reading the original through when I was an EP fellow in Houston, and then reading the 2nd edition straight through when preparing for my first EP boards.

The book was important because it set a standard for analysis of intracardiac recordings that inspired subsequent researchers and students of the field. Back in the 70s and 80s, the mechanisms for most major arrhythmias (with the exception perhaps of atrial fibrillation) were worked out solely by analysis of intracardiac recordings and a few pacing techniques. Mark Josephson was instrumental in this process. Back then, working out arrhythmia mechanisms was the important thing. Therapies for ventricular tachycardia were drugs like quinidine or procainamide, and EP-guided drug therapy was, in retrospect, a pseudoscience. Yet working out the mechanisms of WPW syndrome, supraventricular tachycardia, and ventricular tachycardia eventually led to effective ablation and device therapy in the 1990s and beyond.

Dr. Josephson, along with a cadre of first-generation EP superstars trained by Dr. Anthony Damato (the “godfather” of EP) at the Staten Island Public Health Hospital, set a standard for teaching in the field of electrophysiology that was often emulated, but never matched. Moreover, he wrote a number of incisive editorials over the years in an attempt to keep the field rooted in its scientific basis, rather than be swept away by the insidious influence of industry or the idea that it wasn’t necessary to understand the pathophysiology of an arrhythmia if you were just going to burn it away.

As mentioned above, I was lucky enough to meet him on a few occasions and to round with him. By coincidence we discovered that the apartment on Henry Avenue in Philadelphia where I lived when I was a fellow was the exact same apartment he had lived in several years before. He remembered well the old guy who lived one floor above us, a fellow by the name of Sullivan, nicknamed “Sully.” I was just a plain cardiology fellow when I lived there, only subsequently deciding to go into EP and move to Houston for training. I always wondered if I picked up some kind of EP karma from living there. Who knows?

The advances in diagnosis and treatment of arrhythmias that have occurred since the 1970s are extraordinary, and uncounted numbers of people have benefited from these advances.  It seems a shame that most lay people, saddened at the loss of actors, musicians, sports heroes, and other celebrities, have no knowledge whatsoever of the passing of people who have actually had much more impact on their lives, like Dr. Josephson.  So it’s up to us, his colleagues, to remember Mark Josephson and give thanks for his incredible contributions to medicine and the world.

A Tale of Two Histories

Compare the following two versions of the same medical history:

Version 1

CC: chest pain
Mr. Smith is a 57 y/o white man who comes into the office today for the first time with a complaint of chest pain. He states he has been in generally good health in the past, though he has smoked about 40 pack-years and admits to not exercising much, other than occasional games of golf. He has trouble keeping his weight down. He has been a middle-level manager for many years, but about a month ago changed jobs and took a pay cut. He says this has been quite stressful. He has changed jobs before, but states “I’m getting too old to keep doing this.” About 2 weeks ago he started noting some mild heaviness in his chest, lasting up to 5 or 10 minutes. He attributed this at first to eating heavy meals at dinner, but now thinks it occurred after climbing stairs following meals. He took some Tums, but was not sure if the pain eased from this or just from resting. These episodes of discomfort were localized to his anterior chest, without radiation or other associated symptoms at first. Over the last 2 weeks he thought that they were getting a little more frequent, occurring up to twice a day. Two days before this visit, he had an episode of more intense pain that woke him up from sleep at night. This episode lasted about 15 minutes and was associated with diaphoresis. “My pillow was soaking wet.” He woke up his wife who wanted to call 911, but he refused, though he agreed to make this appointment somewhat reluctantly. He has had no further episodes of chest pain, and feels that he is here just to satisfy his wife at this point. He generally doesn’t like to come to the doctor. He doesn’t know his recent lipid levels, though he says a doctor once told him to watch his cholesterol. His BP has been high occasionally in the past, but he attributes it to white coat syndrome: His BP is always normal when he uses an automatic cuff at the store, he claims. He is on no BP or lipid-lowering meds.  
He takes a baby aspirin “most days.” His parents are deceased: his mother had cancer, but his father died suddenly in his 40s, probably from a heart attack, he thinks.

Version 2
  • Mr. Smith
  • CC: chest pain
  • Age: 57 y/o Sex: M Race: Caucasian
  • Onset: 1 month
  • Frequency: > daily [X] weekly [ ] monthly [ ]
  • Location: Anterior chest [X] Left precordium [ ] Left arm [ ] Other [ ]
  • Radiation: Jaw [ ] Neck [ ] Back [ ] Left arm [ ] Right arm [ ] Other [ ]
  • Pattern: Stable [ ] Unstable [X] Crescendo [X] Rest [X] With exertion [X]
  • Duration: < 15 min [X] 15 min or more [X]
  • Risk factors: Tobacco [X] Family history CAD [X] HTN [?] DM [ ] Hyperlipidemia [?]
  • Relief: Rest [?] Medications [?] Other [ ]
  • Associated symptoms:  N, V [ ] Diaphoresis [X] Dizziness [ ] Other [ ]
Which is better?

Version 1 is an old-fashioned narrative medical history, the only kind of medical history that existed before the onset of Electronic Health Record (EHR) systems.  This particular one is perhaps chattier than average.  It is certainly not great literature or particularly riveting, but it gets the job done.  Version 2 is the kind of history that is available on EHR systems, though usually entry of a Version 1 type history is still possible albeit discouraged.  With an EHR, entering a long narrative history requires either a fast, skilled physician typist, or a transcriptionist — either human (frowned upon due to cost) or artificial, such as Dragon Dictation software.  This latter beast requires careful training and is frustratingly error-fraught, at least in my experience.  The Version 2 example is not completely realistic.  In practice there are more check boxes, more pull-down lists and other data entry fields than can be shown here.  But you get the idea.

Version 2 seems to have a higher signal to noise ratio than Version 1.  It’s just Version 1 boiled down to its bare essentials, stripped of unnecessary verbs, conjunctions, prepositions, and other useless syntax.  It contains everything a medical coder, a medical administrator, or a computer algorithm needs to do his, her, or its job.  It has taken the medical history, the patient’s story, and put it into database form.

But Version 1 is not just Version 2 embellished with a bunch of fluff. Certainly Version 1 is more memorable than Version 2. There is a chance the physician who wrote Version 1 will remember Mr. Smith when he comes back to the office for a follow-up visit: Mr. Smith, that middle-aged fellow who was stressed out when he took a pay cut while starting a new job and started getting chest pain. Another physician meeting Mr. Smith for the first time might, after reading this history, modify his tactics in dealing with Mr. Smith. One gets the impression that Mr. Smith is skeptical of doctors and a bit of a denier. Maybe it will be necessary to spend more time with him than average to explain the need for a procedure. Maybe it would be good to tell his long-suffering wife that she did the right thing insisting that he come in to the doctor. All this subtlety is lost in Version 2.

There are some cases where Version 2 might be preferable.  In an Emergency Department, where rapidity of diagnosis and treatment is the top priority, a series of check boxes saves time and may be all that is needed to expedite a patient evaluation.  But for doctors who follow patients longitudinally, Version 1 is more useful.  A patient’s history is his story: it is dynamic, organic, personal, individual.  No two patient histories are identical or interchangeable.  Each history has a one-to-one correspondence with a unique person.  A good narrative history is an important mnemonic aid to a physician.   A computer screen full of check boxes is no substitute.

While the Version 2 history was designed for administrators, coders, billers, regulators, insurance agents, and the government, the Version 1 history was designed by doctors for doctors.  We should be wary of abandoning it, despite the technical challenge of its implementation in EHR systems.


Massive Heart Attacks

Google Ngram of the phrase “massive heart attack”

Carrie Fisher’s sad, premature death is an occasion to reflect upon the poor job the news media does in reporting medical news. The initial report from TMZ had the headline “Carrie Fisher Massive Heart Attack on Plane.” If one equates “heart attack” to the more precise medical term “myocardial infarction,” as is usually done, then this is certainly diagnostic overreach on the part of TMZ. From their report it appears that Fisher suffered a cardiac arrest; indeed that term is used in the body of the article. So why not use that term in their headline? Perhaps massive heart attack sounds more dramatic. The word “massive” seems to go naturally with “heart attack.” Try to think of other phrases in which massive fits so well. Massive hack? Massive debt, perhaps? Few phrases roll off the tongue as well as “massive heart attack.” But most of the time when used by the media this phrase is not at all accurate.  Rather it is a catch-all term to indicate something serious related to the heart has occurred.

Of course we don’t know exactly what happened to Carrie Fisher, nor is it any of our business, but none of the information available indicates that she had a large myocardial infarction as opposed to a primary arrhythmic event like ventricular fibrillation or ventricular tachycardia. As a cardiologist having seen this sort of event a depressingly large number of times it is possible to speculate on what happened.  She likely suffered a cardiac arrest related to an abnormal heart rhythm starting suddenly in the heart’s ventricles.  Lay persons and the media often refer to this as the heart “stopping.”  While the pumping of the heart stops or is reduced, in actuality the heart is beating very fast or in a disorganized fashion to the point where it can’t effectively pump blood.   Without rapid correction using an electrical defibrillator this leads to sudden death.

In Carrie Fisher’s case CPR was administered while the plane was still in flight. It is unclear how much time elapsed between the onset of the cardiac arrest and the administration of CPR. It is difficult to tell from the reports whether an AED was used on the plane or defibrillation was attempted only after the plane landed. We know she never regained consciousness and most likely suffered brain death due to prolonged interruption of her circulation.

Carrie Fisher was a cigarette smoker and used cocaine, at least during her Star Wars days.  Could heart disease caused by smoking and drug use have contributed to her sudden death? Could more recent use of drugs like cocaine have been a factor? We don’t know, but if the family deems it fitting that the circumstances of her death be made public, it might help educate the public and the news media on some of the nuances of heart disease and the difference between a “massive heart attack” and a cardiac arrest.

Finally, it is interesting to examine some of this lay cardiac terminology using Google Ngrams. The Google Ngram site is a search engine that can be used to look up the frequency of words or phrases in thousands of books published over many years. It can help establish when certain phrases like “heart attack” or “cardiac arrest” were first used and when they became popular. The Ngram at the top of this post, of the phrase “massive heart attack,” shows the rise in popularity of this phrase over the last 50 years. The Ngram below compares the terms “heart attack”, “myocardial infarction”, “sudden death”, and “cardiac arrest.” It is interesting that “sudden death” is a term that has been used without much change in frequency since the year 1800. “Myocardial infarction” and “cardiac arrest” both entered the literature around 1930-1940. “Heart attack” dates back to around 1920, but has become more and more popular, while the medical term, “myocardial infarction,” seems to be used less recently. Curiously, although the phrase “heart attack” has been around since the 1920s, it is only since 1960 that the phrase “massive heart attack” has become popular. One wonders why. These kinds of results are open to all kinds of interpretation: I’ll leave that to the reader as an exercise. But I encourage you to try Ngrams out yourself, on any subject that interests you. The results are often fascinating.

Google Ngram of other heart attack related phrases