Retrofitting Material Design to Pre-Lollipop Android

Android 5.0 Lollipop comes with a complete makeover of the Android user interface. Called Material Design, the new UI replaces the Holo Light and Dark themes used since Android 4.0. Continuing a trend that started with Microsoft and the flat tiles of its Modern UI, later adopted by Apple with iOS 7 and 8, Material Design flattens out the elements of the UI, though not completely. Instead the UI creates the illusion of pieces of paper sliding over surfaces within the device, using subtle tinting and shadows. Material Design is more than this: it includes brightly colored elements, logical transitions and other features.  Google is attempting to develop precise guidelines for UI design that can extend from small phones to desktop computer screens.  The philosophy of Material Design is laid out here. The full panoply of Material Design is only available on the just-released Nexus 6 and Nexus 9 devices, though it should come to other devices eventually (soon?). Nevertheless, Google has provided a glimpse of Material Design over the last several months by releasing revamped versions of their standard apps like Gmail and Newsstand with Material Design themes.  Now it is finally possible for developers to create versions of their own apps using some, but not all, features of Material Design. Google now supplies the appcompat-v7:21 support library to accomplish this.

I decided to upgrade my app EP Coding to Material Design. The app is based on the master-detail template included with the Android SDK, and uses an ActionBar (the menu bar with icons and text at the top of the screen). If you are designing apps only for Android 5.0 and higher, or if you don’t care about using Material Design in older versions of Android, you can continue to use the ActionBar by just having your theme in the res/values-v21 directory derive from one of the new Material Design themes, such as Theme.Material.Light.DarkActionBar. However, if you want to use a Material Design theme on pre-Lollipop Android, you can’t use the built-in ActionBar. Instead you must use the new Toolbar widget that is included in the AppCompat library. The Google docs proclaim how much more flexible the Toolbar is compared with the old ActionBar. Unfortunately, with flexibility comes extra work.

It is necessary to include the Toolbar in each layout in which you want to use it as an ActionBar. The master-detail template used in my app makes heavy use of fragments, so it was necessary to make some major changes to my layouts in order to use the Toolbar.

The first step is to derive your app theme from a non-ActionBar Material Design theme. You don’t want your Toolbar colliding with the old ActionBar. Thus in themes.xml:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Application theme. -->
    <style name="AppTheme" parent="AppTheme.Base">
    </style>

    <style name="AppTheme.Base"
        parent="Theme.AppCompat.Light.NoActionBar">
        <item name="colorPrimary">@color/primary</item>
        <item name="colorPrimaryDark">@color/primary_dark</item>
        <item name="colorAccent">@android:color/black</item>
        <item name="android:windowNoTitle">true</item>
        <item name="windowActionBar">false</item>
    </style>
</resources>

Note that here we also define some colors for the Toolbar. colorPrimary is the color of the Toolbar itself, and colorPrimaryDark is the color of the status bar above the Toolbar (the bar with notifications, the time, signal bars, etc.). The status bar is actually only colored on Android 5.0; it remains black on pre-5.0 Android. colorAccent is used to highlight text and checked checkboxes and radio buttons (but see further below).

I defined the Toolbar in a separate file (toolbar.xml), and then included it in the layout files when needed. The Toolbar:

<?xml version="1.0" encoding="utf-8"?>
<android.support.v7.widget.Toolbar
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/toolbar"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="?attr/colorPrimary"
    android:elevation="4dp"
    app:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
    app:popupTheme="@style/ThemeOverlay.AppCompat.Light"
    tools:ignore="UnusedAttribute" />

I wanted the equivalent of Holo.Light.DarkActionBar, so the Toolbar uses the ThemeOverlay.AppCompat.Dark.ActionBar theme, which makes the text and icons on the Toolbar white instead of black. The app:popupTheme ensures that the overflow menu remains light, with dark text on a light background. It seems redundant, but the android:background has to be set to colorPrimary here, despite that color being defined in the theme. Finally, I set an elevation for the Toolbar of 4dp, which is what the Material Design guidelines suggest for ActionBars/Toolbars. This casts a small shadow below the Toolbar, giving it a 3D look. Unfortunately this attribute only takes effect on Android 5.0 and is ignored otherwise.
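If you prefer to set the elevation from code, the support library provides a compatibility wrapper that is safe to call on any API level; like the XML attribute, it does nothing before API 21. A minimal sketch, assuming it runs in the Activity’s onCreate() after setContentView() (the Activity code appears further below):

// Set the Toolbar elevation in code instead of XML, using
// android.support.v4.view.ViewCompat. This is safe to call on
// any API level but, like android:elevation, is a no-op before API 21.
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
float elevationPx = 4 * getResources().getDisplayMetrics().density; // 4dp in pixels
ViewCompat.setElevation(toolbar, elevationPx);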

In the master-detail template in my app there are two main components: a list view of procedures, used to select an individual procedure, and a detail view, which shows the billing codes for the selected procedure. On a phone the list appears full screen and is replaced by the detail screen when a procedure is selected. On a tablet, the list appears along the left border, and the details appear on the right side of the screen. The procedure list and procedure details are therefore not implemented as Activities; rather, they are Fragments that can either fill an Activity by themselves (for phones) or be combined as two views in the same Activity (for tablets). The original procedure list fragment looked like this:

<?xml version="1.0" encoding="utf-8"?>
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/procedure_list"
    android:name="org.epstudios.epcoding.ProcedureListFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_marginRight="16dip"
    android:layout_marginLeft="16dip"
    tools:context=".ProcedureListActivity"
    tools:layout="@android:layout/list_content">
</fragment>

In order to use the Toolbar, it is necessary to turn this into a LinearLayout containing the Toolbar and the Fragment. The list Activity can then use this entire layout (on a phone’s screen), while the two-pane tablet layout includes just the Fragment. Note that the Toolbar is included using the <include> tag.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_height="match_parent"
    android:layout_width="match_parent"
    android:orientation="vertical" >

    <include layout="@layout/toolbar" />

    <fragment
        android:id="@+id/procedure_list"
        android:name="org.epstudios.epcoding.ProcedureListFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginRight="16dip"
        android:layout_marginLeft="16dip"
        tools:context=".ProcedureListActivity"
        tools:layout="@android:layout/list_content">
    </fragment>

</LinearLayout>

In order to use the Toolbar, the Activity class has to extend ActionBarActivity (which is a subclass of FragmentActivity). You set the Toolbar as the ActionBar with code like the following:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_procedure_list);
    // Find the Toolbar in the inflated layout and use it as the ActionBar.
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);
}

After this, the Toolbar will behave like the ActionBar.
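Once setSupportActionBar() has been called, the usual ActionBar manipulations go through getSupportActionBar(). A minimal sketch of enabling the Up button, as the master-detail template does in the detail Activity (ActionBar here is the support class android.support.v7.app.ActionBar):

// getSupportActionBar() returns the support ActionBar wrapping the Toolbar.
ActionBar actionBar = getSupportActionBar();
if (actionBar != null) {
    // Show the Up arrow, e.g. in the detail activity.
    actionBar.setDisplayHomeAsUpEnabled(true);
}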

There are some other nuances, however. I had trouble getting my SearchView to work. This was corrected by referencing the support SearchView in my menu.xml file. Note also that you must define an app namespace, and use this namespace, not the android namespace, for defining actionViewClass and showAsAction.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/search"
        app:actionViewClass="android.support.v7.widget.SearchView"
        android:icon="@drawable/ic_search_white_24dp"
        android:orderInCategory="10"
        app:showAsAction="ifRoom|collapseActionView"
        android:title="@string/search_title">
    </item>
    <item
        android:id="@+id/wizard"
        android:icon="@drawable/ic_directions_white_24dp"
        android:orderInCategory="90"
        app:showAsAction="ifRoom"
        android:title="@string/wizard_title">
    </item>
    ...
</menu>
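For reference, here is roughly how the support SearchView gets wired up in the Activity (a sketch, not my exact code; the menu resource name is just an example, and it assumes a searchable configuration already exists). Note that with the support library you fetch the action view with MenuItemCompat.getActionView() from support-v4 rather than MenuItem.getActionView():

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.procedure_list_menu, menu); // example menu resource
    MenuItem searchItem = menu.findItem(R.id.search);
    // Use MenuItemCompat with the support action bar, not MenuItem.getActionView().
    SearchView searchView = (SearchView) MenuItemCompat.getActionView(searchItem);
    SearchManager searchManager = (SearchManager) getSystemService(Context.SEARCH_SERVICE);
    searchView.setSearchableInfo(searchManager.getSearchableInfo(getComponentName()));
    return true;
}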

I also had trouble implementing my PreferencesActivity. Rather than go into detail, this post on Stack Overflow shows how it is done. You can also check out the source code of my app at GitHub.

The old icons look clunky with Material Design. Google has released a complete set of standard Material Design icons, including ActionBar icons, which are very useful.

Finally, not everything is perfect with backporting Material Design to “old” Android. I had one particularly vexing problem with the tinting of checkboxes. The AppCompat support library is supposed to use the defined colorAccent to color checkboxes when checked. However, most of my checkboxes were being colored black instead of the light blue of my colorAccent. Worse, the checkboxes in my Preferences Activity were being colored blue, as they were supposed to be. Worse still, when I went from the Preferences Activity back to one of my detail screens, some of the checkboxes would be tinted blue and some remained black. For example:

(Screenshot: checkboxes on the same screen inconsistently tinted, some blue and some black.)

It turns out that AppCompat tints the checkboxes after they are drawn, but only checkboxes that are in a layout. The checkboxes on my screen were all created programmatically, so they were not tinted. But when I went to the Preferences Activity, Android must have saved the layout of the previous screen behind the scenes. When that screen returned, AppCompat was able to tint the checkboxes, though any checkboxes checked after that reverted to black. (At least that’s my theory as to what’s happening.) Ugh! Hopefully an update will fix this issue. For now, I set my colorAccent to black to avoid having multicolored checkboxes in pre-5.0 Android, and set colorAccent to blue for Android 5.0.
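For illustration, the detail-screen checkboxes are created along these lines (a simplified sketch with hypothetical variable names, not the actual app code):

// Views created in code bypass AppCompat's layout inflation,
// so on pre-5.0 devices the colorAccent tint is never applied to them.
CheckBox checkBox = new CheckBox(getActivity());
checkBox.setText(codeDescription); // hypothetical label text
checkBoxLayout.addView(checkBox);  // hypothetical container layout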

Wrap-up

So the end result is something approaching Material Design on current Android devices (I’ve yet to see Lollipop up and running on a real device). It is not easy converting from the ActionBar motif to Toolbars. Android documentation could be better — a lot better! Many of the Android online documents still retain instructions pertinent to the old Holo themes, which is confusing. There are a lot of subtleties and poorly documented techniques that make it difficult to realize Google’s goal of “Material Design Everywhere.” But at least I’ve got the transition done on one of my apps. Now on to the next!

Weekdays with Maury

The Maury Povich Show

When I was working I never watched daytime TV. Even now I don’t watch much TV, usually just some news shows. Nevertheless, recently I had occasion to watch some daytime television and I happened on the Maury Povich Show. Disobeying my better instincts to change the channel, I spent some time watching it and the similar show that followed, the Steve Wilkos Show. I found both shows disturbing but oddly fascinating, probably the same mixture of emotions that kept the Roman peasants coming back to the Circus Maximus to watch people being torn apart by lions. Here on TV I was watching people’s lives being torn apart in a horrifying if less bloody fashion. Though I was disgusted with myself for watching, it was hard to turn it off.

If you are not familiar with the concept of these shows, it is rather simple. There is a woman, her child, and a boyfriend or spouse. The paternity of the child is in question. Rather than attempt to answer this question in private with DNA testing, the involved parties come on TV where they tell their stories and usually end up yelling at each other, accompanied by whoops, cheers and jeers from the studio audience. The moderator, Maury or Steve (Jerry Springer I believe was the first to have this kind of show) asks questions and makes some token attempt to keep the “contestants” in line, though a few thrown chairs and bleeped-out cuss words are par for the course and add to the drama. Finally the truth is revealed, both by the results of lie detector tests that the warring parties have taken before the show, and ultimately by the DNA test. At that point the man who thinks he is the father of the child finds out either he really is, or that his wife cheated on him and the kid isn’t his. There are variants on this theme, such as the freeloader who doesn’t want to support a child but finds out it is his kid after all, or the black couple with the light-skinned baby who everyone knows can’t be from the husband but the husband refuses to believe it until the DNA results come in. After the denouement there is a lot of crying or screaming or both. And then on to the next story.

Is this exploitative? Is the Pope Catholic? I did not watch long enough to prove this, but from what I watched and what I have read, these shows feature poorly educated, low-income, often black couples who behave in a way that reinforces all the negative stereotypes we have of poor, uneducated people. I don’t know why they come on the show. Sure, they get a free DNA test, but at the cost of exposing the most private secrets of their lives to the world. The fact that they are willing to do this makes their participation even sadder and more pathetic. But worse by far are the people who created the show and decided to exploit them in this way.

Back in the 1950s when I was growing up there was a show called Queen for a Day. Women would come on the show and tell stories about how they had lost all their money or had a handicapped child and couldn’t afford medical treatment. Using an applause meter, the audience would vote on which story was most pathetic. The winner was dressed in a royal robe and received her prize, often something like a washer-dryer. An awful concept for a show, but no worse than what’s on daytime TV now.

After Maury’s show was over, the nearly identical Steve Wilkos Show came on.  A couple was introduced. Two young black people who had a 5-year-old son. I’m not sure if they were married, but I think they were and it’s easier to write their story as if they were.  In any case they had been together for over 5 years and the man considered the son his. But he wasn’t sure. The wife, at about the time the child was conceived, had gone to a party where she claimed she was drugged and raped. She said she awoke in the hospital the next day not remembering anything and was told she had cocaine and Ecstasy in her blood. She did not press charges. The husband was concerned that the child was not really his. He said he loved his son but really wanted to know if he was his and whether his wife was telling the truth. As usual on the show, the two spent some time hurling accusations back and forth to the delight of the studio audience, and it looked like there wasn’t going to be much holding this relationship together if the DNA test didn’t come back the right way.

After a dramatic build-up (and several commercials), the results of the testing were delivered in a sealed envelope. The lie detector results were first. The woman had been telling the truth when she said she had awakened in the hospital with positive drug tests the morning after the party. But she had lied about being involuntarily drugged and raped. She had taken the drugs voluntarily and had “hooked up” with a guy at the party voluntarily. As the woman started breaking down on being confronted with these facts, the host, Steve Wilkos, read the results of the DNA testing. The man who had been with this woman for over 5 years and had served as a father to her son, was indeed NOT (emphasis per the show) the biological father of the child.

This sent the wife running in tears backstage. But the husband, who had every reason to be disappointed and angry, did the unexpected thing. He ran after his spouse, followed by the cameras. He hugged his wife and comforted her, repeating “we’ll work it out. We’ll work it out.” And all the time the camera focused in on this most private, human, touching moment.

I turned off the TV, feeling guilty at witnessing such a private moment, but at the same time uplifted by the capacity of humans to forgive, to love. I guess this is the essence and attraction of reality television. While exposing a lot of the bad side of humanity, it occasionally surprises us by showing us the good lying at the core of some people. But it’s strong stuff, even gut-wrenching, and fundamentally voyeuristic. Not my cup of tea.

Can Writing About Medicine Change Medicine?

I’m getting to the point where I think it might be time to stop or at least decelerate the pace of my writing on medicine. When I retired from medical practice almost a year ago there were a lot of pent-up experiences that I felt a need to write about. But now I have already written about almost everything that I wanted to and, as I am no longer a practicing physician, I lack the ongoing experiences and frustrations of day-to-day medical practice to replenish the store. Moreover, I have a growing sense of futility when writing about medicine.  Can writing about medicine change medicine for the better?  Is anyone listening to physicians’ voices?  Or are we all just grumbling to each other?

How many posts bashing electronic health record (EHR) systems does one need to read (or write)? I’m certainly not the only one writing on this topic.  Criticism of EHR systems is very popular amongst physician bloggers nowadays.  I hope someday the sheer quantity (and quality) of these posts reaches a critical mass that results in the EHR companies paying attention and making some changes to their products — but I’m not holding my breath. Similarly a large number of physicians rail against the current Maintenance of Certification (MOC) process, yet I see no indications that anyone who can change MOC is listening. The negative effect of the Great Hospital Buy-Out of Physicians of the last decade is also a favorite topic, as are increasing regulations, the hegemony of insurance companies, and countless other annoyances, but what is the use of grousing about all this if no one is listening but our fellow physicians?

Despite voicing our concerns online, we physicians don’t seem to have a voice where it counts — politically.  We don’t have effective representation. Societies that are supposed to represent physicians such as the Heart Rhythm Society, the American Heart Association or the American Medical Association are beholden to groups other than physicians, i.e. drug and device companies — the same drugs and devices that they publish supposedly objective guidelines about. These medical societies also are in bed with the American Board of Internal Medicine, the progenitors of endless board recertification and MOC, and indeed have a nice side-business going on providing expensive board-review courses to prepare for these tests.  Corporate funding of these societies ranges from 20 to 50% of their total revenue (see here, here and here). Go to any big national meeting of these societies and wander through the acres of exhibits. Some of the exhibit booths are bigger than the home in Philadelphia that I spent my first years in. These glittery exhibit halls reek of money. Every time the pharmaceutical companies complain that they have to charge so much for their drugs because of the cost of R & D, I  recall these lavish exhibits as well as the constant TV commercials for erectile dysfunction products and drugs for quasi-diseases like short eyelashes.  It makes me sick! (Ah! New syndrome: TV drug commercialosis!) Money is power in politics, and physicians, despite being perceived as rich and even overpaid by the general public, are low down on the money totem pole compared to other facets of the health care system.

I’m not trying to be pessimistic, just realistic.  Yet if writing is the only weapon we have, what choice do we have but to continue to use it, blunt instrument though it may be?  The growing multitude of physician-bloggers and physician-commenters will continue to write, will continue to fight for changes in EHR systems, recertification requirements, and health care policies. Maybe we’ll get lucky and someone holding the purse strings will be swayed and do something to make the lives of physicians better. Possibly the decision-makers will come to realize that if our lives are better, our patients’ lives will be too. That’s important because everyone, whether politician, hospital administrator or EHR corporation CEO, will sooner or later be a patient in need of a good doctor.

Whatever Happened to Netiquette?

Anita Sarkeesian

Let’s harken back to the early days of the Internet, say the 1990s. In those days of yore, characterized by limited bandwidth and lack of flash animations, people by trial and error attempted to work out the dos and don’ts of online communication. This was before Facebook messaging and tweeting, before SMS and MMS. Communication was via email, or Usenet, or IRC (you may have to look up the last two, but they still exist). Even in those days it was quickly recognized that communicating electronically was not the same as communicating face-to-face, or even via telephone. The impersonal nature of online communication tends to insulate those communicating from the emotional feedback that occurs naturally during face-to-face communication. We don’t see the anger, or embarrassment, or sadness in the faces or in the voices of those with whom we are communicating. Talking with someone face-to-face, we can see how our words are affecting them. We might change the course of the conversation when we see that our words are making someone angry, or sad, based on concern that we might end up with a bloody nose, or because we hate to see someone upset. With digital communication, especially the anonymous sort, we don’t have these checks and balances, so the sky’s the limit as to how much hateful speech we can spew out without regard to consequences.

In response to this, rules of Netiquette were developed. Today these rules sound quaint, much like Emily Post’s rules of etiquette (do you remember to always leave a calling card after dining at a lady’s house?). Rules like: don’t make a big deal of spelling mistakes, or don’t post in all capital letters, or avoid off-topic posting. If only these were the worst of the problems we face when surfing the Internet today!

Online communication, if you still want to dignify the process with that term, has changed for the worse, with no end in sight. If you feel it’s always been this bad, I disagree.  It is getting worse.  Online rudeness has even spilled over into everyday life. How many times a day do you see some self-important jackass (see, it’s affected even me) sitting in a public place (like an airport terminal) holding a loud conversation on his (I won’t neutralize the pronoun here, it’s usually a man) cell phone over his Bluetooth headset? I remember the very first time I saw this happen, years ago in a train station. I was convinced the person was schizophrenic and talking to imaginary friends.

There is no civility online anymore. Accounts are hacked and private photos are leaked. Men post naked photos of ex-girlfriends online, where they circulate forever between Tumblr sites (links withheld intentionally). Poor Anita Sarkeesian, whose “crime” was that she produced a set of YouTube videos detailing the very stereotypical way women are depicted in many video games (as if that should be a shock to anyone), is the constant victim of rape and death threats (NSFW link). Nothing proves her thesis more than the response to her videos. And if you want to see for yourself how wild it is out there (on the Internet), go ahead and tweet something even mildly controversial, such as something about gun control, or Islam, or the depiction of women in games. Then sit back and wait for the barrage of ad-hominem attacks. Sure, with Twitter’s 140-character limit, it’s probably easier to launch an ad-hominem attack than to have a rational discussion. But there’s more going on here than just too limited space for a rebuttal. Trashing people online has become a sport that is increasing in popularity. And that’s sad.

I just hope that if there is ever an alien race investigating our world to see if we are worthy of joining the intergalactic community they don’t base their judgment on reading the comments section of the Fox News website, or the Twitter posts with the #GamerGate hashtag. If they do, we’re in big trouble.

Lost in EPIC Land

One of the many unanswered questions about the handling of the first Ebola case in the United States is the role of the Electronic Health Record (EHR).  Initial reports put at least some of the blame for the patient’s being sent home from the hospital despite a high-risk travel history on a failure of communication between the triage nurse and the emergency room doctor, aided and abetted by the EHR system.  Very quickly this story was altered.  On October 2 Texas Health officials were blaming the EHR, stating that “[a]s designed, the travel history would not automatically appear in the physician’s standard workflow.”  The next day, the same officials changed their tune, stating “[t]here was no flaw in the EHR in the way the physician and nursing portions interacted related to this event.” Texas Health Presbyterian Hospital in Dallas uses the EPIC EHR system.  Texas Health officials and EPIC deny that the reversal was related to any “gag order” in the hospital contract with EPIC.  It is not clear (to me at least) if these statements imply that there actually is no gag order in the contract, or the gag order is there but was not a factor in the changed story.  It should be noted that such gag orders are apparently common in contracts with EHR vendors.  It should also be noted that EHR vendors are very powerful companies.  EPIC’s CEO’s net worth in 2012 was estimated by Forbes to be $1.7 billion.  EPIC has benefited immensely from government largesse in the form of taxpayer subsidies and mandates requiring physicians and hospitals to purchase EHR systems or risk losing Medicare dollars.  Politicians (especially Democrats) have also benefited from EPIC, with hundreds of thousands of dollars donated to political campaigns.  EPIC is in the running for a huge government contract to provide EHR services to the Department of Defense.  They certainly wouldn’t want the Texas Ebola snafu to sidetrack this.

Could the EHR have played a role in the confusion in the Dallas emergency room the day Thomas Eric Duncan was sent home with some oral antibiotics?  Perhaps we could understand better how communication failures between nurses and doctors using the EPIC EHR might arise if we could look at relevant screenshots.  When a nurse enters a travel history into EPIC, what then appears on the doctor’s screen?  How easy is it to see?  How easy could it be to miss?  Where does it appear on the screen?  How big is the font?  Does it even show up on the screens the emergency room doctor is usually looking at?

One can argue that it shouldn’t matter.  The nurse should have verbally communicated the travel history to the doctor, or the doctor should have taken his own travel history.  This is all true, but remember, EHR systems were supposed to make medicine better.  They were supposed to make sure everything was documented and nothing would fall through the cracks.  So it would be useful to see some screenshots to understand why something entered by the nurse into EPIC was not seen by the doctor.

Don’t hold your breath waiting for the screenshots.  Whether or not EPIC has gag orders in their contracts, they definitely do not allow posting of screenshots.  I found this out personally when I tried to post some EPIC screenshots in the past.  EPIC has a group of people whose job is to be on the lookout for EPIC screenshots on the internet.  When they find them, they contact the offending party and demand their removal.  I had prepared a screenshot with annotations to show how confusing the EPIC user interface is, and how easily one simple fact (Travel History: recent travel to Liberia) could be lost in the morass of toolbars, sidebars, tabs, and menus that is the EPIC user interface, but I can’t chance having the EPIC SWAT Team descend on my house.  So I have attached a blurred redacted screenshot.  If a news agency wants to take on EPIC, I would be happy to provide an unblurred screenshot.  I’m not willing to take the chance, but somebody should.


The EPIC UI, hopefully blurred sufficiently for the EPIC Screenshot Police (click to enlarge)

How Much Money Do Academic Experts Get From Drug and Device Companies?

Now that Open Payments data is available to the public I decided to do some snooping around.  It’s not hard to do.  I was curious as to how much drug and device company money academic experts receive.  As a cardiologist specializing in electrophysiology I have been to many national meetings, and it is always the same people year after year who chair the sessions, are on the policy committees, and write the guidelines.  If you are an electrophysiologist you know whom I am talking about.  I suppose every specialty has its own cadre of experts: the 1% who set the agenda for the rest of us.  The big names in our respective fields.

So I picked 3 names at random and downloaded their Open Payments data.  Keep in mind that there are only 6 months of payment data available, and a third or more of the data has been withheld, including most of the research payments.  I only included data from the general payments database and excluded the research payments.  I just picked the first 3 names that popped into my head, and won’t identify who these doctors are.  My intent isn’t to embarrass anyone.  They are all well known and meet the criteria for being an expert given above.

Expert A had 91 payments made over 6 months totaling $58,101.  Most of the payments were from Medtronic and Boston Scientific.  The majority of payments were listed under the categories of Food and Beverage or Travel and Lodging, but the larger payments were for Consulting Fees or speakers fees.  The largest individual payment though was for travel, at just over $6000 from Medtronic.

Expert B had fewer payments (34) but a larger total.  Over 6 months this expert was paid $112,115.  The majority of payments were by Medtronic, with individual payments as high as $24,500.  The description for one of these large payments was “Compensation for services other than consulting, including serving as faculty or as a speaker at a venue other than a continuing education program.”

A sample from Expert B

Expert C had absolutely no entries in the database.  Zero.  Good for him!  Or should we wait until the full dataset is released before coming to conclusions?

In this extremely unscientific sampling of 3 experts, compensation from drug and device companies ranged from zero to 6 digits in 6 months.  Certainly one shouldn’t draw any firm conclusions from this.  Nevertheless, the fact that money changes hands between drug and device companies and the experts who help write guidelines and lecture about these drugs and devices is concerning.  Actual dollar amounts seem more stark and disconcerting than bland statements like Doctor X serves as a consultant for Company Y. Perhaps the Open Payments dollar amounts should be added to the disclosure slides that are shown at national meetings.  A more thorough look at this data is warranted.

How to View Your Doctor’s Drug Company Payments

The CMS Open Payments database is up for the public to view, but the site is difficult to navigate.  Here is a step-by-step guide to using the site.

  1. Go to openpaymentsdata.cms.gov
  2. From the list of databases, click on the General Payment Data with Identifying Recipient Information – Detailed Dataset 2013 Reporting Year (or skip step one and just click on the link) as in the image.

    Click on the highlighted database link.
  3. The page opens with a spreadsheet and a number of filter conditions.  Write in how you want to filter the spreadsheet (e.g. filter for last name and city/state).  You can add more filters on the page, such as the first name (see Add A New Filter Condition in the lower right corner).

    Filter the spreadsheet data.
  4. Horizontally scroll the spreadsheet to see the columns you are interested in.  There are a lot of blank or useless columns.  You can remove unwanted columns by clicking on the Manage button (brown button with a gear).

    The money column. You can exclude columns using the Manage button.
  5. Make sure you are looking at just one physician’s data.  In the figure below I have horizontally scrolled to the list of physician names.  Note that there is a Physician_Profile_ID with a unique ID number for each physician.  Pick out the physician you want to look at and add a filter for that ID number.  Remove other filters.  This should give you the data you want on an individual physician.

    Physician names and ID numbers. Filter on a specific ID number to get all that physician’s data.

And that’s it.  I haven’t explored the other databases.  I haven’t found any tool (other than a calculator) to sum the dollar amounts.   [ADDENDUM: Use the Export button to export the data in a form compatible with your spreadsheet.  Options include Excel and CSV formats.] This information should be enough to get you started with the Open Payments database.
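If you export the data as CSV, summing the dollar amounts is easy to script. Here is a minimal sketch in Java; note that the payment column name is my assumption (check it against the header row of your export), and the naive comma splitting will miscount rows that contain quoted commas:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Usage: java SumPayments export.csv
public class SumPayments {
    // Assumed column name; verify against the header row of your export.
    private static final String AMOUNT_COLUMN = "Total_Amount_of_Payment_USDollars";

    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            List<String> header = Arrays.asList(reader.readLine().split(","));
            int amountIndex = header.indexOf(AMOUNT_COLUMN);
            double total = 0.0;
            String line;
            while ((line = reader.readLine()) != null) {
                // Naive split: breaks if an earlier field contains a quoted comma.
                String[] fields = line.split(",");
                if (amountIndex >= 0 && amountIndex < fields.length) {
                    try {
                        total += Double.parseDouble(fields[amountIndex]);
                    } catch (NumberFormatException e) {
                        // Skip rows with a non-numeric amount field.
                    }
                }
            }
            System.out.printf("Total payments: $%.2f%n", total);
        }
    }
}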


Ebola – Missing the Diagnosis

The first “wild” Ebola case in the United States has occurred in Dallas, Texas. The patient, who is from Liberia and had contact with a pregnant Ebola victim in his native country, was initially sent away from the Emergency Department (ED) of a Dallas hospital after reporting there with viral symptoms. He told the triage nurse that he had just arrived from Liberia, but despite this was sent home. How could this happen?

The media are reporting that there was a failure of communication between the nurses and the doctors but in truth we don’t know exactly what happened. Having worked in hospitals with busy EDs (though not as an emergency doctor) I can identify some factors that might have been in play.

  •  Common things occur commonly, rare things occur rarely. Don’t look for zebras. We are taught this in medical school. If it looks like a horse and acts like a horse, most of the time it is a horse, and not a zebra.  The principle as it applies in this case would be: even with this patient’s travel history, statistically speaking the most likely diagnosis is still the common cold. But this saying can lull doctors and nurses into complacency.  This maxim does not deny the possibility that a zebra will come into the ED, and medical personnel should always keep the possibility that they are dealing with a zebra in the back of their minds.
  • ED Overload. EDs are still the primary care entry point for the majority of patients in this country.  We lack easily accessible primary care facilities.  EDs are the go-to place for all kinds of acute illnesses, even minor ones. The result is that EDs are overloaded with patients with illnesses that would be better treated in other settings. I remember days when ED patients were stacked in the hallways on stretchers. The ED docs were running around harried and hassled without any time to even think about what they were doing. Such stress is not conducive to good communications and good diagnostic skills. People get sent home that shouldn’t be sent home, and people get admitted that shouldn’t get admitted.
  • Hospital Romper Room. Rather than giving in-services on detecting Ebola virus, hospitals torture medical professionals with hours and hours of elementary school level computer-based “education.” Courses on identifying different types of electrical sockets, what code pink vs code yellow means, compliance training, module after stupid module. And because no one can remember what a red electrical socket is for more than a year, the identical training and testing are repeated every single year. All this “education” and yet not a single in-service on Ebola or other threats. Hospitals aren’t interested in education. They only want to fulfill the requirements that have been imposed on them by regulatory agencies in order to maintain certification.
  • General knowledge. Doctors and nurses are well-educated, but may be suffering from the general lack of knowledge that afflicts the overall US population. For example, in the McCormick Tribune Freedom Museum poll only 8% of Americans polled could name at least 3 First Amendment freedoms, whereas 40% could name 2 of the 3 judges on American Idol.  For a depressing litany of things Americans don’t know, see this.  Or watch episodes of Jaywalking with Jay Leno.  As about half of Americans don’t know where New York is, how many have even heard of Liberia?  It’s possible that no connection was made between a febrile patient from Liberia and Ebola due to lack of knowledge of geography and current events. Remember Sarah Palin (among others) didn’t even know Africa was a continent.

I’m not trying to make excuses. Sending this patient home was a horrible mistake. But in the US Health Care system we reap what we sow, and we are making a mess of it. It will only get worse as the medical field becomes less and less attractive to our best and brightest.

AutoLayout Revisited

My initial experiences with Apple’s iOS AutoLayout were pretty negative. Using Interface Builder’s (IB) ability to generate AutoLayout constraints automatically based on the positioning of views turned out to be frustrating, as it would generate constraints that were incompatible with iOS 7. As iOS 8 has only been out for a few weeks, I definitely want to keep supporting iOS 7 in my app. But Xcode 6 generates these incompatible constraints anyway, even though the deployment target is iOS 7. Furthermore the automatically generated constraints don’t really do what I want, such as keeping views centered on the screen when the screen enlarges from iPhone size to iPad size. So I was forced to go back to the drawing board and really try to understand how AutoLayout works.

My impetus for all this was my desire to upgrade one of my apps from an iPhone-only app to a Universal app — optimized for display on both the iPhone and iPad. The app (EP Mobile) has a big storyboard and many different views. By using AutoLayout I hoped to avoid having two different storyboards, one for iPhone and another for iPad, and just use one storyboard for both devices. I decided to check the Use Size Classes option in IB. Supposedly this allows for designing separate layouts in a single storyboard for different-sized devices. As it turns out, this was not helpful, as apparently this feature only works on pure iOS 8 apps. Moreover, as the compiler seems to generate code for each possible device, building your app after making a change in the storyboard takes much longer than it did before enabling this option. After a while I grew tired of this and decided to turn off Size Classes. However, trying to do this resulted in a warning dialog from Xcode that stated a lot of nasty things that might happen (I hate dialogs that use the word “irreversible”) and so I decided to just live with the longer build times.

I watched some YouTube videos on AutoLayout that were helpful (here and here), but in truth the best way to learn AutoLayout is to play around with it. Take a view, clear any constraints that are there, put the subviews where you want them, and then add your own constraints manually. While doing this, ignore warning messages from Xcode about ambiguous constraints and misplaced views. Ignore the yellow and red lines that show up on the screen indicating these errors. Until you have completely specified all the constraints needed to  determine unambiguously the location of the subviews without conflicts, these warnings will show up. Prematurely asking IB to Update Frames before all the constraints are specified will make the subviews jump around or disappear. Unfortunately even when all constraints are specified and correct, the yellow warnings don’t go away. IB is not capable of automatically applying your constraints and misplaces your controls in your views whenever you change constraints. Sometimes it misplaces controls even when you are just changing the storyboard metrics from one size to another. Update All Frames then puts everything where it belongs.

One way to start is by putting constraints on heights and widths of controls that you don’t want to resize when the screen size changes or the device rotates. Note that some controls, such as buttons, have an intrinsic size based on the button label, and it is not always necessary to add specific constraints to these controls. However, it looks to me like the system will ignore the intrinsic size at times, especially if you are trying to do something fancy with constraints, and your button will grow to a ridiculous size to satisfy your constraints. So it doesn’t hurt to specify width and height constraints manually even in these controls. Of course if you want controls to expand in one direction or another, don’t specify a constraint in that direction.

The next step is to align controls that are lined up horizontally. You can select multiple controls and then align their vertical centers using Editor | Align | Vertical Centers on the menu. If there are rows of controls like this you can take the leftmost control and, working from top to bottom, pin each row to the row above (or to the superview for the first row) to make sure there is vertical separation between the rows. Finally, usually you want your controls to be centered on the screen, even when using different screen sizes and with rotation. If you have one wide control, such as a segmented control or a large text field, you can horizontally center that control. You can then pin the leading edge of that control to the leading edge of the superview (i.e. the window) and that view will grow as the screen width increases. Aligning the leading edges and trailing edges of the other controls to this view will allow the whole set of controls to expand and contract with the width of the screen. If you have rows of controls, you may still need to put constraints between individual controls to control the horizontal distance between them.

One issue I noticed was that, while it’s nice to have controls expand to fill the screen of the iPhone when going from the small iPhone 4s to the 5.5 inch iPhone 6, sometimes the controls get too wide when viewed in landscape mode or on the iPad if they are just pinned to the superview leading edge.  For example, this segmented control is centered horizontally and vertically in the superview, and the leading edge is pinned to the superview leading edge.

Segmented control has centering constraints and is pinned to the leading edge of the superview.

On the iPhone 4s, the view remains centered and enlarges when the device is rotated.

iPhone 4s portrait
iPhone 4s landscape. Control remains centered and expands to fill screen.

To show the flexibility of AutoLayout, we can limit the expansion of the segmented control to a maximum we select, by making the width less than or equal to a value (in this case 350) and lowering the priority of the pinning of the leading edge to the superview. This achieves the desired effect.

Width of control constrained to ≤ 350 and priority of pinning to left margin decreased to 750
Portrait view is unchanged, but landscape view (shown here) limits the width of the control to a max of 350

You can do a lot with AutoLayout just using IB if you are patient and try out various effects. You can do more by attaching your constraints to outlets and manipulating them in code. It is unfortunate that some glitches in the implementation of AutoLayout in Xcode 6 Interface Builder make using AutoLayout more frustrating than it needs to be.  To those who are discouraged like I was by AutoLayout, I urge you to keep experimenting with it.  The a-ha moment will come and it will be worth it.

How Secure is Your Medical Data?

Script showing my Mac is vulnerable to ShellShock.

With the recent discovery of the ShellShock vulnerability affecting a large number of computers, the question comes up again: how secure is medical data? Thanks to the federally mandated push to transfer medical data from paper charts to computer databases, most if not all of this data is now fertile ground for hackers. As pointed out in this article medical data is more valuable to hackers than stolen credit cards. The stolen data is used to create fake IDs to purchase drugs or medical equipment, or to file made-up insurance claims. Hackers want our medical data and hackers usually find a way to get what they want.

In going from paper to silicon, we have traded one set of disadvantages for another. Paper charts are bulky, require storage, can get lost or destroyed, are not always immediately available, can be difficult to decipher, and so on. Electronic Health Records (EHR) systems were intended to avoid these disadvantages and to a large part do; however, we have traded the physical security of the paper chart, which can be locked up, for the insecurity of having our medical data exposed to open ports on the Internet. And make no mistake, the Internet is a wild and scary place. My own website, certainly not containing anything worth much to hackers, is subject to multiple daily brute-force password-guessing attacks on its login page. Fortunately I have security software in place, but despite this the site was successfully hacked in the past from Russia. There is no doubt that more important sites than mine are subject to more intense attacks. Millions of credit cards have been stolen in attacks on Target and Home Depot. Celebrity nude photos have been stolen from “secure” sites. And if you are not worried about hackers getting your medical data, thanks to Edward Snowden’s revelations you can be sure that it is freely available to the NSA.

But certainly, you ask, given the sensitivity of the data, EHRs must be amongst the most secure of all computer systems? Well, it’s difficult to answer that question. Most EHR systems use proprietary software, so the only people examining the source code for bugs are the people who work for the EHR company. It is unlikely that any bugs found would be publicized; rather they would be silently fixed. As critical as some people have been about the existence of bugs in open source software, such as the Heartbleed and ShellShock bugs, at least there is a potential for such bugs to be found by outside code reviewers. There is no such oversight over the code of the EHR purveyors.

Even if one assumes, for the sake of argument, that EHR systems are secure from online hacking, they are still very vulnerable to what is known as “hacking by social engineering” or “social hacking.” Social hacking involves the weakest link of all security systems, the computer users: doctors, nurses, medical assistants, unit secretaries and others. People who use easy-to-guess passwords like “123456” or who tape the password to the bottom of the keyboard. People who get a call from someone pretending to be from IT asking for the user’s ID and password in order to fix some supposed problem. There are a large number of cons that rely on human gullibility that can be used to break into “secure” systems.

Besides these issues, I observed a great deal of laziness in regard to security when working in the hospital. Doctors would often log into the EHR system, review patient data, and then leave the computer to visit the patient’s room without logging out of the system. Anyone could sit down at that computer and view confidential patient information. Some of the systems would automatically log off after a few minutes, but even so there was plenty of time for a dedicated snoop to get into the system. And the problem can occur in doctors’ offices too, now that many exam rooms have a built-in computer. Just yesterday at my eye doctor’s office I was left alone in the exam room for about 15 minutes while my eyes were dilating. Sitting next to me was a desktop computer running Windows 7, left with the user logged on. This doctor’s entire network lay vulnerable. How easy would it be to read patient files, or copy a rootkit or a virus onto the system using a USB drive? Real easy.

Bug-free and 100% secure software is probably a pipe dream that can’t be achieved in the real world. In addition, hospitals, with hundreds of computer terminals everywhere, some still running such outdated and vulnerable operating systems as Windows XP, and with busy, security-unconscious users like doctors and nurses, are a security disaster waiting to happen. Now that we have put all our medical data metaphorically into one basket, I am convinced it is only a matter of time before there is a massive data breach that will make the Target credit card breach seem trivial by comparison. Better training of medical personnel who use EHRs may help prevent this, and it should doubtless be done. But we will never again have the level of security that existed in the era of paper charts.