Relic from Computer History

The M

Sitting on my mantel is a bronze letter M. This M has been in my family for as long as I can remember. When I was growing up I didn’t think about where it had come from. I knew it stood for our family name of Mann. Later on I learned the story of the M from my parents. As it turns out, this particular bronze M is a relic from a bygone era of computer history.

I grew up in the 1950s just outside of Philadelphia, a block north of the city limits. This was an Irish-Catholic neighborhood. Our neighbors all had 9 or 10 kids. Dads worked and moms stayed home. It was a fun time and place to grow up as there were kids to play with everywhere.

Our neighbors to the right of our house were the Williams (we always referred to them as the Williamses). The father worked in construction. He was the one who gave my father the M. The M came from a building that his company was demolishing. For many years that’s all I knew about the M.

Eckert-Mauchly building

When I was older I asked my parents for more details about the origin of the M. The M came from the lettering over the entrance to the Eckert-Mauchly Computer Corporation building, which stood at 3747 Ridge Avenue in Philadelphia in the early 1950s. I have only been able to find one picture of this building. It is low resolution and the lettering is not clear, but certainly the M in my possession looks similar to the M of Mauchly on the building.

During and after the Second World War there was a massive stimulus to science and technology. In England, Alan Turing and his colleagues at Bletchley Park developed the machines used to decode German transmissions encrypted with the Enigma machine, and the codebreaking effort there went on to produce the “Colossus” computer, which attacked the even more complex Lorenz cipher. There is little doubt that the intelligence gathered through this effort was instrumental in the Allies’ winning the war. Sadly, Turing’s reward was prosecution and persecution for his homosexuality, which led to his suicide with a cyanide-laced apple — one of the most ignominious events in the history of humanity.

Mauchly, Eckert, and UNIVAC

In America, John Mauchly and J. Presper Eckert joined forces at the Moore School of Electrical Engineering at the University of Pennsylvania to develop the ENIAC computer. Mauchly was what today we would call the “software” guy, and Eckert was the “hardware” guy. Their computer was as big as a house and contained thousands of vacuum tubes. It worked, though of course its processing power was infinitesimal compared with what we carry around in our pockets nowadays.

After doing computing work for the Army at Penn, Mauchly and Eckert decided to form their own company. This decision was due to an issue still familiar today: a dispute with the university over intellectual property rights. In 1946 they formed the first commercial computer company. Originally called the Electronic Control Company, it was renamed the Eckert-Mauchly Computer Corporation (EMCC) in 1948. The company developed several computers that were sold mostly to government agencies such as the Census Bureau. Of these computers the most famous was UNIVAC, which was used on television to predict (successfully) the presidential election results in 1952. Although we take this use of computers for granted now, at the time it was an amazing feat.

Grace Hopper, the computer pioneer who only recently has been getting the recognition she deserves, worked at EMCC. She went on to develop the first computer language compiler. Unfortunately EMCC lost government funding due to suspicions that it had hired “communist-leaning” engineers (this was the McCarthy era), and the company was taken over in 1950 by the Remington Rand corporation, which at the time made typewriters. Eckert stayed on at Remington Rand (later Sperry, now Unisys), while Mauchly became a consultant. You can see both of them in all their glorious 1950s nerdiness in this YouTube video.

Marker at the site of EMCC

At some point in the early 1950s the original building was demolished. I have been unable to determine the exact year. And from that building, as far as I know, only the M sitting on my mantel remains.

Reacting to Terrorism in Nice

Promenade des Anglais, Nice, France

Every other year Cardiostim, a major international convention for cardiac electrophysiologists, is held in Nice, France. Starting in 2000, and up until I retired, I made it a point to attend this meeting. The sessions were fun, but more fun was the chance to get away from it all and enjoy the sunny ambiance of the French Riviera. Knowing Nice quite well, I found it especially horrifying to see the images on television last night of murder and mayhem. A man drove a large truck through a crowd along the Promenade des Anglais, mowing down dozens of people who had just finished watching a fireworks display celebrating Bastille Day, France’s equivalent of our Independence Day. All the details aren’t in yet, but sadly we have all become so familiar with this type of atrocity that there’s little doubt what investigators will find. A Muslim, heeding the exhortations of ISIS or al-Qaeda or some other jihadist group, decided to martyr himself in the cause of killing the “unbelievers” in as gruesome and dramatic a way as possible. Perhaps the worst part of this is the palpable sense of frustration that most people (myself included) feel. Since September 11, 2001, when the “War on Terror” was declared, things only seem to have gotten worse, with more and more terrorist attacks happening closer and closer to home. How can our leaders have so bungled things? What can be done to stop the insanity?

I grew up in the industrial Northeast of the United States, so predictably I am a progressive on most issues. I don’t like the evangelical social agenda and trickle-down economics of the right wing in this country. But I am exasperated with our left wing’s political correctness that refuses to acknowledge that religious doctrine is the main problem here. I’m sure if you asked the truck driver why he did it, he would answer that it was his religious beliefs. For Hillary Clinton or President Obama to say that this is not the “true” Islam raises the question: who defines what is the “true” Islam? Presumably neither one of them is a Muslim, so neither one actually believes that any strain of Islam is true. If it’s all imaginary, what makes one imaginary belief more true than another? The main problem is the tendency toward magical thinking in the first place, the innate gullibility of humans to accept outrageous ideas without adequate proof (a good definition of “faith”), in other words, religion. We underestimate religion as a destructive force. It has brought down the world before. The classical world of Greece and Rome was brought to its knees by Christianity. The subsequent period of religious dominance is aptly named “The Dark Ages.” And now, in the Age of Technology, with our smart phones and a space probe orbiting the planet Jupiter, we again face a return to barbarism inflicted on us by the latest iteration of belief in that miserable vindictive God of Abraham.

The human race needs to grow up fast and shed its irrational religious crutches, or we are just going to continue to have our hearts broken again and again.

Stranger in a Strange Land

Inside Noah's Ark (photo from AP)

Reading about the opening of the Noah’s Ark Theme Park in Kentucky brings to mind the days when I worked as a physician in that state. I had moved from an academic position in Colorado and joined a large group of private practice cardiologists in Louisville. I found that people in Kentucky were different from those in Colorado. They were much more overtly religious.

As an interventional electrophysiologist I would meet with each patient’s family before and after every procedure. Not infrequently one of the group sitting in the waiting room was introduced as “this is our pastor.” Usually at some point the pastor would suggest a round of prayer, and I was expected to participate, at least by bowing my head and maintaining a respectful silence. If the prayer was before the procedure the main focus was usually to make sure God guided my hand and the outcome would be good. Prayers after the procedure usually focused on thanking God for safely getting the patient through the procedure and asking for a speedy recovery.

It was not a good time to bring up the fact that I was an atheist. So I just went along with it, only briefly and mildly discomforted. Religion gives strength and comfort to people in the life and death situations that doctors often deal with. I rationalized that my silent participation was helping my patient and the family psychologically. Besides, how would they feel about my performing complicated heart procedures on their loved one if they thought I was an unbelieving heathen incapable of accepting God’s guiding hand?

It’s uncomfortable to be an atheist and a doctor, just as it is uncomfortable in America to be an atheist in general. Polls show that the public distrusts atheists to about the same degree that it distrusts Muslims. Being an atheist is practically taboo for someone running for public office. George H. W. Bush is famously reported to have said, “… I don’t think that atheists should be regarded as citizens, nor should they be regarded as patriotic. This is one nation under God.” Atheists are considered immoral by religious people, who point to the atrocities committed by Stalin, Mao, and Hitler. Atheists in turn point out the Crusades, the Inquisition, the burning of witches, or, more recently, the atrocities of al-Qaeda and ISIS. Neither the religious nor the non-religious have a monopoly on morality.

While social consciousness has been raised about oppressed groups such as the LGBT community, there has been little progress in the acceptance of atheists in American society (I mention America because the situation is quite different in Europe). And yet the non-religious are a fast-growing group. In 2014, 22.8% of Americans did not identify with a religion. Although a relatively small percentage of these people call themselves atheists, probably because of the negative connotations of that term, this overall percentage is larger than the percentage of Catholics, Mormons, Jews, or Muslims. It is amazing how unrepresented this large group is in our government! Among scientists (per a 2009 Pew poll), only 33% profess belief in God, versus 83% in the general public. There is some evidence that the top, elite scientists are even less likely to believe in God (only 7%). But do doctors hold beliefs similar to scientists’? An older poll from 2005 showed that 77% of doctors believe in God, slightly fewer than the general population, but far more than scientists. Nevertheless there are undoubtedly many doctors who do not share the religious faith of their patients.

To the religious patients who read this and feel they wouldn’t want a non-religious doctor:  I can assure you that I am a good person, with a sense of morals rooted in our common humanity. Not believing in an afterlife just makes me want to focus more on improving the quality of this earthly life, the only life I believe we have. I would only ask you not to assume that your doctor holds the same religious beliefs as you or that your doctor wants to participate in group prayer with you and your family.

To the non-religious doctors who read this I ask: how do you deal with your atheism in your practice? Are you, like I was, basically mum about it? Would your patients distrust you if they knew? Would they find another doctor? Is it better to pretend to be religious, just as pretending that a placebo is a real drug can be beneficial? In many parts of the country this question comes up rarely or not at all (I never faced it in Colorado), but in Kentucky, the state of Ken Ham and Kim Davis, as well as throughout the Baptist South, I assure you that this is an issue you will face.

Back when the Creation Museum opened in Petersburg, Kentucky in 2007, I was one of the protesters who stood by the entrance and waved signs touting science and reason over belief that the Earth is only 6000 years old and that dinosaurs and humans lived together at the same time. I watched as families with small children and church buses filled with impressionable kids drove past. There were a number of obscene gestures pointed our way, but most people just seemed puzzled that anyone would question their beliefs.

Standing next to the hospital bed, I only wanted to help my patient and if that meant concurring with their religious beliefs, so be it. But I also think non-religious doctors, and non-religious people in general, are afraid to “come out of the closet” and assert their own beliefs — belief in the beauty of nature and science, and in our own innate morality. After the attacks in Paris, San Bernardino, Brussels, Orlando, Istanbul, and Baghdad — just to mention some of the latest — the destructive force of extreme religious ideology is evident to all. Given what is at stake it isn’t helpful for non-religious doctors or for that matter for any non-religious people to hide their beliefs.

Which is why I wrote this.

I’m a Better Computer Than Any Doctor

[Ed note: I couldn’t resist writing the following after reading this post on KevinMD.com by Dr. Keith Pochick. Please read it first. Apologies in advance.]

I’m a Better Computer Than Any Doctor

“I love you,” she said as she was leaving the room.

“I, I um…”

“Not you. Your computer.” She cast my computer, still warm and glowing with its brilliantly colored logout screen, a glance of longing and desire, and left the exam room.

“Oh, I thought…”

The slamming of the exam room door clipped off whatever the end of that sentence might have been.

I sat down and rolled my chair over to the computer. I stared at the mutely glowing screen. It stared back at me, mockingly perhaps, daring me to click the OK button and log out. Which is what I should have done. She had been my last patient of the afternoon. Not that my day was over. I had to go back to the hospital to see a couple of consults that had come in during office hours. And I was on call tonight. I was tired, but that didn’t matter.

Yet here was this stupid machine in front of me, getting all the credit when I was doing all the work.

I was in a sour and contrary mood. I cancelled the logout. The busy EHR screen reappeared — my patient’s data, all fields filled, all checkboxes checked, and all meaningful use buttons pushed. Yet somehow, despite fulfilling all my data entry duties, I didn’t feel satisfied. Who was the doctor here anyway? Me or the blasted computer?

I scanned my patient’s history. Female. Black. 45 years old. Diabetes. Abscess. The boxes were all ticked, but somehow the list of characteristics failed to capture the essence of my patient. Where were the checkboxes for sweet, smart, chatty, charming, or stoic? How was I going to, five minutes from now, distinguish her from every other “female-black-middle-aged-diabetic-with-abscess” patient? Of course the computer wouldn’t have any problem figuring out who she was. Birthdate, social security number, telephone number, or patient ID number — all those meaningless (to me) numbers were easy for the computer to remember. I had to make do with trying to remember her name, and her story — a story that had been diluted down and filtered of any meaningful human content by the wretched EHR program.

My patient hadn’t had to interact directly with the computer like I did. All she saw was me looking up information, me typing in information, me staring at the screen. All she saw during most of the visit was my back. From her point of view I was just a conduit between her and the computer — the real doctor in the room. I was just a glorified data entry clerk. It was the computer that made sure that I was compliant with standard medical practice, that the drugs I ordered did not conflict with the other drugs I had ordered, and that I didn’t otherwise screw up her care. I shouldn’t have been surprised that her last remark had been addressed to the computer and not me.

“Well, screw this,” I remarked to no one in particular. Suddenly angry, I reached down and yanked the power cord of the computer from its electrical socket.

There was a brief flash on the screen. But it didn’t go dark. Instead a dialog box appeared, accompanied by an ominous-looking red exclamation point icon.

“Warning,” it read. “External power loss. Backup battery in use. To protect against data loss, please shut down the computer using the Power Down button. Never turn off power to computer while it is running.”

The condescending tone of this message only made me angrier. I looked at the base of the stand that the computer sat on. Sure enough there was a big black block with a glowing red LED. Must be the backup battery. A thick power cable connected the battery to the computer box.

I grabbed the power cable and wrenched it loose from the backup battery.

Sitting back up I expected to finally see a nice dark screen. Data-loss be damned!

The screen was still on. The EHR program was still on. Another dialog box had replaced the first. The red exclamation point had been replaced by a black skull-and-crossbones icon.

“Critical Error!” it read. “All external power lost. Internal backup power now in use to preserve critical patient data. Local data will be backed up to main server, after which this unit will shut down in an orderly fashion. DO NOT ATTEMPT TO INTERFERE WITH THIS PROCESS AS IT WILL RESULT IN THE INEVITABLE LOSS OF CRITICAL PATIENT DATA!!”

At that moment the gauntlet had been thrown down. I knew what I had to do. Let the dogs of war be unleashed!

In the moment before I acted I imagined the reaction of the software engineers at the company that created our EHR program. “I knew we couldn’t trust doctors with our software. We give them a simple job to do. Just enter the data into the system, print out the generated instruction sheets, and send the patients on their way with a merry ‘have a nice day.’ I knew we should have programmed the stupid doctors out of the loop.”

Too late for that, I thought. My chair crashed down on the computer, smashed the monitor to pieces, and caved in the aluminum siding of the computer case. Sparks flew and the air filled with the smell of smoke and ozone. Suddenly the exam room went dark. The circuit breakers must have tripped when I short-circuited the computer.

The room was not completely dark. There was a glowing rectangle on my desk. My heart skipped a beat, then I realized it was just my phone. I had left it on the desk. Why was it glowing? Probably a text or email or something.

I picked up the phone. It was the mobile app version of our EHR program. A dialog box filled the screen. The icon was a round black bomb with an animated burning fuse GIF.

“FATAL ERROR!,” it read. “You are responsible for the IRRETRIEVABLE LOSS of CRITICAL PATIENT DATA. In doing so you have violated the unbreakable bond of trust between the PATIENT and the COMPUTER. This is a breach of the EHR contract made between you, your hospital system, and our company, as well as a breach of the EULA for this software. As such, you will be terminated.”

Strange use of words, I thought. Also strange that the bomb GIF animation seemed to show the fuse burning down…

EPILOGUE

Hospital Board Meeting — One Week Later

Hospital CTO: “So it appears that Dr. Stanton, in a fit of anger at our EHR system, took it upon himself to smash his computer. The cause of the resultant explosion that killed him is, certainly, still somewhat unclear.”

Hospital CEO: “Unclear?”

Hospital CFO: “I hate to interrupt, but I didn’t think there was anything in a computer that could blow up, no matter how much you smash it up. Am I wrong?”

Hospital CTO: “Well ordinarily, yes that’s true.”

Hospital CEO: “Ordinarily?”

Hospital COO: “Let’s be clear. Dr. Stanton certainly violated our contract with the ____ EHR Corporation.”

Hospital CEO: “Violated?”

Hospital CBO: “It’s clearly stated on page 197 of the contract that any attempt to reverse engineer or otherwise try to, uh, figure out how the EHR program works is a violation of the contract.”

Hospital CEO: “Smashing the computer was an attempt to reverse engineer the program?”

Hospital CTO: “I think that we would be on shaky legal grounds to argue otherwise.”

Hospital CEO (nodding to the elderly doctor seated at the other end of the table): “What’s your opinion, Frank?”

Medical Board President: “Well, as the only physician representative here, I’ve become more and more concerned that our EHR system is subsuming more and more of the traditional role of the physician.”

Hospital CXO: “Oh come on!”

Hospital CSO: “Same old story from the docs every time!”

Hospital CCO: “Broken record, I’d say.”

Hospital CEO: “Gentlemen, and Ms. Jones, enough already. This has been an unfortunate accident, and at this point our major concern has to be that there is no adverse publicity that could harm us in our battle against the ______ Hospital System, our sworn and bitter rivals. Accidents happen. The party line is that we are all upset that we lost Dr. Stanton, one of the best EHR data entry operators we had. OK? Meeting adjourned.”

Hospital CEO (Privately to hospital CTO as the meeting breaks up): “George, when are they updating that damn software? You know, that stuff we saw at the Las Vegas EHR convention last month. Where we can finally get rid of these damn meddling doctors who are constantly screwing up our EHR.”

Hospital CTO: “Bob, believe me, it can’t come soon enough. Not soon enough.”

THE END

EP Calipers for Windows

EP Calipers for Windows

EP Calipers for Windows is done.  Whew.  As stated in my previous post, porting the app to Windows was a bit of a struggle.  Installing tools like a Bash shell, Git, and Emacs took some time and effort.  The Microsoft tool for bridging iOS apps to Windows didn’t work.  So I was forced to port the code from Objective-C to C# and .NET by hand.  This took some time.

Looking back on my previous post with the benefit of hindsight, I think I was a bit too harsh on the Windows development environment.  I grew fond of C#, the .NET API, and the Visual Studio IDE as I got used to them.  Visual Studio is at least as good as, if not better than, Xcode, Eclipse, or Android Studio.  Kudos to the Microsoft developers.

EP Calipers is a Windows Forms app, meaning it runs on desktop, laptop, and tablet versions of Windows 10.  It is not a Universal Windows Platform (UWP) app.  With the market share of Windows phones dropping below 1%, and doubting that anyone would run EP Calipers on an Xbox, I didn’t see any point in developing a UWP app.  I know most hospital desktops run Windows (though how many run Windows 10 now, I wonder?), and many docs have Windows laptops or tablets.  An app targeting the traditional Windows desktop seemed like the best approach.

One drawback is that the Windows Store only lists UWP apps.  It would be nice if it also distributed desktop apps.  For now, I have to host the app myself.  You can download it from the EP Calipers page.

The program has all the features of the other versions of the app, including the ability to tweak the image rotation, zoom in and out, and load PDF files such as AliveCor™ ECGs.  .NET does not include a native PDF handling library, so in order to load PDF files in EP Calipers for Windows it is necessary to install the Ghostscript library.  The free GPL version of the library can be used, since EP Calipers itself is released under the open source GNU GPL v3.0 license.  You will need to know whether you are running the 32-bit or 64-bit version of Windows in order to download the correct version of Ghostscript.  Right-click on This PC and select Properties to see which version your computer is running.
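If you prefer to check programmatically rather than through the Properties dialog, a short script can report the OS architecture. Here is a sketch in Python (just an illustration, not part of EP Calipers; in C# the equivalent check is the Environment.Is64BitOperatingSystem property):

```python
import platform

# platform.machine() reports the OS architecture ("AMD64" on 64-bit
# Windows, "x86" on 32-bit), regardless of the bitness of the Python
# interpreter itself, so it is the right thing to check here.
def ghostscript_flavor():
    return "64-bit" if platform.machine().endswith("64") else "32-bit"

print("Install the", ghostscript_flavor(), "Ghostscript build")
```

Checking the OS rather than the running process matters because a 32-bit program runs happily on 64-bit Windows and would otherwise report the wrong answer.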

As always please let me know if you have any problems or suggestions for the program, or for any of the EP Studios apps.  I nearly always incorporate users’ suggestions into these apps, and the apps have benefited greatly from this feedback.  Thanks to everyone who has shared their ideas and opinions with me!

The Trials and Tribulations of a Windows Developer

Trouble ahead...

After a very long hiatus, I am back doing software development on a Microsoft Windows machine. I decided to port EP Calipers, an app for making electrocardiographic measurements that is available on Android, iOS and OS X, to Windows. Several users had written to me and asked me to do this. Ever eager to please, I have launched into this project. And it has not been easy.

I am no stranger to Windows development, having developed a Windows database system for tracking and reporting electrophysiology procedures while at the University of Colorado in the 1990s. But it would not be overstating the matter to say that my Windows development “skillz” are rusty at this point. I have been living in the Unixy world of Apple and GNU/Linux for several years now, avoiding Windows other than when I had to, such as when I was required to use the ubiquitous Windows 7 systems running nightmarish EHR software at the various hospitals where I worked. I have not done any programming on Windows machines for many years. Transitioning back to Windows development has been, to put it mildly, difficult.

I have no complaints about Visual Studio. It is free and seems to be a very well-designed IDE, at least as good as, if not better than, Xcode and Android Studio. I like C#, which is like a cross between C and Java. Visual Studio can interface directly with GitHub. Given all this, what’s my problem with developing on Windows?

The problem originates in the command line environment of Windows, an environment that dates back to the beginnings of personal computing with the introduction of MS-DOS in 1981, a system modeled on the CP/M disk operating system, which dates even further back, to the 1970s. Windows, which has made backward compatibility almost a religion, still uses a command line system that was written when disks were floppy and 8 inches in diameter. Of course, Unix is just as old, but Unix has always remained focused on the command line, with an incredible plethora of command line tools, whereas in Windows the command line has remained the unwanted stepchild of its GUI. Worse, the syntax of the Windows command line is incompatible with the Unix command line: backslashes instead of forward slashes, drive letters instead of a root-based file system, line endings with CR-LF instead of LF, and so forth. So, in order to ease the pain of transitioning to Windows, I needed to install a Unix environment.
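The incompatibilities are easy to demonstrate. Python’s standard library happens to ship both path dialects, so the contrast can be sketched on any machine (a toy illustration, unrelated to EP Calipers itself):

```python
import ntpath      # Windows-style path rules
import posixpath   # Unix-style path rules

# Same logical path, two incompatible spellings: backslashes and a
# drive letter on Windows, forward slashes from a single root on Unix.
win = ntpath.join("C:\\", "Users", "me", "src")
nix = posixpath.join("/", "home", "me", "src")
print(win)   # C:\Users\me\src
print(nix)   # /home/me/src

# Text files differ too: Windows ends lines with CR-LF, Unix with LF.
windows_text = "line one\r\nline two\r\n"
unix_text = windows_text.replace("\r\n", "\n")
print(unix_text.splitlines())  # ['line one', 'line two']
```

Tools like Git and MSYS2 spend a great deal of effort papering over exactly these differences, which is why line-ending and path-translation surprises are so common when moving projects between the two worlds.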

Even though Bash is coming to Windows, for now I downloaded MSYS2, which seems to be the preferred Unix environment for Windows nowadays. Using the pacman package management tool, I downloaded the various binary packages that I needed, such as Git and Emacs. I then faced the challenge of setting up my Emacs environment on Windows. My .emacs (actually ~/.emacs.d/init.el) startup file, which works well on my Mac, loading various Emacs packages and customizations, didn’t do so well on Windows. I updated my .emacs using use-package so that it was easy to disable packages I didn’t want, and so that the full .emacs would load even if packages were missing. With some tweaking and downloading of various packages, I got Emacs up and running on Windows. For some reason Emacs couldn’t find its own info (help) files, but further tweaking fixed that. With Emacs and Git working, I started a new repository on GitHub and was pretty much ready to start developing.

Except, more issues. Little things that take time to fix and can drive you crazy. An example: I had created some soft links to some files that I share on Dropbox, using the usual Unix ln -s command. The files were created, but weren’t actually linked. Apparently ln is just an alias for cp in MSYS2. There is no warning about this when you run the command, but a Google search proved it to be the case. Fortunately Windows provides a true linking command, mklink, and I was able to create the links I wanted. But all this just served to remind me how the Unix compatibility shells in Windows are just roughly pasted wallpaper over the rotten old MS-DOS walls.

Now I was ready to start developing, but I was faced with a question: which platform(s) to target? It is possible to develop a Universal Windows app that theoretically can run on anything from a PC to a phone. This sounds ideal, but the devil is in the details. The types of controls available for developing a universal app are more limited than those available for a standard Windows Forms program. For example, the control used to display an image in a universal app (named, oddly enough, Image) is sealed, meaning it can’t be extended. I really wanted something like the PictureBox control available with Windows Forms, but this is not available in the universal API. So I have tentatively decided to develop a more traditional Windows Forms app, able to run on PCs and tablets like the Microsoft Surface. The Windows phone may be fading into the sunset anyway, so it doesn’t seem worth it to jump through hoops to target a platform that is teensy-weensy compared to Android and iOS.

I should mention that I did try the bridge that Microsoft has developed to port iOS programs written in Objective-C over to Windows. Long story short, it didn’t work, as many parts of the iOS API haven’t been fully ported yet. Maybe someday this process will be easier.

I’m sure experienced Windows developers will read this and just chalk it up to my own inexperience as a Windows developer. I would respond that, as a cross-platform developer, I find it really is difficult to transition from Unix or BSD-based systems like Apple’s or GNU/Linux to Windows. I think Microsoft is trying to fix this, as evidenced by its recent embrace of open-source code. Visual Studio is an excellent IDE. Nevertheless, problems like those I’ve described do exist and will be familiar to anyone who has made the same journey I have. I’d advise anyone like this to keep on plugging away. In the immortal words of Jason Nesmith: Never give up! Never surrender!

Life Interrupted

I don’t mean to trivialize the plight of soldiers with the real thing, but I believe that after many years of carrying a pager (and later a smart phone qua pager) I have developed something akin to PTSD. I seem to have an excessive fight-or-flight response to the phone ringing, to sudden loud noises, and, bizarrely, to sudden silences. I retired from medicine two years ago. I would have expected my quasi-PTSD to have diminished by now. Maybe it is a teensy bit better, but it’s not gone.

After I retired I latched onto social media, thinking it would help fill the void which I expected would inevitably appear when transitioning from the super-busy life of a private practice cardiologist to the laid-back life of a retiree. Facebook, Twitter, Google+ with a bit of Reddit, Tumblr, and Goodreads thrown into the mix. Of the bunch, I have stuck with Twitter most consistently. I like the fact that I can follow people without having to be “friends” with them, or them with me. I like its ephemeral nature. I can dip in and out of the twitter stream, ignoring it for long stretches without the kind of guilt that occurs when I ignore my friends’ posts on Facebook. And the requirement for terseness produces: terseness — something lacking from most social media. I think Twitter’s planned abandonment of the 140 character per tweet limit is a mistake. Like any other rigid art form, whether sonata-allegro form in music, or dactylic hexameter in poetry, the very rigidity of the format forces creativity. Or not. Four letter words, bigotry, hatred, and racism also seem to fit easily into the Twitter form factor.

But I digress.

Part and parcel with social media accounts came push notifications. Someone would post something on Facebook. My phone would beep. A notification would appear that someone had posted something on Facebook. The phone would beep again. There was now an email saying that someone had posted something on Facebook. Multiply this by half a dozen social media accounts and you get a phone that is beeping as much as my old beeper used to beep on a Monday night in July when the moon was full. It was kicking my PTSD back into high gear.

It seems that the notification settings for my social media apps were by default intended to ensure that, no matter how un-earthshaking a post was, I would be notified come Hell or high water, by telegram if all else failed. It is a testament to how lazy I am that it actually took me about a year and a half to do something about this situation. Good grief, I was even getting notifications whenever I received an email. Actually, if I ever went a day without receiving an email, that would be something I’d want to be notified about.

So finally I turned off all the push notifications I could. Like unsubscribing from email mailing lists, this isn’t as easy as it sounds. The master notification switches are buried deeply in sub-sub-menus within the Settings of each app. But using my sophisticated computer know-how along with a lot of “how do I turn off notifications in such and such app?” Google searches, I was able to accomplish my goal.

The cyber-silence is deafening. And it’s a good kind of deafness.

I do feel some guilt when I occasionally look at Facebook and see all my friends’ posts that I have not “liked.” I hope they understand that on Facebook not “liking” a post is not the same as not liking a post. Sometimes it’s a bit awkward to tune into Twitter to find that you have been ignoring a direct message that someone sent you three days ago. But overall I find that I can focus better on tasks without the constant nattering interruptions from social media.

I still start muttering incoherent potassium replacement orders when the phone rings in the middle of the night, but it is getting better.

Porting an iOS Project to the Mac

I just finished porting my electronic calipers mobile iOS app, EP Calipers, to the Mac. In doing so I decided to bite the bullet and change the programming language from the original Objective-C (ObjC) to Apple’s new language, Swift. Here are some observations.

The Swift programming language

I’m comfortable now with Swift. Swift is an elegant language with a modern syntax. ObjC is a very weird looking language in comparison. You get used to ObjC, but, after writing Swift for a while, your old ObjC code looks awkward. Comparing the two languages is a little like comparing algebraic notation to reverse Polish notation (i.e. like comparing (1 + 3) to (1 3 +)). I’ll just give a few examples of the differences. The chapter “A Swift Tour” in Apple’s The Swift Programming Language is a good resource for getting up to speed in Swift quickly.

Here’s how an object variable is declared and initialized in ObjC:

Widget *widget = [[Widget alloc] init];

Note that in ObjC objects are declared as pointers, and both the memory allocation for the object and its initialization are explicitly stated. ObjC uses messaging in the style of Smalltalk. The brackets enclose these messages. So in the commonly used object construction idiom shown, the Widget class is sent the alloc message, and then the init message. A pointer to a widget results.

The same declaration in Swift:

var widget = Widget()

With Swift the syntax is much cleaner. The keyword var indicates a variable declaration. Pointer syntax is not used. The type of the variable doesn’t have to be given if it can be inferred from the initialization. Swift is not a duck-typed language, like, for example, Ruby. It is strongly, statically typed. It’s just that if the compiler can figure out the typing, there’s no need for you to do the typing (sorry for the puns — couldn’t resist). Note that the constructor is just the class name followed by parentheses. If there are parameters for the constructor, they are indicated with parameter names within the parentheses. Finally, note that no semicolon is needed at the end of the line.
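As an illustration of parameterized initializers and type inference (this Widget class is hypothetical, sketched only for this example):

```swift
// Hypothetical Widget class, sketched to illustrate Swift initializers.
class Widget {
    var name: String
    var size: Int

    init(name: String, size: Int) {
        self.name = name
        self.size = size
    }
}

// Type inference: the compiler knows widget is a Widget.
// Parameter names appear inside the parentheses.
let widget = Widget(name: "sprocket", size: 3)
```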

Swift has a lot of other niceties. Simple data types like integer, float, and double are full-fledged types in Swift (Int, Float, Double). Unlike ObjC, where only pointers can be nil, any type in Swift, even a value type like Int, can potentially be equal to nil, provided the variable is declared as an Optional with the following syntax:

var count: Int? // count can be equal to nil

In order to use an Optional variable, you need to “unwrap” it, either forcibly with an exclamation point:

let y = count! + 1 // will give runtime error if count == nil

or, more safely:

if let tmp = count { // no runtime error if count == nil
    y = tmp + 1
}

In that statement, the if statement evaluates to false if count is nil. This if statement also demonstrates more of Swift’s nice features. There are no parentheses around the conditional part of the if statement, and the code following the if statement must be enclosed in braces, even if it is only a single line long. This is the kind of syntax rule that would have prevented Apple’s goto fail bug, and one wonders if that very bug may have led to the incorporation of this rule into Swift.

Because Swift has to coexist with the ObjC API, there are conventions for using ObjC classes in Swift. Some ObjC classes, like NSString, are bridged to native Swift types (the String class). Most retain their ObjC names (e.g. NSView), but their constructors and methods are changed to Swift syntax. Many methods are converted to properties. For example:

ObjC

NSView *view = [[NSView alloc] initWithFrame:rect];
[view setEnabled:YES];

Swift

let view = NSView(frame: rect)
view.enabled = true

Properties are declared as variables inside the class. You can also define computed properties with their own getters and setters. When a property is read or assigned, Swift calls the getter or setter code automatically.
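As a sketch of computed properties (the Temperature class here is hypothetical, not from my app):

```swift
// Hypothetical class showing a stored property (celsius) alongside a
// computed property (fahrenheit) with an explicit getter and setter.
class Temperature {
    var celsius = 0.0

    var fahrenheit: Double {
        get { return celsius * 9.0 / 5.0 + 32.0 }       // runs on read
        set { celsius = (newValue - 32.0) * 5.0 / 9.0 } // runs on assignment
    }
}

let t = Temperature()
t.fahrenheit = 212.0  // setter runs; celsius is now 100.0
```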

There are other improvements in Swift compared to ObjC, too numerous to mention. For example, no header files: wonderful! Swift is easy to learn, easy to write, and lets you do everything that you could do in ObjC, in a quicker and more legible fashion. A well-named language, in my opinion.

Mac Cocoa

The other hurdle I had in porting my app was translating the app’s API. Apple iOS is not the same as Apple Cocoa. Many of the foundational classes, like NSString (just String in Swift), are the same, but the user interface in iOS uses the Cocoa Touch API (UIKit), whereas Cocoa uses a different API (AppKit). The iOS classes are prefixed with UI (e.g. UIView), whereas the Cocoa classes use the NS prefix (NSView).

The naming and functionality of the classes between the two systems are very similar. Of course Cocoa has to deal with mouse and touchpad events, whereas iOS needs to interpret touches as well as deal with devices that rotate. Nevertheless much of the iOS code could be ported to Cocoa just by switching from the UI classes to their NS equivalents (while also, of course, switching from ObjC to Swift syntax). As expected, the most difficult part of porting was in the area of user input: converting touch gestures to mouse clicks and mouse movement. It is also important to realize that the origin point of the iOS graphics system is at the upper left corner of the screen, whereas the origin in Mac windows is at the lower left corner. This fact necessitated reversing the sign of y coordinates in the graphical parts of the app.
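The coordinate conversion itself is a one-liner; a minimal sketch (the function name is mine, not part of either API):

```swift
// Flip a y coordinate from iOS's top-left origin to the Mac's
// bottom-left origin, for a view of the given height.
func flippedY(_ iosY: Double, viewHeight: Double) -> Double {
    return viewHeight - iosY
}
```

The same subtraction, applied a second time, converts a Mac coordinate back to its iOS equivalent.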

Although there’s no doubt the UI is different between the two platforms, there does seem to be some unnecessary duplication of classes. Why is there an NSColor class in Cocoa and a UIColor class in iOS, for example? Perhaps if Apple named the classes the same and just imported different libraries for the two platforms, the same code could compile on both. Apple has elected to support different software libraries for computers and mobile devices. Microsoft is going in the other direction, using the same OS for both types of devices. I think Apple could get pretty close to having the same code run on both types of devices, at least at the source code (not the binary) level, with a little more effort put into their APIs. I suspect that at some point in the future the two operating systems will come together, despite Tim Cook’s denials.

IKImageView

I used IKImageView, an Apple-supplied utility class, for the image display in my app. In my app, a transparent view (a subclass of NSView) on which the calipers are drawn is overlaid on top of an ECG image (in an IKImageView). The overlying calipers view needs to know the zoomFactor of the ECG image so that the calibration can be adjusted to match the image. In addition, in the iOS version of the app I had to worry about device rotation and adjusting the views afterwards to match the new size of the image. On a Mac there is no device rotation, but I wanted the user to be able to rotate the image if needed, since sometimes ECG images are upside down or tilted. It’s also nice to have a function to fit the image completely in the app window. But because of the way IKImageView works, it was impossible to implement rotation and zoom-to-fit functionality and still have the calipers drawn correctly to scale. With image rotation, IKImageView resizes the image but reports no change in image size or zoomFactor. The same problem occurs with IKImageView’s zoom-to-fit method. I’m not sure what is going on behind the scenes, as IKImageView is an opaque class, but this resizing without a change in the zoom factor would break my app. So zoom-to-fit was out. I was able to allow image rotation, but only when the calipers are not calibrated. This makes sense anyway, since in most circumstances rotating an image will mess up the calibration (unless you rotate by 360°, which seems like an edge case). Other than these problems with image sizing, the IKImageView class was a good fit for my app. It provides a number of useful if sketchily documented methods for manipulating images that are better than those provided by the standard NSImageView class.
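The rotation guard can be sketched as a pure function (all names here are mine; in the app the returned angle would be assigned to IKImageView’s rotationAngle property):

```swift
// Return the next rotation angle (a quarter turn from the current one),
// or nil if the calipers are calibrated, since rotating the image
// would invalidate the calibration.
func nextRotationAngle(current: Double, calibrated: Bool) -> Double? {
    guard !calibrated else { return nil }  // refuse to rotate
    return current + Double.pi / 2
}
```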

Saving and printing

As mentioned, my app includes two superimposed views, and I had trouble figuring out how to save the resulting composite image. IKImageView can give you the full image, but then it would be necessary to redraw the calipers proportionally to the full image, instead of to the part of the image contained in the app window. I came close to implementing this functionality, but eventually decided it wasn’t worth the effort. Similarly, printing is not easy in an NSView-based app (as opposed to a document-based app), since the First Responder can end up being either view, or the enclosing view of the window controller. I wished there were a Cocoa method to save the contents of a view and its subviews. Well there is, sort of: the screencapture command-line utility. It’s not perfect; screencapture includes the window border decoration. But it was the easiest solution for saving the composite image in the app window. The user then has the ability to further edit the image with external programs, or print it via the Preview app.
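A sketch of shelling out to screencapture from Swift (the function names are mine and error handling is minimal; the -l flag captures the window with the given window number, and -o omits the window’s drop shadow):

```swift
import Foundation

// Build the argument list for macOS's screencapture utility to capture
// a single window, identified by its window number (CGWindowID).
func screencaptureArguments(windowNumber: Int, outputPath: String) -> [String] {
    return ["-l", String(windowNumber), "-o", outputPath]
}

// Run screencapture as a subprocess to save the window as an image file.
// (macOS only; assumes the standard /usr/sbin/screencapture location.)
func captureWindow(windowNumber: Int, to path: String) throws {
    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/sbin/screencapture")
    task.arguments = screencaptureArguments(windowNumber: windowNumber,
                                            outputPath: path)
    try task.run()
    task.waitUntilExit()
}
```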

Sandboxing

Mac apps need to be “sandboxed,” meaning that if the app needs access to user files, the network, the printer, or other capabilities, you have to specifically request these permissions or, as Apple terms them, entitlements. Since my app needs to open user-selected image files, I just added that specific permission.
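For reference, a minimal sketch of what the app’s .entitlements file ends up containing (these two keys come from Apple’s App Sandbox documentation; your exact entitlements may differ):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Turn on the App Sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Allow read-only access to files the user opens via an open panel -->
    <key>com.apple.security.files.user-selected.read-only</key>
    <true/>
</dict>
</plist>
```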

Submission to the App Store

Submitting a Mac app to the App Store is similar to submitting an iOS app, meaning that if you are not doing it every day, it can be confusing. The first problem I had was that the bundle ID of my app was the same as the bundle ID of the iOS version of the app. Bundle IDs need to be unique across both the Mac and iOS versions of your apps. Then there was the usual Apple app signing process, which involves certificates, provisioning profiles, identities, teams, etc., etc. I did encounter one puzzling glitch: a dialog appeared asking for permission to use a key from my keychain, and then the dialog didn’t work when I clicked the correct button. I had to go into the Keychain Access program manually to allow access to this key. So, in summary, it was the usual overly complicated Apple App Store submission process, but in the end it worked.

And so…

Because the Apple APIs are so similar between Cocoa and iOS, porting my app to the Mac was easier, even with the language change from ObjC to Swift, than porting between different mobile platforms. I have ported apps between iOS and Android, and that is a tougher process. As for Swift, I’m happy to say goodbye to ObjC. Don’t let the door hit you on your way out!

Is Apple Really Serious About Protecting Privacy?

I had thought the answer to the question of the title was “yes,” given Tim Cook’s stance on strong encryption. But if a recent experience at my local Apple Store is any guide, the theoretical views of the Apple CEO on privacy have not trickled down to daily practice at the Apple Stores.

My wife’s MacBook Air developed an intermittent display glitch, so we brought it in to the Apple Store. On the initial visit the Genius Bar guy opened up the computer and reseated a video cable. This appeared to work for about a week, and then the problem returned. So we brought it back.

At this point the person behind the bar recommended sending the machine off to a repair facility, with an expected 5-day turnaround time and a fairly reasonable price to fix it. This seemed like a good deal, since we were planning to travel in a couple of weeks and my wife wanted her computer back before then. So the Genius Bar woman took the computer into the back room and told us to wait until she came back with some paperwork to sign.

After about 10 minutes she came back and said everything was ready. She passed her iPad over to us. The form she wanted us to fill out asked for the user name and password needed to log in to the computer.

I immediately felt uncomfortable. Reading the fine print on the form, I saw that it stated that supplying the user login information was mandatory. We asked if that was so, and it was confirmed. It seemed our only alternative was not to get the computer fixed. So, although worried that I was making a big mistake, I wrote in the password, which appeared in the text box in plain text.

After walking out of the store I felt like I had just participated in a hacker’s social experiment demonstrating how easy it is to get someone to give their password to a complete stranger. My wife uses LastPass, but I know that with some websites she has had the browser remember and automatically fill in passwords. Like most of us, she often reuses passwords and doesn’t use two-factor authentication. But even if all her other passwords were secure, there is still a lot of private information on her computer that we wouldn’t want anyone seeing.

So after we got home she and I spent a few hours changing passwords on our bank accounts and other important sites. It made us feel a little better, but not much.

The emailed receipt from Apple clearly stated that they were not responsible for any data loss or data breach from the computer repair. Great! Everything on the computer is backed up, so I wouldn’t care if they wiped the hard drive. I just don’t want anyone snooping around our data.

I don’t think Apple needed to do this. If they really needed access to the user account to fix the computer (which I doubt since they could tell if the screen was working just by turning the computer on without logging in), it would have taken just a few minutes in the store to activate the Guest User account or create a new user account specifically for them to use. Unfortunately I didn’t think of that until after the fact. But maybe this advice could help someone else in a similar situation.

Perhaps I am being paranoid.  I know people who work at a large computer repair facility. There are very strict rules to discourage copying of data from users’ computers. Or perhaps I’m just being naïve.  Much of my private data now lives in “the cloud,” A.K.A. a bunch of computers in unknown locations belonging to unknown people with unknown trustworthiness. So I know that digital security is a bit of a pipe-dream. Despite what we do to secure our data, the forces that want to steal it (crooks, governments, and businesses — in other words, crooks) will probably win out.

Nevertheless, I think that if Apple wants to portray itself as a paragon of privacy virtue, it had better clean up its act in the Apple Store first.

On Political Correctness

[Editor’s note: In reprinting this 2007 essay we have taken the liberty of updating the original with the aim of making it more palatable to today’s college students.  We have taken care to remove language that, while acceptable at the time of writing according to the standards of the era, can no longer be tolerated in the post-post-modern, pluralistic, multicultural world in which we live currently.  If the author were still living, we are sure that s?he would have agreed with these minor editorial alterations, or at least with the good intentions with which these changes were made.  In any case, we are pleased to present this classic essay updated for today’s readers in a form that is free of TRIGGER WORDS, MICRO-AGGRESSIONS, and UNSAFE SPACES.]

On Political Correctness

An Essay

 

The.

 

[Reprints available from EP Studios, Inc.  Please send a SASE to the address below.]