
George Mines and the Impermanence of Knowledge

George Mines

It was a chilly fall morning in Montreal. It was a Saturday, and the campus of McGill University was quiet. Students, not much different in 1914 from those of today, were sleeping off their Friday night activities. A cleaning woman entered the Physiology Laboratory to dust the glassware and wash the floors. As she turned a corner she was startled to see a young, dark-haired man sitting in a chair. She recognized Professor Mines, the handsome English scientist whom she had often seen working in the laboratory at odd hours. He appeared to be sleeping. His shirt was open, and a strange apparatus was strapped to his chest. Rubber tubing stretched from this apparatus to a table full of equipment next to him. A smoked paper drum rotated slowly. The needle of the drum was motionless, then suddenly jumped. Startled, she let out a little gasp. “Professor, Professor,” she called out. “Are you all right?” She noted he looked very pale, deathly so. She touched his hand. It was cold.

She ran to get help. The police took Professor George Mines to the hospital. There he briefly regained consciousness, but not long enough for him to explain what had happened. He died later that day. He was 29 years old. During his brief life, he used animal models to describe the physiology of reentry in the heart. He described the mechanism of supraventricular tachycardia in Wolff-Parkinson-White Syndrome long before that syndrome was described. He used a telegraph key to deliver timed electrical shocks to rabbit hearts, inducing ventricular fibrillation which he described without the benefit of an electrocardiogram. He thus was the first to report the existence of the ventricular vulnerable period. Despite all this amazing work, much of what he discovered was little noted at the time, until “rediscovered” by later researchers.

It seems likely that he was the first to induce arrhythmias in a human, long before the field of clinical cardiac electrophysiology. Unfortunately that human was himself, and the result was his own death.

The published papers of George Mines are fascinating to read. His equipment, very primitive by today’s standards, was more than compensated for by his remarkable ingenuity and keen powers of observation and reasoning. He described the relationships between conduction velocity and refractoriness in reentry, demonstrated the existence of an excitable gap, and deduced the reentrant nature of ventricular fibrillation. In one memorable experiment he cut fibrillating tissue into larger and larger loops until he was left with just one circulating wavefront. Amazing stuff! What more would he have accomplished had his life not been cut short?

Back in the days before the Internet, I used to keep photocopies of medical articles in a file cabinet (actually several large file cabinets). In those days of academia I enjoyed going to the stacks of the medical library and randomly reading articles from old bound journals, some dating back to the 19th century.  I learned a lot.  One thing I learned was that science has a problem with collective amnesia.  Discoveries are often forgotten or ignored, only to be rediscovered years later.

Nowadays everything is online. Or is it? Recently I wanted to look up Bazett’s original article on correcting the QT interval for heart rate. It was published in Heart in 1920 (Bazett HC. “An analysis of the time-relations of electrocardiograms.” Heart 1920;7:353–370). These old volumes of Heart have not been digitized and are not online. Surely, though, such a famous article has been reprinted? Indeed it has, on the Wiley Online Library site, where I can get a copy of the PDF for $38. Absurd! An article from 1920 costs $38!
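For those curious, the correction Bazett proposed in that paper is simple: divide the measured QT interval by the square root of the preceding RR interval, both in seconds. A minimal sketch (the function name and sample values are my own, for illustration):

```python
import math

def qtc_bazett(qt_sec: float, rr_sec: float) -> float:
    """Bazett's correction: QTc = QT / sqrt(RR), intervals in seconds."""
    return qt_sec / math.sqrt(rr_sec)

# At 60 beats per minute the RR interval is exactly 1 second,
# so the correction leaves QT unchanged.
print(qtc_bazett(0.40, 1.0))            # 0.4
# At 75 bpm (RR = 0.8 s) the same measured QT corrects upward.
print(round(qtc_bazett(0.40, 0.8), 3))  # 0.447
```

Simple as it is, this 1920 formula is still the default QT correction printed on many electrocardiograms today.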

Here we see the bitrot of science, the impermanence of knowledge. On the one hand, modern scientific research is largely hidden behind a paywall, so that the poor (in the financial sense) reader must rely on abstracts, news reports, online sites such as Medscape, and presentations at medical meetings to keep up-to-date, instead of a careful reading of research methods and results. On the other hand, our precious scientific heritage, the published papers of previous generations, remains largely undigitized, residing in the dusty stacks of libraries, increasingly ignored by newer generations to whom nothing matters if it is not online. There are some exceptions. The Journal of Physiology has digitized all of its content back to Volume 1 from 1878. But most publishers haven’t bothered doing this.

At least half of all early films have been lost. Early TV archives, like those of Doctor Who, were routinely destroyed or taped over, and those shows are gone forever. The situation is not so dire with old scientific research. The libraries will remain for a long time, and paper has a good half-life. But the beautiful work of George Mines and those like him, the true pioneers of medicine, will remain largely obscure to future generations unless it is available online.

Perhaps some portion of the $38 for a PDF copy of a 1920 article could go to that cause.

Relic from Computer History

The M

Sitting on my mantel is a bronze letter M. This M has been in my family as long as I can remember. When I was growing up I didn’t think about where it had come from. I knew it stood for our family name of Mann. Later on I learned the story of the M from my parents. As it turns out, this particular bronze M is a relic from a bygone era of computer history.

I grew up in the 1950s just outside of Philadelphia, a block north of the city limits. This was an Irish-Catholic neighborhood. Our neighbors all had 9 or 10 kids. Dads worked and moms stayed home. It was a fun time and place to grow up as there were kids to play with everywhere.

Our neighbors to the right of our house were the Williams (we always referred to them as the Williamses). The father worked in construction. He was the one who gave my father the M. The M came from a building that his company was demolishing. For many years that’s all I knew about the M.

Eckert-Mauchly building

When I was older I asked my parents for more details about the origin of the M. The M came from the lettering over the entrance to the Eckert-Mauchly Computer Corporation building, which stood at 3747 Ridge Avenue in Philadelphia in the early 1950s. I have only been able to find one picture of this building. It is low resolution and the lettering is not clear, but certainly the M in my possession looks similar to the M of Mauchly on the building.

During and after the Second World War there was a massive stimulus to science and technology. In England, Alan Turing and his colleagues at Bletchley Park developed the electromechanical “bombes” used to decode German transmissions encrypted with the Enigma machine; the later “Colossus” computer, built there by Tommy Flowers and his team, attacked the even more complex Lorenz cipher. There is little doubt that the intelligence gathered through this effort was instrumental in the Allies’ winning the war. Sadly, Turing’s reward was prosecution and persecution for his homosexuality, which led to his suicide, reportedly by a cyanide-laced apple: one of the most ignominious events in the history of humanity.

Mauchly, Eckert, and UNIVAC

In America, at the end of the war, John Mauchly and J. Presper Eckert joined forces at the Moore School of Electrical Engineering at the University of Pennsylvania to develop the ENIAC computer. Mauchly was what today we would call the “software” guy, and Eckert was the “hardware” guy. Their computer was as big as a house and contained thousands of vacuum tubes. It worked, though of course its processing power was infinitesimal compared with what we carry around in our pockets nowadays. After doing computing work for the Army at Penn, Mauchly and Eckert decided to form their own company. This decision was due to an issue still familiar today: a dispute with the university over intellectual property rights. In 1946 they formed the first commercial computer corporation. Originally called the Electronic Controls Corporation, it was renamed the Eckert-Mauchly Computer Corporation (EMCC) in 1948.

The company developed several computers that were sold mostly to government agencies such as the Census Bureau. Of these the most famous was UNIVAC, which was used to predict (successfully) the presidential election results on TV in 1952. Although we take this use of computers for granted now, at the time it was an amazing feat. Grace Hopper, the computer pioneer who only recently has been getting the recognition she deserves, worked at EMCC; she went on to develop the first computer language compiler. Unfortunately EMCC lost government funding due to suspicions that it had hired “communist-leaning” engineers (this was the McCarthy era), and the company was taken over in 1950 by the Remington Rand corporation, which at the time made typewriters. Eckert stayed on at Remington Rand (later Sperry, now Unisys), while Mauchly became a consultant. You can see both of them in all their glorious 1950s nerdiness in this YouTube video.

Marker at the site of EMCC

At some point in the early 1950s the original building was demolished. I have been unable to determine the exact year. And from that building, as far as I know, only the M sitting on my mantel remains.

Prank Calling Kurt Gödel

Kurt Gödel

Prank calling used to be a common, albeit annoying, form of entertainment back in the days when I grew up, before the invention of caller ID ruined it forever. Some prank calls were just simple and stupid jokes, such as the “do you have Prince Albert in a can?” call. On a slightly more elevated level of maturity, there was the anti-corporate “screw the phone company” philosophy of prank calling. As an example, I remember in college my friends and I decided to call Victoria Land in Antarctica. When the British operator asked who would pay for the call, we asked that it be charged to Her Majesty the Queen. We were informed very politely that that would not be possible. So we told her to make the call collect to Admiral Byrd. Amazingly she accepted that as legit. She then said it would take two hours to make the connection. Unfortunately, as I recall, we never got through to the good admiral.

Before you get too judgmental about this kind of activity, recall that Steve Wozniak and Steve Jobs got their start together by “phone phreaking,” designing (Steve #1) and selling (Steve #2) so-called “blue boxes” which were used to make long-distance calls without paying. So, as juvenile and even illegal as pranking the phone company might have been, you might not be holding that iPhone in your hand right now if not for it.

The most memorable prank call of all occurred the night some of my friends and I decided to call Kurt Gödel and ask him to help us with our homework. Gödel was a mathematical genius, most famous for his “Incompleteness Theorem.” The essence of this theorem is that in any consistent mathematical system at least as complex as simple arithmetic, there are statements that are true but cannot be proven within the system. The actual mathematics of his proof is complicated. My limited understanding is that he found a way to translate mathematical statements into numbers (called Gödel numbers) and then showed that you can use these numbers to construct a statement that says “this statement is not provable.” If this all sounds like gobbledygook, there is a whole book that explains this (and a whole lot more) better than I can, Douglas Hofstadter’s classic Gödel, Escher, Bach: An Eternal Golden Braid. In the minds of many mathematicians and philosophers, there is something mystical in Gödel’s proof. Depending on how you look at it, the fact that there are truths that can’t be proven is either disturbing or profound or both. Some have felt the proof has implications as to whether machines can ever develop consciousness, and the self-referential nature of the proof may even have something to do with our own consciousness.
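The encoding step, at least, can be made concrete in a few lines. The sketch below uses the standard prime-power trick; the symbol codes are my own arbitrary choices for illustration, not Gödel’s actual scheme. Each symbol gets a small number, and a formula becomes the product of successive primes raised to those numbers; by unique factorization, the original formula can always be recovered from its number.

```python
def first_primes(n):
    """First n primes by trial division (plenty for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Toy symbol codes -- an arbitrary illustrative table, not Godel's own.
CODES = {"0": 1, "S": 2, "=": 3, "+": 4}

def godel_number(formula: str) -> int:
    """Encode a formula: the i-th symbol with code c contributes p_i ** c."""
    n = 1
    for p, sym in zip(first_primes(len(formula)), formula):
        n *= p ** CODES[sym]
    return n

# "0=0" -> 2**1 * 3**3 * 5**1 = 270
print(godel_number("0=0"))  # 270
```

The hard part of the proof, of course, is not this bookkeeping but showing that statements *about* provability can themselves be expressed as such numbers.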

Textbook for logic class

My friends and I were learning about all this in a logic class taught at Dartmouth in the early 1970s. One of the texts we used in the class was Nagel and Newman’s book, Gödel’s Proof. While struggling through this text, we collectively got stuck on some point that we didn’t understand. Unfortunately I don’t remember the exact question we had, or whose idea it was to call Dr. Gödel to see if he could answer the question. But for whatever reason (possibly fueled by low doses of intoxicants), it seemed at the time to be an excellent idea. Who better to answer a question about Gödel’s proof than Gödel himself?

We knew that Gödel worked at Princeton (where he had been good friends with Einstein), so we called directory assistance for Princeton, New Jersey and obtained his home phone number without difficulty. We then, sitting in a circle on the floor of my dorm room, called him. My friend Bob Lindgren, the boldest of the bunch, made the actual call while we all listened in.

Dartmouth Professors Kemeny and Kurtz

Dr. Gödel answered the phone himself, and we all listened to the tinny German-accented voice with amazement. Bob said we were students at Dartmouth College studying his incompleteness theorem, and we had some questions. Professor Gödel very pleasantly said he would be happy to answer any questions, referring to our school as “Dartmoor,” and asked how his friend John Kemeny was doing. Professor Kemeny was president of Dartmouth at the time, was another colleague of Einstein’s, and was an early computer pioneer, coinventing with Tom Kurtz the BASIC computer language. Of course none of us were on speaking terms with Dr. Kemeny, but that didn’t stop us from reassuring Dr. Gödel that his old friend was doing just fine. We promised we would give him Dr. Gödel’s best wishes the next time we saw him.  We then proceeded to ask our logic questions to Dr. Gödel, who was gracious enough to waste his evening and precious genius explaining simple mathematical concepts to awestruck college kids. I don’t remember many details of the conversation, though I do remember one thing we asked him that may offer some insight into how he worked. We asked him if the idea for his proof came to him all at once as a Eureka moment, or if it was something that developed more gradually. He replied that it was definitely not a sudden insight. Instead it was something that he worked on over many years. He said he had a broad idea where he was going with his idea from the beginning, but it took his filling in the details over a long period of time before he got the result he wanted.

We thanked him for his help and he wished us well. He died a few years later, in 1978. Today in the world of mathematics his work is considered to be comparable in significance to Einstein’s Theory of Relativity in the world of physics.  I am not a mathematician and I find Gödel’s incompleteness theorem difficult to grasp — slippery, self-referential and paradoxical, much like thinking about the nature of consciousness. Maybe the two are related after all.  On a more practical note, Gödel’s story about how he came up with his proof leads to the profound yet common-sense (the two aren’t necessarily at odds) notion that creating something new and wonderful requires more than just good ideas. It requires hard work, and lots of it. This is important to realize, even for those of us who are not geniuses.

All the President’s Tapes

The Nixon Defense

Richard Nixon’s downfall, a.k.a. Watergate (a word whose suffix has become part of the English language), has always fascinated me. In the summer of 1973, poised between graduation from college and the start of medical school, I spent an inordinate amount of time in front of the television watching the Senate Watergate hearings. In those days before 24-hour cable news and C-SPAN it was almost unprecedented for the networks to “interrupt our regular programming” and carry such an event live. I remember John Dean relating his March 21, 1973 conversation with Nixon, telling him there was a “cancer on the presidency,” a warning that Nixon ignored, instead reassuring Dean regarding the estimated million dollars of hush money that the Watergate burglars wanted that “we can get that … I know where it can be gotten.” I remember Nixon’s top men, Mitchell, Ehrlichman and Haldeman, stonewalling it, denying the president had any knowledge of the cover-up. At the time it looked like it would boil down to Dean’s word against the president’s, with no evidence against the president other than hearsay. Then, on July 13, 1973, a relatively minor character, Alexander Butterfield, an assistant to the president, was called before the Senate committee in closed session. Apparently one of the lawyers on the committee (a Republican) had become suspicious about the amount of detail available in notes about a certain White House conversation, and asked Butterfield directly if there was a recording system in the White House. Butterfield, one of only a very few who knew of the existence of the system (Nixon’s top aides, other than Haldeman, did not know about it), had planned not to reveal it, but faced with a direct question and the threat of perjury, had to answer honestly.
So in public session on July 16, Butterfield was asked the question by Fred Thompson (yes, that Fred Thompson, who was a minority counsel for the committee) before all the TV cameras, and to the astonishment of everyone (including me, watching live) revealed that every conversation and phone call in the Oval Office and in the president’s office in the Executive Office Building was recorded automatically on tape.

The tapes of course are what destroyed Nixon’s presidency, a self-inflicted wound worthy of the most profound Greek tragedy. It is difficult to fathom the hubris of the man who wanted his every presidential conversation preserved for posterity and then went on to discuss with his aides an ever-evolving and increasingly complex cover-up scheme while his secret taping system recorded every word. Nixon eventually had to give up the tapes after the Supreme Court unanimously forced him to do so, and certain of the tapes, like the June 23, 1972 “smoking gun” tape, in which Nixon has the FBI limit its investigation of the Watergate burglary for “national security” reasons, led directly to his resignation. Beyond these several infamous tapes, there are hundreds of hours of recordings relating to Watergate that until recently had never been transcribed or documented. In John Dean’s book The Nixon Defense: What He Knew and When He Knew It these recorded conversations are described, and from the book there emerges a more complete picture of Nixon and of what led to his downfall.

The June 17, 1972 Watergate break-in and bugging of the Democratic National Committee headquarters seem to have occurred due to the over-exuberance of certain of Nixon’s cronies who worked in the Committee to Reelect the President (which actually had the acronym CREEP), including former attorney general John Mitchell, born-again post-conviction Chuck Colson, and possibly Nixon’s top aides John Ehrlichman and H.R. “Bob” Haldeman. They had hired Gordon Liddy, a loose cannon if ever there was one, to find out what the Democrats were up to. Nixon, who it is pretty clear did not know of the Watergate activities beforehand, nevertheless set a tone in his administration that dirty politics was the norm, and his associates, only too eager to please him, went beyond the bounds of legality to do so. After the Watergate burglars were arrested, Nixon from the very start tried to limit the political damage to himself. After all, he was running for reelection. He also felt he had to prevent his political allies from going to jail. He had a very difficult time actually firing Haldeman and Ehrlichman, his two top aides, when it became clear he had to do so. In the Nixon-Frost interviews one can almost feel sorry for Nixon when he talks about this. Yet for the most part the recorded conversations reveal a cold, calculating, ruthless character with whom it is difficult to sympathize.

Nixon based his defense around the March 21, 1973 conversation with John Dean, the “cancer on the presidency” meeting. Reading this in the book (or listening to it; the important conversations are on YouTube), it is clear that Dean, though involved in the cover-up initially, was trying to warn the president (he was after all the president’s counsel) that he risked becoming entangled in the Watergate cover-up. Dean revealed the blackmail demands of the indicted Watergate burglars and clearly seemed surprised that Nixon was willing to raise money to pay them off. Later Nixon and Haldeman would claim that Nixon said on that day that “we could raise a million dollars … but it would be wrong,” but that was a bald-faced lie (here is what he really said). Nixon later blamed the cover-up on Dean and said that he (Nixon) started his own personal investigation into Watergate after the March 21 meeting with Dean. This “investigation” was yet another cover-up created by Haldeman and Nixon. It is ironic that in the recorded conversations in which this March 21 meeting was discussed, Nixon was constantly worried that John Dean had somehow carried a tape recorder on his person during that meeting and had recorded evidence that would show Nixon was lying. Strangely, Nixon seems to have given little thought to the fact that he himself had made a recording, and that this recording would eventually become public, indeed proving that he had lied. Only occasionally did Nixon give any thought to the automatic recording system. At one point he briefly considered destroying the tapes before their existence was discovered, but Haldeman talked him out of it, because of the potential loss to history. Ah, hubris!

The book may not be as fascinating to those who did not live through the era as it was to me. It is a long book, and for those interested in Watergate in less detail, Woodward and Bernstein’s All the President’s Men or John Dean’s earlier Blind Ambition are good alternatives. Nevertheless all Americans should be familiar with Watergate and how the government narrowly avoided a constitutional crisis. Compared with the governmental dysfunction of today, this was an era when the process of government actually worked. Though Nixon had his defenders amongst the Republicans, as the evidence piled up against him both parties united in the impeachment process. The Justice Department, the Supreme Court, and the Congress did what they needed to do. Despite the abuse of power in the executive branch, the other branches of government functioned properly and the balance of power built into the Constitution by the founding fathers saved the day. One wonders, though, what the outcome would have been if Nixon had not recorded himself, or had destroyed the tapes early on.

The Nixon Defense is probably the definitive Watergate book. Nixon was right about his tapes. They are of great historical interest, but not in the way he intended. They reveal a picture of the downfall of one of the most interesting political characters of the 20th century, a presidential reality show that, like most reality shows, can be banal and riveting at the same time.

Memories of Van Cliburn

Van Cliburn

In the long struggle between the United States and the Soviet Union, from the end of World War II until the end of the Soviet era in 1991, there were intense moments of high drama, like the Berlin Blockade and the Cuban Missile Crisis, intermixed with moments when the icy hostility melted a bit. With both countries armed to the teeth with nuclear weapons of a power sufficient to destroy our planet many times over, and a firm policy on both sides with the ironically apt acronym MAD (Mutually Assured Destruction), the stakes that world leaders were playing with could not have been higher. The path that eventually led to the defusing of this dangerous situation was not direct. Certainly the final act was played out by Ronald Reagan (undoubtedly his greatest role) and Mikhail Gorbachev, but long before that a young Texan, a classical pianist, was one of the first to breach the barriers between the two countries. In 1958 he won the Tchaikovsky Piano Competition in Moscow, the first American to do so. He played two great Russian concertos in the final round of the competition: the Tchaikovsky 1st and the Rachmaninoff 3rd. He won the hearts of the Russian people as well as the judges of the competition. Nevertheless the judges cleared their decision with Premier Nikita Khrushchev. Khrushchev reportedly asked them: “Is he the best?” When answered affirmatively, he stated: “Then he should win.” After the competition he returned home to a ticker-tape parade in New York City and a full concert schedule. His records (LPs) were all hits, and I personally bought a lot of them. In later years he received some criticism from music reviewers for a conservative repertoire and rote performances, but at his peak he was a tremendous musician. His recordings of the Prokofiev 3rd Concerto and the Rachmaninoff 2nd Sonata are cases in point.

Van Cliburn and Khrushchev

I first saw him perform live in a concert that I believe took place in 1966 in Philadelphia. He performed three piano concertos in one concert with Eugene Ormandy and the Philadelphia Orchestra: the Mozart No. 25 in C major, the Beethoven 4th, and the Rachmaninoff 2nd. I well remember his appearance on stage, sitting very tall and straight-backed on the piano chair, swaying side to side with the music. Playing three concertos in one concert was and is quite a feat. The concert was rebroadcast on the Philadelphia classical music station (WFLN) a few weeks later, and I made a tape recording of the whole thing from my little transistor radio. Over the years I lost all my old tapes. I wish I still had that one. I have never heard of another recording of that historic concert.

Cliburn appeared frequently at the Robin Hood Dell concerts. These were summer concerts performed outdoors in Philadelphia. On these occasions he wore white formal attire. My friends and I attended these concerts and at the end of each concert, went up to stand in the front row to watch Cliburn give a series of encores. We went often enough to know that when he played Chopin’s Polonaise in A flat it would be the last encore of the evening. He was always generous with his encores and gracious to his audiences.

Van Cliburn died on February 27, 2013 at age 78. He played for presidents, world leaders, and for all the rest of us. He was a sorely needed bit of warmth in the midst of the Cold War. By any measure he was a great American and I count myself fortunate that I was able to see him perform in person on several occasions.

Hacking at Dartmouth in 1969

It probably doesn’t say much about my character that when I first encountered the world of computers back in 1969 at Dartmouth College, my thoughts quickly turned to how I could use them for my own subversive goals. Yes, I was an early hacker. This was in the days before Microsoft, before Apple, even before Unix and the C programming language. And now that there is no doubt that the statute of limitations applies (it’s been 45 years), I am ready to come clean.

Kiewit Computer Center

Sitting in a corner of the campus in those days was a low, concrete, windowless, almost bunker-like building whose architecture appeared out of place amongst the Georgian style of most of the other structures. This was the Kiewit Computer Center. Inside was a GE-635 computer, a behemoth that took up a large part of the building. Per the specs on the Dartmouth College site the CPU had a speed of 300 KHz and the storage was on the order of 340 MB. This beast was tended by a group of computer science majors, headed by the two resident computer gods, Tom Kurtz and John Kemeny, the inventors of the BASIC programming language and both professors at Dartmouth (Kemeny later became president of the college).

Inside Kiewit

The operating system was the Dartmouth Time Sharing System (DTSS), an early example of a multi-user OS that allowed the computer to be shared simultaneously among many users. I remember entering the Kiewit Center and then turning to the right, where there was a room full of teletypes for students to use. Each teletype printed on a roll of yellow paper. There were no monitors in those days. Each student had access to 2KB of memory and could log on with their student ID number and the first 3 letters of their last name as their default password. Computer usage was very much encouraged at Dartmouth. I remember doing Physics homework on the computer. There were also a number of entertainment programs. One was called DATE and was a smart-alecky attempt at an artificial intelligence program designed to entertain your date. Dartmouth was all male at the time and dates had to be imported from neighboring New England schools.


Compared to modern programming languages, BASIC was torture, but we didn’t know any better. Single letter variables, line numbers, no source code formatting, GOTOs — ugh! Nevertheless it was easy to learn and write, if not to read. I wrote some BASIC programs, but soon even the at-the-time huge memory space of 2KB began to seem cramped. It was at this point that I decided to turn to a life of crime and figure out how to steal some passwords.

I wrote a program that emulated the sign-on to the system and left it running on a teletype. The next person who sat down at that teletype would type in his user ID and password, at which point the teletype would turn off, after having written the information to a data file. Sneaky! Doing this I collected a number of passwords, including some from the systems programmers. One fellow even had 307KB of memory!

Part of the Password Stealing Code

Looking over the program listing (yes, I did keep the original listing all these years; chalk it up to a guilty conscience), the code is pretty simple. The DTSS log-on message with the current date and time is printed out. After entering the user ID, the user types in the password on a line that has been overprinted with different characters to mask it. The log-on screen looked like this:

TERMINAL 130 ON AT 13:48 14 JAN 70, 078 USERS
DTSS TILL 20:00. LIST CNEWS*** 13 JAN70.

The clever part of the code is this:

570 LET Q(0)=1
580 LET Q(1)=4
600 PRINT Q$
620 STOP

I discovered empirically that ASCII 4, which is End of Transmission or EOT, would turn off the teletype if printed. In that early BASIC you would define a string as an array, with the first element holding the length of the string, and then convert the array to a string (indicated by a variable name ending in “$”). So, after my victim entered his password, the teletype would die. Since teletypes tended to die randomly anyway in those days, no one thought much of it. They would just turn it back on and start over.
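EOT survives in the ASCII table to this day, so the overall scheme can be sketched in a modern language. This is a rough illustration only: the prompt strings are hypothetical stand-ins, not DTSS’s actual prompts, and the teletype is simulated by plain callback functions.

```python
EOT = chr(4)  # ASCII 4, End of Transmission: the code that shut off a teletype

def fake_logon(readline, record, transmit):
    """Imitate the log-on dialogue, save the credentials, then 'hang up'."""
    user_id = readline("USER NUMBER--")   # hypothetical prompt text
    password = readline("PASSWORD--")
    record(f"{user_id},{password}")       # the stolen pair, off to a data file
    transmit(EOT)                         # on a real teletype: it goes dead

# Demo with stand-ins instead of a real terminal:
stolen, wire = [], []
fake_logon(readline=lambda prompt: "C42769" if "USER" in prompt else "TRACE",
           record=stolen.append,
           transmit=wire.append)
print(stolen)         # ['C42769,TRACE']
print(wire == [EOT])  # True
```

The essential trick is the same as in the BASIC listing: the program never needs to break any security, it just has to look like the real log-on prompt and then get the terminal out of the way.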

Hacked Passwords from 1970

Looking at the list of passwords, they are not much worse than those exposed through today’s password security breaches.  Most people didn’t bother changing their default 3 letters of their last name password.  Of course when the default password is known to be the first 3 letters of your last name, it’s clear there was not a lot of concern about security in those days. It was a simpler era, minus the Internet, viruses, worms and Trojan Horses (excepting my little malignant program). Nevertheless, if you, C42769, are still using “TRACE” as your password, I urge you to change it now.

Besides just demonstrating proof of concept, I didn’t really make use of my hacking results. There were no credit card numbers to steal in those days, and snooping around files containing other students’ Physics homework proved to be unexciting. Sobered by this, I turned away from a life of computer crime and instead became a doctor, though I have maintained my interest in computers and in hacking in the non-evil sense of the word.

Yet as I sit here typing this on a portable computer far more powerful than that which filled a whole building at Dartmouth in the 1970s, I still get nostalgic for those cold, snowy New Hampshire days, and the clatter of teletypes.

Futurama Revisited

GM Futurama exhibit 1964 New York World's Fair

Fifty years ago my parents took me to the World’s Fair in New York. The year was 1964. I was twelve years old. It was a turbulent time in American history. The prior fall John F. Kennedy had been assassinated, initiating a long period of turmoil for the United States. But it was still the era of America’s post-war technological greatness. The country was gearing up to fulfill Kennedy’s vision of a manned flight to the moon before the end of the decade. Products were still made in America, and we used the phrase “made in Japan” as a joke to mean something cheap and junky. People had savings accounts, and there were no credit cards. At the same time, racial discrimination and segregation were widespread. There was cringe-worthy sexism, as anyone can tell by watching movies or TV shows from that era. There was no Medicare. US poverty levels were at an all-time high. Lyndon Johnson and Congress went on to address some of these issues with the Civil Rights Act and the Social Security Amendments of 1965, which created Medicare and Medicaid. Johnson declared the War on Poverty in 1964, and poverty levels did fall. At the same time, an undeclared war in southeast Asia was to cast a large shadow over his legacy and over the lives of boys turning 18 through the next decade.

Nevertheless it was a beautiful warm summer day when we visited the Fair. I remember the day well. Having devoured the Tom Swift, Jr. books and then the science fiction of the three grandmasters, Asimov, Clarke, and Heinlein, I was filled with boundless optimism about the future of technology. The Fair was crowded with Americans who didn’t look much like Americans of today. Neatly dressed. Thin. I was old enough to notice the pretty teenage girls, just a few years older than I, working summer jobs at the fair. I remember riding up the elevator in one of the saucer-like observation towers (you know them, they play a prominent role in the movie “Men in Black”) and shyly eying the cute girl seated on a stool operating the elevator controls. Yes, for you younger readers, elevators used to be manually operated. The fair made a lot of predictions, but I don’t think automatic elevators were among them.

The General Motors pavilion was aptly named Futurama. There is a YouTube video showing what it was like. I waited expectantly in the heat in a long line that stretched around the rectangular concrete windowless building. Inside we sat on cushioned chairs that automatically moved through the exhibit. There were vistas of a technologically rich future. Spacecraft exploring the moon. Scientists controlling the weather from a station in Antarctica. And in the environmentally naive outlook of that era, large machines cutting down rain forests to build roads to deliver “goods and prosperity.”

This exhibit was a highlight of the fair. Afterwards we went to the General Electric pavilion where we witnessed a demonstration of nuclear fusion (was it real? I honestly don’t know, and the Internet is vague about it). There was a loud bang and a bright light.  All very impressive, especially at my young age.

There have been a number of recent articles (e.g. here, here, and here) about the Fair and about which predictions it got right and which were wrong. Curiously, there weren’t any predictions about medical science that I remember. Maybe I wasn’t paying attention. I think I wanted to be an astronaut back then. Pacemakers were brand new, and digitalis and quinidine were staples for the treatment of abnormal heart rhythms. The huge advances in medicine that were to come between then and now could not even be imagined.

I remember there was some stuff about computers, but at the time a single computer with less memory and processing power than that in my cell phone filled a large room. And yet it’s amazing that level of computing power was able to get us to the moon. The thought that everyone would carry their own personal computer/communicator in their pocket was pretty far-fetched. A few years later in Star Trek Captain Kirk would use something that looked like a flip-phone, but gosh, no capacitive touch screen! It did have a neat ring tone however.

The networking together of the world’s computers (aka the Internet) was certainly not predicted. Having the knowledge of the world a few mouse clicks away is probably the most significant advance of the last 20 years or so. It has altered our lives, I believe mostly for the good (except when I read YouTube comments), in a fashion unimaginable 50 years ago. I’m disappointed that the exploration of space didn’t turn out as predicted. Where are our moon colonies, or our base on Mars? But I’m happy with the way the Information Age has turned out, and I wouldn’t trade my ability to spend an evening browsing Gigliola Cinquetti videos on YouTube for anything.

The social changes that have occurred since then have been significant and generally for the good. Communism has been marginalized and the threat of nuclear war diminished. Religious fundamentalism remains a thorn in the side of humanity, as it has always been. Certainly there is still sexism and racism and we have further to go in correcting social injustice. But if I had told my dad back in the 60s that the United States would elect a black president, I’m sure he would have said something like “That’ll be the day!”

In the Catacombs of Paris

One night many years ago I was driving my son Kevin to a hockey tournament in Casper, Wyoming. It was winter and Denver had been hit by a snow storm. Although I had left Denver at a reasonable time, the traffic was very slow, so we didn’t arrive in Casper until very late. At about 1 in the morning, on a lonely road between Cheyenne and Casper, we stopped the car to get out and stretch our legs for just a few moments. It was very cold, certainly less than 10 degrees Fahrenheit. The sky was clear and moonless. There were no lights anywhere. We were miles from the nearest town, and there were no cars on the road at that hour. We looked around us, then looked up.

Persons who live in the city or the suburbs never really see the stars. In the city, you may see the planet Venus and some of the brightest stars, like Sirius. In more rural areas the constellations are outlined, and there is a faint glow from the Milky Way.

At 1 AM in the dead of winter in the middle of nowhere in Wyoming, the stars literally blazed in the sky against a pitch black background. There were more stars than I had ever seen before. Stars between stars, and fainter stars between them. Fuzzy blurs of nebulae. The Milky Way, the edge-on appearance of our own galaxy, which had always looked like a faint haze before, was ablaze. The colors of the stars were unmistakable, from incandescent white to electric blue to fiery red.

Standing there, facing infinity, I could not escape the plain evidence of my insignificance compared to the vastness of the Universe. The experience was overwhelming. It’s unfortunate that people rarely see the stars like that. Realizing our place in the Cosmos helps put into perspective how unimportant our petty problems really are.

I visited the Catacombs under Paris today. Beneath Paris there is a vast network of mines that were used to obtain the limestone, gypsum, and other material from which the city was built. The mines are ancient, dating from the 13th century, and, except for the Catacombs, are off limits to the public; in fact it is illegal to enter them. It is estimated that they extend for at least 280 km below the city, though no one knows their true extent. The Catacombs are located in one part of these mines. They stretch for 1.7 km. They are filled with bones. In the late 18th century, Paris’s cemeteries were in disrepair, with burial grounds collapsing. A decision was made to move all of Paris’s dead to the underground mines, creating the Catacombs.

It is a bizarre and eerie place. The Catacombs are located deep below the level of the Metro tunnels, just above the water table, in a geologic stratum known as the Lutetian, which dates back 40-48 million years, when Paris was covered by an ocean. Chamber after chamber is filled to the brim with bones. The bones are neatly stacked, femurs alternating with skulls in grotesque patterns. The number of the dead is estimated to be between 6 and 7 million.

Looking at these anonymous bones, it is impossible not to have a feeling of smallness similar to that I had on that cold starry night many years ago. Each bone belonged to a human being who was born, was a little baby, ran around with his or her friends as a child, grew up, had friends, enemies, neighbors and loved ones, and then died. Each one had a name, now forgotten. Each had hopes, dreams, ambitions, misery, pain, happiness — everything that makes us human. All the things that undoubtedly seemed so important to these people are forgotten and of no importance today. Maybe some were my ancestors (I do have some French antecedents). Maybe a particular femur or skull I saw belonged to someone whose marriage made it possible, hundreds of years later, for me to be born. It’s all very sobering.

Emerging from the Catacombs, the sun was bright, the sky was blue, and the hustle and bustle of Paris had not missed a beat. In one sense the Universe is unimaginably vast and we are very small, here for just a few heartbeats and then gone. In another sense the Universe is just an electrical pattern in our brains, and these electrical patterns, that is, our minds and thoughts, are all that we really experience. When I die the Universe will cease to exist from my point of view, since unfortunately mine is the only point of view that I actually experience. Maybe our activities among our fellow human beings who happen to live at the same time as we do will not make any difference to anyone hundreds of years from now. Viewed from a cosmological context, nothing seems to matter. Viewed from the context of a person living in the here and now, our interactions with each other are weighty and important, at least to us. But seeing as we are all going to end up like the denizens of the Paris Catacombs one day, maybe we should try to be nicer to each other while at the same time not worrying too much about our mistakes.