These images are routine in modern medicine, but at the start of the century none of them was attainable because, with one exception–X-rays–the technologies did not exist. If we think of the inside of the living body as a planet awaiting exploration, the area mapped in 1901 was akin to the geography of Cape Cod. The rest of the human planet awaited surveyors.
X-rays, discovered in 1895 by Wilhelm Roentgen, were still a novelty in 1901 when President William McKinley, standing before a crowd at the Pan-American Exposition in Buffalo, N.Y., was shot in the abdomen. No one suggested trying to find the bullet with the X-ray machine that gleamed in the exposition’s Hall of Science 50 yards away. In 1901 X-ray machines depended on gas-filled glass tubes that often shattered, and patients frequently suffered burns–some from the vagaries of portable electric generators and some, we know now, from the effects of ionizing radiation. During the eight days McKinley lingered, his doctors, fearing to risk his life and their reputations, eschewed the newfangled instrument.
Others with greater vision realized the significance of X-rays. Just a few months later Roentgen won the first Nobel Prize in Physics. Corporations like General Electric invested heavily in developing the technology. At GE, William David Coolidge first improved the light bulb by replacing carbon with a tungsten filament.
Then he put tungsten filaments in X-ray tubes. They were better, but remained fragile until Coolidge pumped out the gas. This vacuum X-ray tube was clinic-ready in 1913. Stable, reliable and safer because there was less scattered radiation, the Coolidge tube brought radiology into routine medical practice just in time to go to the front in World War I.
Like most modern wars, World War I was good for medicine. By 1917 every belligerent army had mobile X-ray units. Occasionally, medics saw an odd phenomenon that presaged the next wave of imaging improvements. The accidental introduction of air into a wounded soldier’s head made the inside visible on an X-ray. In 1919 an American brain surgeon deliberately injected air into a patient’s skull, X-rayed it and demonstrated the effectiveness of air as a contrast agent. Other contrast agents followed, including iodine, which is opaque to radiation. French doctors safely injected iodine suspended in oil to get images of the interior of the spine.
In the 1920s doctors were frustrated by their inability to see beneath the rib cage into the lungs and heart. The solution came from the observation that X-rays of pregnant women did not show a fetus if the fetus was moving. Working separately, four inventors patented machines in which either the X-ray source or the patient, or both, would move, thus blurring out bones on top of the organ doctors wanted to see. These images were called tomographs–a word coined from the Greek tomos, for section or slice.
World War II was even more fruitful for medical progress, thanks in part to the atomic bomb, whose development spurred a proliferation of nuclear reactors that were used to produce radioactive isotopes. The Atomic Energy Commission began distributing these isotopes to hospitals in 1946. In the early 1950s physicians injected consenting patients with small amounts of short-lived radioactive molecules that they could follow with Geiger counters as the molecules traveled through the bloodstream. The resulting “images” were often no more than zigzagging tracks on carbon paper.
Twenty years later, medical physicists, using advanced radioisotope technology, would inject radio-labeled chemicals into the bloodstream. The incredible result: a time-release photograph of metabolic function–the body in action. This imaging technology, called PET (positron-emission tomography), today lets doctors watch the flow rate of blood or see how the brain uses energy to add or subtract or remember a name.
But PET’s journey from a crude scribble to a computerized section of a living brain required the discovery of the correct mathematical formulas to process the data. In the mid-’60s, many of its pioneers did not believe that computers, with their limited memories, would ever be able to handle the mass of data needed to create these images, so the mathematics would have to shoulder as much of the burden as possible. And so began the search for the right equation.
The search became the obsession of Allan Cormack, a South African physicist who moved to Tufts University in 1962. While filling in briefly in a hospital radiology department several years earlier, Cormack had been surprised at the excess radiation patients absorbed during radiation therapy. He figured that if narrow beams of X-rays passed through a body, the density of each kind of tissue they traversed could be found by comparing the radiation that emerged with the known quantity each beam carried going in. This information could be used to reconstruct an image of a slice within the body, and, with the eventual aid of computers, many such slices could be combined into a seemingly three-dimensional image. In 1963 he demonstrated a crude prototype of what became the CT scanner. But no one seemed interested.
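In modern notation, Cormack’s comparison is essentially the Beer-Lambert attenuation law. A beam that enters the body with intensity $I_0$ and emerges with intensity $I$ obeys

$$ I = I_0\, e^{-\int_L \mu\, ds}, \qquad\text{so}\qquad \int_L \mu\, ds = \ln\frac{I_0}{I}, $$

where $\mu$ is the tissue density (the attenuation coefficient) along the beam’s path $L$. Each measured beam therefore yields one line integral of $\mu$; collect enough beams from enough directions, and a computer can solve for $\mu$ at every point in the slice.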
At almost the same time in England, Godfrey Hounsfield, an engineer at the music company EMI, realized he could get an image of a slice inside an object by sending X-ray beams through it from a variety of angles. He knew that it would take thousands of mathematical equations to reconstruct an image from this data, but he was confident computers could handle them. EMI, flush with profits from Beatles records, agreed to fund the project. Hounsfield demonstrated the first CT machine in London in October 1971. Crude and slow by today’s standards, it was nevertheless good enough to reveal a brain tumor and save a patient’s life. Within a year American hospitals were racing to buy EMI’s machines, and rival companies to patent their own. In 1979 Cormack and Hounsfield shared the Nobel Prize in Physiology or Medicine for the invention of computerized tomography.
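A minimal sketch of this kind of many-angle reconstruction, assuming Python with NumPy and a recent scikit-image (the test phantom, angle grid and ramp filter are illustrative stand-ins, not EMI’s original method):

```python
# Reconstruct a cross-sectional slice from X-ray projections taken at many angles,
# in the spirit of Hounsfield's approach: project forward, then invert by
# filtered backprojection. Assumes numpy and scikit-image are installed.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # synthetic "slice" of known densities
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Forward step: each projection records how much every parallel beam is
# attenuated on its way through the slice at one angle.
sinogram = radon(image, theta=angles)

# Inverse step: filtered backprojection combines the thousands of beam
# measurements into an estimate of the original slice.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

print("mean absolute reconstruction error:", np.abs(reconstruction - image).mean())
```

With more angles and finer sampling, the reconstruction error shrinks, which is exactly why Hounsfield needed a computer rather than a desk calculator.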
Almost immediately PET researchers, using the CT computer formulas, got excellent images. Within the same decade, chemists applied these formulas to an altogether different kind of machine. Magnetic resonance imaging reconstructs images from data generated by the nuclei of hydrogen atoms, which are single protons. When placed in a strong magnetic field and bombarded with radio waves, these protons emit signals that computers can turn into a picture.
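Roughly speaking, the radio frequency the protons answer to is set by the strength of the magnetic field through the Larmor relation $\omega = \gamma B$, where $\gamma$ is the proton’s gyromagnetic ratio. Tip the protons with radio waves at that frequency, and the signal they give back as they relax is what the computer assembles into a picture.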
MR imaging was a major step forward. The problem with CT scans was that they were based on X-rays, which are largely blocked by bone and reveal little of what lies within it. MR could image slices of soft tissue and pick out details within the bones themselves. PET, which followed MR imaging into routine clinical use, uses radioactive isotopes to show slices of function inside the body. By the late 1980s computerized imaging had reached ultrasound, which turns reflected sound waves into images of blood flow and of fetuses moving inside the womb. By the late 1990s the digital revolution had caught up with the original X-rays, which today are increasingly produced directly on computer screens without the intermediary stage of film.
The medical advances from all this have been enormous, as have been the bills. But the cost in anguish and dollars to patients has been reduced with the virtual disappearance of exploratory surgery. Money saved in the operating room seems instead to have gone to purchase these expensive imaging machines.
Today the human planet is almost completely mapped, and medical imagers are focusing on manufacturing cheaper, faster models that unify different technologies. Corporate economics no longer fosters much long-term research. Great corporate laboratories of the past, from GE to EMI, are shadows of their original selves, and medical economics no longer indulges big-ticket items. But faster and cheaper is an incentive, too, and where this will lead may surprise us all.