Inflection Point: The Science of Healthcare Delivery and How We Save the Medical Profession
Robert Groves, MD
It couldn’t be more obvious: Healthcare is in crisis.
We know that the service provided is poor by any reasonable measure of customer service. The “evidence-based medicine” movement has started us in the right direction, but we have not done nearly enough: Fewer than half of our opportunities for evidence-based care are ever identified or acted upon. For patients, the odds of receiving recommended care are no better than the flip of a coin.
Overtreatment is also a problem. There is well-documented evidence that we provide unnecessary care--care that doesn’t add value, and in some cases even causes harm, not to mention unnecessary expense. This is not good enough for those we love; it is unacceptable for those we serve.
We know that healthcare is on an unsustainable financial path. Variability in the cost of services--though decreasing in some markets and among some providers--is still widespread, and it is unexplained by disease type, severity, or even patient socioeconomics. Some areas of the country spend three times as much as others on a given treatment, with no discernible difference in quality.
Preventable tragedy
For me, the issue of care reliability is personal. My mom was a competitive tennis player. She hurt her back moving furniture, and her level of activity decreased significantly.
Soon after, as she was recovering, she developed difficulty breathing. Worried, she made a Friday afternoon appointment with, by reputation, one of the best internal medicine specialists in town. He evaluated her thoroughly and concluded that, because of her pending divorce, she was most likely having anxiety--but to be sure, he ordered a lung scan to look for blood clots. He ordered it for early Monday morning.
My mother was found that Sunday on the floor in her bedroom. She was 69 years old.
At her autopsy, the cause of her death was found to be a massive blood clot in her lung. The physician, of course, was devastated--even more so because a member of his team had checked her oxygen saturation, which had dropped well below normal with a simple walk down the hall. But somehow that information never got to the doctor.
The nurse who had checked her oxygen was also wracked with guilt. She had believed the doctor had seen the result. No one escaped the pall of grief and regret.
I can’t help but think that her death was preventable. Why wasn’t there a protocol for information transmission in place? What if the approach to suspected pulmonary embolus were standardized? Would the saturation have been noted? Would the scan have been ordered urgently, even though it was after-hours on Friday afternoon?
I don’t know the answer with certainty. But I know that once patients with suspected blood clots start treatment, death is rare. Healthcare delivery quality can be improved, and we know how to do it.
The science of reliability
Process improvement has revolutionized society, and it can do the same for medicine.
We already know what “new” science is required to significantly improve our performance and to address the profound dysfunction of the system. We have seen numerous examples from other industries--some of which, like healthcare, are complex and nonlinear--such as the automotive industry, manufacturing, and information technology. And we now have substantial evidence from pockets of excellence within healthcare itself. That “new” science is the “science of reliability,” or, when applied to what we do, the “science of healthcare delivery”: statistical process control, continuous quality improvement, and the rigorous management, simplification, and standardization, to whatever degree possible, of every single healthcare process. As in other industries, this targeted effort toward reliability has been shown to facilitate both efficiency and efficacy, allowing us to “get it right” far more often. Moreover, improvements in quality and efficiency invariably lead to a reduction in overall cost.
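To make the idea of statistical process control concrete, here is a minimal sketch, in Python, of the kind of calculation a delivery team might run on a single process metric. The metric, the numbers, and every name in it are illustrative assumptions of mine, not figures from any real health system and not a prescription for how such a chart must be built.

```python
# A minimal, illustrative sketch of statistical process control (SPC) applied
# to a healthcare delivery metric. The metric, the data, and the names below
# are hypothetical -- they are not drawn from any particular health system.
from statistics import mean

# Hypothetical weekly averages: minutes from suspicion of pulmonary embolus
# to the start of treatment.
weekly_minutes = [62, 58, 65, 60, 57, 63, 59, 61, 94, 60, 58, 64]

# The center line of an "individuals" control chart is simply the mean.
center_line = mean(weekly_minutes)

# Estimate routine process variation from the average moving range between
# consecutive points, divided by the d2 constant (1.128 for ranges of two).
moving_ranges = [abs(b - a) for a, b in zip(weekly_minutes, weekly_minutes[1:])]
sigma_hat = mean(moving_ranges) / 1.128

# Classic Shewhart three-sigma control limits.
upper_limit = center_line + 3 * sigma_hat
lower_limit = center_line - 3 * sigma_hat

print(f"Center line: {center_line:.1f} min")
print(f"Control limits: {lower_limit:.1f} to {upper_limit:.1f} min")

# Any week outside the limits signals a "special cause" worth investigating,
# rather than routine week-to-week noise.
for week, minutes in enumerate(weekly_minutes, start=1):
    if not lower_limit <= minutes <= upper_limit:
        print(f"Week {week}: {minutes} min is outside the control limits")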
Turmoil can create opportunity: the opportunity to fully embrace this “new” science and to apply our passion and curiosity as healers and scientists to today’s biggest challenge--making healthcare affordable and convenient while drastically improving reliability and quality. But change is not easy. It turns out that the older science, the science of medicine--the one with which we are far more familiar--didn’t get off to a clean start either.
Medicine’s rocky start
As recently as the early 19th century, there was little that physicians could actually do to change the course of disease and injury. Medical education was not much more than an apprenticeship. The medical schools, if we can call them that, were little more than a way for physicians to make extra money and, ultimately, to pass along to their charges the ability to do the same. There were no educational standards and no academic requirements. If one wanted to become a doctor, one simply paid the money and followed another physician around, adopting his practices. The more fortunate might get some exposure to other physicians and learn a little physiology or anatomy. Practicing physicians, in turn, were paid for their time training would-be practitioners, but there was no formal teaching role, no academic rigor, and certainly no quality control or improvement of the educational process.
So what kinds of things might an apprentice physician learn in that era? Surgery, before the introduction of anesthesia and antisepsis later that century, was a quick and brutal process. Time was of the essence, because there were few available means of mitigating the excruciating pain--a little alcohol, perhaps some opiates--and patients were essentially awake for amputations and procedures such as removal of tumors of the breast or other superficial areas of the body. (It is no wonder that patients pursued surgery only once the pain had become more intolerable than the treatment.) Mortality rates were very high; the generally accepted mortality for amputations was around 50%. Typically, the floor of the operating theatre was covered with sawdust to absorb the blood, and the physician wore a “frock” coat to catch the blood spatter, a coat that was rarely, if ever, washed or even removed from the operating theatre. Surgeons washed the blood off after they were done for the day. Surgical wards stank of pus, and the prevailing theory was that transgressing the skin led to oxidation of the tissues, that this oxidation led to infection, and thus to the staggering mortality.
The mid-century introduction of nitrous oxide and ether truly was an amazing development, though even that took time for physicians to accept and adopt. Nitrous oxide, the first agent to be applied, was literally a party drug that an insightful dentist recognized as potentially useful in his practice. But then came ether--and now more complex procedures could be tolerated, and human suffering was drastically reduced.
Still, the accepted model of surgical intervention--and the incidence of complications--wouldn’t budge for some time. Procedures were still mostly quick and always dirty, and the horrific mortality rates remained unchanged. Entering the abdomen was simply not done; surgical complications tended to be lethal for anything more involved than amputation or superficial tumor removal.
The science of medicine is born
In the latter part of the 19th century, the French and the Germans were considered the very best medical practitioners in the world, and, building on the academic models of the great European universities, the intellectual landscape was about to be transformed. In Europe, the microscope was gaining favor; Louis Pasteur used the device while trying to help the wine industry understand why some of its batches went bad while others did not. Pasteur identified in the bad batches (but not in the good) small rod-shaped structures invisible to the naked eye. Though many scientists had known about these structures for some time, the causal relationship had not been established. Pasteur then showed that heating the wine--“pasteurization,” as it became known--could eliminate these rods and save the wine.
(Note: It is interesting how intoxicants drove the early progress in medicine. First nitrous oxide, then ether--both party drugs at the time--and now wine.)
To understand how Pasteur’s discovery made the leap into medicine, we must turn our attention to Joseph Lister, who was born in Great Britain and pursued a career in medicine during the latter half of the 19th century. Young Joseph was raised in relative privilege and so was fluent in French and German. His father, Joseph Jackson Lister, was a scientist in his own right whose hobby was optics, and who solved a critical issue with the microscope that vastly increased its resolution. This confluence of fortune would soon play out to the advantage of the science of medicine. Young Joseph had all he needed to put the pieces of the puzzle together--he was fluent in the languages of the science of medicine (German and French), and he grew up an expert in microscopy thanks to his father. And, I should mention, he was a very, very dedicated soul.
Lister went about touring the leading medical centers of France and Germany, learning all that he could about the state of the art in medicine. The microscope was the newest tool in medicine, and many scientists were eager to discover possibilities that had previously gone unseen and so unstudied. Lister knew that microscopy was used extensively, not only in the rigorous German model of bench science, but increasingly in France as well. This “new-fangled device,” however, had not been embraced in the English-speaking world. Dr. Lister nonetheless began to use it to study the difficult problems of the day. His primary interest was the phenomenon of inflammation and blood coagulation, and he came to the conclusion (using his microscope) that the presence of foreign bodies must be the inciting cause of blood coagulation. At the same time, he continued to ponder the major challenge of the day--the incredible and seemingly intractable problem of surgical mortality.
While Dr. Lister was a professor of surgery at the University of Glasgow, he began discussing his insights with Thomas Anderson. It was Anderson who told him of an article he had read, written by Louis Pasteur, discussing the presence of the rods in bad wine. Lister postulated that the rods that caused wine to putrefy might also be the cause of infection, and he indeed identified these “rods” in wound pus. He then made another leap, based on his knowledge of a local phenomenon: a town had been struggling with the stench of sewage and, worse, cows near the sewage were getting sick. The town solved the problem by spraying the area with carbolic acid.
So Lister had the work of Pasteur, his own observations, and the knowledge of the town that had used carbolic acid successfully--and from these he created a method of antisepsis. He ran his first trial of the concept on August 12, 1865. The patient was an 11-year-old boy with a compound fracture of the tibia. Lister treated the wound with carbolic acid-soaked bandages, and the boy not only lived but kept his leg. The recommended treatment at the time, amputation, carried a 45% mortality rate, so this was big news.
In 1867, Lister published a series in The Lancet showing a dramatic reduction in mortality using “the Lister approach.” Unfortunately, it was largely ignored. In fact, it took almost an entire generation before the concept was embraced, and the United States was particularly slow to adopt it. After all, this had nothing to do with surgery as it was then understood: surgery was about speed and brawn, and the process Lister suggested would take at least three times as long. The science was ignored because it did not fit the mental model and, worse, it dramatically interfered with productivity.
So at the end of the 19th century, we now “knew” about anesthesia and antisepsis--and yet, at least in the English-speaking world, nothing much changed.
The turning point
The next chapter in the evolution of medical science is about how a small group of dedicated minds in a single place--far from the celebrated universities of Europe, and far from being considered the best the U.S. had to offer--actually changed the world.
It happened, at least in large part, in Baltimore, Maryland--considered a bit of a backwater by New York standards, and certainly not a center of excellence. A merchant, financier, and philanthropist named Johns Hopkins left $7 million to found a dream: a hospital and a university under a single leadership.
Upon his death, Hopkins left his fortune to realize this dream, entrusting the work to a board composed mostly of Quakers. The board hired Daniel Coit Gilman and John Shaw Billings as the leaders of the two entities. Gilman and Billings scoured the schools of Europe and the Americas to understand the best way to move forward. They decided to embrace the philosophy of the science of medicine, and to juxtapose this new scientific method and its newly coined “laboratory” side by side with clinical medicine. The men they chose to lead this work as department chairs--six in all, every one of them German-trained--would become some of the most celebrated names in medicine, among them William Welch, Howard Kelly, and the giants of the science of surgery and medicine, William Halsted and William Osler, respectively. The hospital opened in 1889, but there was a problem with the university: B&O Railroad stock had tumbled, and there was a $500,000 shortfall in funds.
There were, however, four daughters of trustees who agreed to raise the necessary funds--but they had conditions. They would raise the money only if:
Women were evaluated on equal footing with men.
This new program had to be a graduate school, accepting only those students already university-trained.
There must be high academic standards, so that not just anyone might get in.
The trustees, of course, refused--more than once. But when they became desperate, they relented. The money was raised, and the school of medicine opened its doors in October of 1893.
The result was a series of firsts in American medicine:
The first medical school to admit women on equal footing with men
The first medical school to include courses in laboratory sciences
The first full-time teaching faculty
The first combination university and hospital under a single management umbrella
It was a grand experiment: the full embrace of the science of medicine side by side with care delivery, and it was a remarkable success. William Halsted would go on to radically transform surgery in this country and, subsequently, around the world. Osler would transform medical teaching and, after a later move to Oxford, would be knighted. These men truly did change the world of medicine for the better.
When the Carnegie Foundation asked the educator Abraham Flexner to assess the state of medical education in the United States, Johns Hopkins became his model. When the Flexner Report was published in 1910, its impact on medical education was enormous. Flexner went about the country with funds from the Rockefeller Foundation and offered support to those institutions that could meet the new standards. Many could not, and were subsequently closed.
This was an inflection point in the history of medicine. The medical world was transformed, and the United States was catapulted to world leadership.
So, here we are today...
We stand at the next inflection point in history. Much of the history of medicine is interwoven with racism and misogyny. When we create a new future, what framework will we use? What has worked for the last hundred years is in need of change. What will our grand experiment be? Of course, medical universities will continue to have a core responsibility to solve the problems of the future, and bench research remains important. But do we collectively, universities and all, not also have a responsibility to solve the problems of today--problems that threaten the very integrity of our delivery system on a national and even international scale?
The answer is yes, we do. What if we were to embrace that challenge with all of our considerable resources? We already know how to start. Today there are really two sciences in our world: the science of medicine, which I’ll call “the what,” and the science of healthcare delivery, which I’ll call “the how.”
The science of delivery is today’s challenge. We know we don’t provide reliable, high-quality care at every encounter. We know that at least half the time we undertreat--we don’t even do the things we all agree we should be doing. We know that we generate extensive waste from poor quality. And we see high variability, overtreatment, overdiagnosis, poor coordination, and poor access.
The good news is that these problems are all solvable. We have the tools to transform our system. All that is left for us to do is to embrace the science of care delivery and harness the wisdom of our teams in academics, in delivery, and in our partnerships with payers and providers. Other industries have shown us that it is possible, even in the face of dynamic complexity, to dramatically improve performance using reliability science, behavioral economics, human factors engineering, rapid-cycle improvement, and technological enablement. This is the science of healthcare delivery, and it offers the only sustainable future for the medical profession. It is the single most important responsibility of every component of our fragmented system, including academic medicine. It is our problem to solve. And, like surgical mortality in the 19th century, until we solve it we have little chance of success.