The majority of patient visits and tests are driven by a perception of need, a social requirement, or guideline-driven follow-up.
When we buy healthcare, each of us must ask whether it is as good as healthcare elsewhere and why we should spend more than we need to for it. If the only answer is that buying an inferior product (access, mortality, and morbidity statistics are no better in the U.S. than elsewhere) helps insurers, pharmaceutical manufacturers, hospitals, and others employ more people while asking us to pay more taxes to help the uninsured, then it is time to rethink. Contrary to free-market principles, the consumer is unable to see a price list. No physician can give one, as price varies with each insurance carrier.
An often forgotten fact: less than 1% of healthcare involves hospitalization. Likewise, debates centered on end-of-life and family-planning issues ignore the major expenditures. Use of a healthcare system derives from its benefits, and many services are mandated by schools and employers. Others are advised for prevention (annual physical, Pap smear, vaccination), and still others for follow-up of chronic conditions. We must consider whether utilization of facilities is being driven by misperception or by requirements that strain our budget. We must be sure that care is delivered in the appropriate cost environment.
On definitions, primary care, and insurance for all: Insurance is setting aside enough for an ultimate need (a form of risk management used to hedge against the possibility of a loss). We actually have an industry that does this for us. By its very nature, because the insured travel, healthcare insurance is interstate commerce, but the federal government has been reluctant to adopt adequate regulation. Thus, insurers are subject only to the free market and individual state politics. Current discussions seem to pit those who would create a governmental competitor in the free market, and those who would abandon the free market and accept a Medicare-like or VA-like environment, against those who would merely call for restraint from all parties.
Healthcare refers to the treatment and management of illness, and the preservation of health, through services offered by the medical, dental, pharmaceutical, clinical laboratory, nursing, and allied health professions. Healthcare embraces all the goods and services designed to promote health, including “preventive, curative and palliative interventions, whether directed to individuals or to populations” (WHO report, 2000). It is not limited to discussion of hospital or doctor fees, prices of medications, or insurance. While healthcare is not defined in our Constitution, it is intrinsic to the pursuit of happiness. It is a right that is not limited by the Constitution or any amendment.
Primary care is care provided by physicians specifically trained for and skilled in comprehensive first contact and continuing care for persons with any undiagnosed sign, symptom, or health concern (the undifferentiated patient) not limited by problem origin (biological, behavioral, or social), organ system, or diagnosis. For many illnesses, a specialist serves as the primary-care physician.
We currently spend a lot of time and energy debating solutions to a perceived primary-care physician shortage. The notion that money is both the cause of the problem and its answer seems to rank foremost in any discussion. As it does, we end up pitting payers against those being paid, and those being paid well against those who are not. Solutions such as reducing compensation for specialists to pay primary-care physicians create a framework for restructuring our healthcare system that leads to power struggles rather than reasonable solutions.
Before 1960, the notion that there was a group of people called healthcare providers, or a division into primary, secondary, and tertiary physicians, did not exist. These concepts have grown as medicine has become more complex (although I am sure a 1960 doctor felt that the medicine of his day was more complex than what came before). As more physicians have found it necessary to spend extra time studying to master the growing body of available information, and have specialized, a greater percentage of physicians have limited their practices to areas in which they feel more competent. Fewer and fewer physicians remain willing to be generalists. Few physicians today would accept the label of generalist, as this was always the name for those with the least training. Yet these generalists were the vanguard, the first faces our forefathers saw when they were ill. They were needed, just like the military corpsmen who started treatment before the mobile army surgical hospital unit.
In an effort to bolster the supply of well-trained and up-to-date physicians available to the general public, modern medicine has promoted postgraduate medical education. The new generalist is highly educated, aware of advances in all specialties and often certified as part of a new specialty called family medicine or internal medicine. These doctors know more than their predecessors, and thirst for more knowledge. Many of these doctors have noted that further education in the form of fellowships made them more comfortable in a specialty, or fanned desires to venture further into research. As training positions were plentiful and supported, there was little reason not to go forward. In short, we have made a system that rewarded more education with greater honor, fees and privilege.
Well-trained primary-care physicians have been forced by insurers to become gatekeepers, dealing with massive amounts of paperwork to justify their care and their use of specialists or laboratory tests. It is not uncommon for a primary-care physician's office to spend 45 minutes getting approval for an advanced test, an expensive drug, or even a specialty referral. Overburdened by complex coding to satisfy government statisticians, by the need to justify care to the patient, and by an immense amount of paperwork, fully one-third of our busiest practitioners contemplate early retirement. Currently, only 10% of the medical students we train declare an interest in primary care, and many of those become hospitalists. The result: the major portion of actual primary care in the United States is provided by specialists or in hospital outpatient or emergency facilities. For primary-care physicians the only remaining choice is to limit patient visits, and thereby income, or to work a day that never ends, and consequently have no quality of life. We must find a way to unburden the primary-care physician and permit care to be rendered by those most able to do so. In my state, with four medical schools, making an appointment with an experienced physician willing to take new patients may be difficult. Without enough physicians, preventive care will be delivered by physician extenders (nurse practitioners, physician assistants) reporting to distant physicians, a future that has already arrived for some.
Despite the clamor for more primary care, we must demand evidence that annual visits and testing result in improved healthcare outcomes. To date, such evidence does not exist for low-risk patients.
The next 10 heart-attack survivors or hypertensive patients coming to my office will each require different coding that I must look up and justify. Evidence of a health or cost benefit from such use of physician time is lacking.
Systemic change cannot occur without altering rewards. If there are not enough primary-care physicians, raising their payments (or lowering those of specialists) may help but will not by itself increase availability. Either we increase the percentage of primary-care physicians produced by medical schools and training programs, or we import willing physicians from elsewhere. Otherwise we cannot assure consumers a primary-care physician even if they have insurance. The only way to accomplish this within our own country is by raising the bar to specialization. The cost of raising such a bar, however, may be excessive, as scientific advances in medicine result from specialty research.
Were we to diminish, limit, or stop federal support for specialty and subspecialty programs, we could partially redress the imbalance. Other alternatives might require service as primary-care physicians from those seeking subspecialty grant support, or consideration of the provision of low-level primary care by alternate providers skilled in checklist medicine. Only the ill would then get to see a physician. The danger here is that those seeing well patients would have no experience in dealing with illness, while those seeing the ill would lack contact with wellness.
In part, many healthcare definitions have been modified for economic and political gain. If a million people are afflicted by a disease (defined by strict criteria), but two million more are at risk of the disease (defined by less stringent criteria), we can increase the perceived importance of the disease in the public eye (3 million people are more than 1 million) simply by changing our definition. Research into the disease would then be more likely to be funded by government or other agencies. Pharmaceutical development would proceed more rapidly, and pressure would mount for the FDA to approve new drugs. Over-the-counter remedies would fill the airwaves, and business would increase for advertisers. Let's use diabetes as an example. According to the American Diabetes Association, an estimated 24 million Americans have diabetes, and one quarter of them are as yet undiagnosed. Each one million people newly diagnosed will need annual eye exams, doctor visits, blood tests at least four times a year, and new medications. According to Consumer Reports, a diagnosis of type 2 diabetes has an individual impact of approximately $6,000 per year in health-related expenses. As our population becomes more obese, the cost of such a diagnosis becomes staggering. We also know that prediabetes, or metabolic syndrome, causes problems similar to those of type 2 diabetes. If we change the definition of diabetes to include prediabetes, we increase the number of involved individuals, and the cost to our nation. It is possible, but not yet proven, that there would be a health benefit from this change in definition; but what is the real cost of changing definitions?
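The arithmetic behind this concern can be made concrete with a quick back-of-the-envelope sketch. The $6,000-per-year figure is the Consumer Reports estimate cited above; the count of newly included patients is an illustrative assumption, not a measured number:

```python
# Back-of-the-envelope cost of broadening a disease definition.
# The per-patient figure is the Consumer Reports estimate quoted in the
# text; the patient count below is an illustrative assumption.

ANNUAL_COST_PER_PATIENT = 6_000  # USD per newly diagnosed patient per year


def incremental_annual_cost(newly_included_patients: int) -> int:
    """Added yearly spending when a definition change pulls in new patients."""
    return newly_included_patients * ANNUAL_COST_PER_PATIENT


# Suppose redefining diabetes to include prediabetes brings in
# 2 million additional people who now meet the diagnostic criteria:
print(incremental_annual_cost(2_000_000))  # 12,000,000,000 -> $12 billion/year
```

Even under these rough assumptions, a definitional change of this kind adds spending on the order of billions of dollars per year before any health benefit is demonstrated.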
Currently, definition decisions by panels and medical societies reflect little thought about economic impact. Perhaps Congressional Budget Office analysts will need to be involved in such decisions. The same issues have arisen with hypertension, Alzheimer's disease, AIDS, autism, osteoporosis, and a host of other disorders. The cost of this proliferation of newly defined epidemics (diagnosis creep) is far in excess of the benefit to society. Similar examples abound in the field of infectious disease, for both medicines and vaccines. How much has been spent on medications proven neither to decrease the death rate from influenza nor the rate of infection, but only to limit the duration of symptoms? We do not ask insurers to pay for medications that relieve cold symptoms; should they be required to pay for symptomatic relief of other non-disabling diseases?