America’s healthcare debate began in the late 19th century, during the Industrial Revolution, when mill jobs led to workplace injuries and labor unions began advocating for sickness and injury protections.
In 1929, a unique product was introduced: individual prepayment for hospital care, originally offered to teachers by a company called Blue Cross. Another company, Blue Shield, offered a similar program covering physicians' services to lumber and mining workers. Employees paid monthly fees to groups of physicians for guaranteed healthcare.
By the 1940s, about 9 percent of the American population had health insurance. But following World War II, a unique situation emerged. Veterans returning from war flooded the labor markets. As a means to control inflation and limit wage increases, Congress had passed the Stabilization Act of 1942, prohibiting U.S. businesses from offering higher salaries to compete for labor. As a result, companies began looking for more creative ways to recruit new employees and incentivize existing employees to stay. The solution was the foundation of employer-sponsored health insurance as we know it today. Employees enjoyed this fringe benefit because they did not have to pay taxes on this form of compensation, and they were able to insure their families.
By 1950, more than 50 percent of Americans had some form of health coverage through their employers. By 1960, more than two-thirds did. But vulnerable groups of people were left out: retirees, the unemployed, the disabled, and those whose employers could not afford to offer coverage.
Two programs were signed into law by President Johnson to address this disparity: Medicare, providing medical insurance for people age 65 and older, and Medicaid, offering health coverage to low-income people. Today, these programs cover approximately 118 million Americans.
From “A House Divided: Engaging the Issues through the Politics of Compassion” by Mark Feldmeir – Chalice Press