Monday, October 10, 2011

Income Inequality in America



(A couple of notes: This was written 4 years ago, before the big financial meltdown. Thus, some of the data and comments are now dated. Still, the main points are mostly valid even today. Finally, this website does not support many of the graphics in this essay. For a more complete view, please email me for a Word copy of this essay.)

If you ask anyone, they’ll tell you that a huge problem in the United States these days is that the rich keep getting richer, while most of us find it harder and harder just to get by. Corporate CEOs make unimaginably high salaries, even if their companies lose money, while young middle class adults worry if they’ll ever be able to buy a house. Wealthy yuppies and boomers have two Mercedes per household (an SUV and a luxury sedan, thank you very much) – unless they have kids over 16, in which case they probably have a couple of BMWs too. Meanwhile, most working stiffs can barely afford gas money for their Chevy, and low income Americans have trouble just paying for their monthly bus pass. Between 1979 and 2004, real wages (adjusted for inflation) grew 63% for Americans in the top quintile (the highest 20% in terms of income), while they only increased 15% for the middle quintile and a miserly 2% for those in the bottom quintile, according to the Congressional Budget Office (CBO). The top 1% of earners now get 16% of all income, vs. only 9% of all income just a generation ago. [i] America is getting away from its past and its promise, and becoming a nation of haves and have-nots.

This is disturbing to a great many people for a variety of reasons. Boston Globe columnist James Carroll avers that we are “impoverishing more and more human beings” and “eroding democracy” by “awarding a larger share of the economic pie to the very rich.” And this (perceived) reality is in contrast to the increasing egalitarianism during most of the 20th century, as the U.S. “conformed to the standard theory of development, which held that industrialization produces fat cats at first and then a more general prosperity as workers become more productive.” [ii] Several issues are behind this disturbing trend. One is that wages have simply not kept up with inflation in the last couple of decades. Prices keep going up and up – for food, for gasoline, for housing – while wages stagnate. Why should employers give U.S. workers a raise, when they can simply move jobs over to China or India, at a third of the cost, and pocket the difference? Well, it’s important to realize that if an employer doesn’t do this, others certainly will, forcing the “nice guys” out of business and their employees out of work altogether. Congress finally raised the minimum wage, for the first time in 10 years. But $7.25 an hour isn’t even enough to live on today, let alone in 2 years when it finally gets that “high”. Meanwhile meat prices, gasoline prices, health care, and home prices are at all time highs, squeezing the middle class and devastating low income Americans. At the same time, rich kids finish college and land Wall Street or law firm jobs that start at more than $100,000 a year, while Paris Hilton earns a million bucks just for showing up at a party!

The other major factor behind America’s economic dichotomy is the tax structure. Beginning with Reagan in the 1980s, wealthy Americans benefited from a series of Republican tax cuts, even as federal deficits soared and lower income taxpayers paid as much as ever. Under the second Bush’s administration, tax breaks for the rich – in the form of lower capital gains taxes, lower income taxes, more corporate subsidies and tax loopholes, and even a huge tax rebate for anyone buying a big gas-guzzling SUV – continued to be the order of the day. While the sons and daughters of regular Americans fought and died in Bush’s senseless war, and our nation careened further and further into debt, the rich and their children, safe from the quagmire that is Iraq, got richer and richer. “Trickle-down” economics was supposed to translate tax breaks for the wealthy into prosperity for all, but instead has turned out to be nothing more than a cruel joke.

The bottom line is that for the vast majority of Americans, our standard of living has steadily gotten worse. If you’re an autoworker, a teacher, a small business owner, a soldier – you’re worse off today than you (or your parents) were 30 years ago. Unless you already own a home, good luck ever buying one! And you can probably forget about retiring before you’re 65 or 70, at least if you expect to get Social Security or want to be able to afford health care. And heaven forbid that you’re black or Latino, because then the deck is really stacked against you! The gap between rich and poor has progressively grown wider, and that’s just not a good thing.


Well, um – YES. But mostly – NO.


The table below shows what has happened to Americans’ real wages (wages after adjusting for price changes) since 1964. This is a pretty good way to gauge changes in living standards. Ceteris paribus (everything else being equal), if wages go up faster than the prices of goods and services, then we can buy more stuff and we are therefore (at least materially) better off. On the other hand, declining real wages means that we can’t buy as much as before, our standard of living is worse, and that’s not so good.

The data show that real wages trended higher until about 1972, when they began a steady decline that lasted some twenty years. By 1993, real wages (based on the price levels of the year 1967) had fallen some 24%, to $87.56 a week. Since then, real wages gradually recouped about 6% of their losses, and then have stayed pretty steady for the last few years. What’s behind these trends?

A closer look at the data shows that the biggest drops in real wages came in 1974-1975 and 1979-1980. That’s hardly surprising. The Arab oil embargoes of 1973 and 1979 sent fuel prices skyrocketing, at the same time as they sent world economies into recession – what economists call “stagflation”. Inflation approached 15% a year and real GDP fell, while the unemployment rate peaked at nearly 10% in the U.S. With jobs so scarce, workers were happy just to hang onto their jobs even without a raise, while the prices of almost everything they had to buy increased. So, the biggest hit to real wages came as a result of these geo-political disruptions between 1973 and 1980. The sources of slow declines in real wages and consumer purchasing power throughout the 1980s and into the 1990s are more difficult to identify, however.

Average hours and earnings of production and non-supervisory workers
on private non-farm payrolls, annual averages

Year    Weekly   Hourly    Weekly      CPI *     Real
        Hours     Wage    Earnings               Wages        * 1967 = 100
1964     38.5    $2.53     $97.41      92.9    $104.85
1970     37.0     3.40     125.80     116.3     108.17
1971     36.8     3.63     133.58     121.3     110.12
1972     36.9     3.90     143.91     125.3     114.85
1973     36.9     4.14     152.77     133.1     114.78
1974     36.4     4.43     161.25     147.7     109.17
1975     36.0     4.73     170.28     161.2     105.63
1976     36.1     5.06     182.67     170.5     107.08
1977     35.9     5.44     195.30     181.5     107.60
1978     35.8     5.88     210.50     195.4     107.72
1979     35.6     6.34     225.70     217.4     103.82
1980     35.2     6.85     241.12     246.8      97.70
1981     35.2     7.44     261.89     272.4      96.15
1982     34.7     7.87     273.09     289.1      94.47
1983     34.9     8.20     286.18     298.4      95.91
1984     35.1     8.49     298.00     311.1      95.79
1985     34.9     8.74     305.03     322.2      94.67
1986     34.7     8.93     309.87     328.4      94.37
1987     34.7     9.14     317.16     340.4      93.17
1988     34.6     9.44     326.62     354.3      92.19
1989     34.5     9.80     338.10     371.3      91.11
1990     34.3    10.20     349.75     391.4      89.36
1991     34.1    10.52     358.51     408.0      87.87
1992     34.2    10.77     368.25     420.3      87.82
1993     34.3    11.05     378.89     432.7      87.56
1994     34.5    11.34     391.22     444.0      88.11
1995     34.3    11.65     400.07     456.5      87.64
1996     34.3    12.04     413.28     470.0      87.94
1997     34.5    12.51     431.86     481.0      89.78
1998     34.5    13.01     448.56     488.3      91.86
1999     34.3    13.49     463.15     499.0      92.82
2000     34.3    14.02     481.01     516.0      93.22
2001     34.0    14.54     493.79     530.4      93.10
2002     33.9    14.97     506.72     538.8      94.04
2003     33.7    15.37     518.06     550.3      94.14
2004     33.7    15.69     529.09     564.9      93.65
2005     33.8    16.13     544.33     589.9      92.27
2006     33.9    16.76     567.90     608.8      93.28

[Chart: Real Wages, 1970-2006, in $ per week]
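The real-wage column in the table is just nominal weekly earnings deflated by the CPI (indexed so that 1967 = 100). A minimal Python sketch reproduces it:

```python
# Real weekly wage in 1967 dollars: nominal weekly earnings deflated by the CPI,
# where the CPI is indexed so that 1967 = 100.
def real_wage(weekly_earnings: float, cpi: float) -> float:
    return weekly_earnings / cpi * 100

# Spot-check three rows of the table.
print(round(real_wage(97.41, 92.9), 2))    # 1964 -> 104.85
print(round(real_wage(378.89, 432.7), 2))  # 1993 -> 87.56
print(round(real_wage(567.90, 608.8), 2))  # 2006 -> 93.28
```

The same arithmetic works with any base year; using 1967 = 100 simply expresses every year's paycheck in 1967 purchasing power.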


Several world trends and national developments have been suggested. The most obvious culprit is probably the Republican administrations of Ronald Reagan (1981-1989) and George H.W. Bush (1989-1993). Their pro-business philosophies made it harder for workers and their labor unions to win wage increases: witness the air traffic controllers’ strike of 1981, which resulted in thousands of controllers getting fired. [iii] Free-market policies such as deregulation and public service cuts further limited real wage gains. [iv] Yet, remarkably, real wages reversed their downward trend shortly after (Democrat) Bill Clinton took over the Presidency in 1993.

But it seems unlikely that Republican practices and policies were a significant cause of declining purchasing power during this time, or that Clinton was responsible for the modest improvements that occurred on his watch. Economists of virtually every stripe agree that Presidents get too much credit when things go well, and too much blame when things go poorly. Economic success or failure, they argue, mostly occurs independently of who’s in office, as the President has much less actual influence on the economy than most people believe. Robert J. Samuelson, writing in Newsweek magazine, suggests that “The Reagan and Bush tax cuts are weak explanations, because gains have occurred in pretax incomes.” [v] Larry Summers, a noted Harvard University economist and Secretary of the U. S. Treasury during the Clinton administration, views the middle-class income gains of the Clinton years as “an aberration, caused by a combination of low oil prices and a financial bubble that made the job market unusually tight,” rather than evidence of Democrats fixing what the Republicans had broken. [vi]

Matt Bai is a nationally-acclaimed political writer for The New York Times. Bai similarly rejects comparisons of today with the (Republican-led) America of Herbert Hoover in the 1920s, when income inequality was also so large. Based on the work of economists Thomas Piketty and Emmanuel Saez, he points out that the middle-class back then was poor by today’s standards, with an average income of $16,500 in inflation-adjusted dollars. It’s also true, Bai contends, that what we consider poverty today is strikingly different from the poverty of only 40 years ago. When Lyndon Johnson aggressively attacked poverty with his Great Society programs, Bai reminds us, there were still rural poor with no electricity, running water, or grade-school education. Today, most of the poorest neighborhoods (as depressed as they may be) have all of those necessities and more.

Bai also concurs with Summers in rejecting Republican tax cuts and ridiculous executive salaries as a significant cause of today’s extreme income distributions, instead focusing on “a combination of technological advances and globalization…. wages for high-school graduates, who used to be able to get factory jobs, have stagnated, while highly educated workers have become increasingly valuable to companies seeking any intellectual advantage in an increasingly competitive world.” [vii] It’s worth recognizing, by the way, that The New York Times, so prominently referenced in this monograph, has the reputation of being America’s default “national newspaper” and is a self-confessed liberal publication. It is hardly a slavish supporter of Republican policies.

Others will nevertheless choose to focus on the Reagan/Bush/Bush policies as the root of increased income inequality, and they are free to do so elsewhere. But here, we’ll contend that the problem can be traced to globalization, the skills of American workers, the business cycle, and Congress – which controls government spending and taxes, after all – as well as “external shocks” such as oil embargoes, wars, and financial crises in other parts of the world. We will, however, come back to the whole “tax cuts for the rich” thing later, since that issue is so seriously misunderstood by most Americans. But for now, we need to look elsewhere for the sources of income inequality.

Several important and inter-related world trends may help explain what has happened. The most important is perhaps the tremendous improvement in technology and communication that defined the 1980s and 1990s. Over the last 30 years, personal computers went from being unknown, to clunky and expensive, to ubiquitous, cheap, and incredibly powerful. At the same time, cell phones, fax machines, the Internet, et al became just as ubiquitous, cheap, and effective. Along with all the amazing inventions, discoveries, and improvements made in the varied sciences during this time period, advances in technology and communication increased productivity and accelerated the U.S.’s transformation into a services-based economy. Economist Summers agrees that improved technology and global trade are the primary sources of stagnant American wages. Better technology and communication made it easier to produce items in other parts of the world, where labor costs (often along with taxes and government regulation) were substantially lower. Why pay Americans $10/hour plus benefits in the 1980s, when Mexicans would do the same work for $3/hour? Why pay them $15/hour plus benefits in the 1990s, when Chinese would do the same work for $2/hour? The awakening of the sleeping Chinese giant, not to mention India’s one billion citizens, many of whom speak English, has changed the entire worldwide manufacturing landscape. They, along with many millions of others in developing nations, have jumped headlong into the capitalist fray, increasing the world supply of just about every kind of product.

This has obviously put downward pressure on the prices of most manufactured goods – “the Wal-Mart effect” - but the flip side is also lower wages for American workers who have to compete not just with one another for jobs today, but with the whole world.

Meanwhile, (Democrat) Summers is in the camp of those realists who believe that there’s no way to “put the genie back in the bottle”, so to speak; globalization and amazing technologies are here to stay, and trying to stop jobs from going overseas with (typically Democratic) “protectionist measures will only create other problems such as inflation and slower economic growth.” [viii] The bottom line here is that the “non-supervisory production worker” that serves as the basis for our real wage analysis, as shown in the earlier table, has little to do with the U.S. economy of today and fosters “apples vs. oranges” analyses.

A related piece of this high-tech, global economy puzzle is the productivity of the average worker. We know we can’t beat China and other nations on labor cost per hour, so instead we have placed our bets on the productivity of American workers. Well, guess what? Our guys (and gals) have lost some of the edge they may once have had. China’s labor productivity grew an average of 17% a year from 1995 to 2002, compared to a 4% growth rate for the U.S. over the same time period. [ix] Over the last decade or so, U.S. GDP grew about 3% a year, while China and India both increased their production at rates closer to 10% a year. At those rates, U.S. production (GDP) doubles in about 24 years, while the Asian giants will double their output about every 7 or 8 years.
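The doubling-time figures follow from compound growth: output doubles when (1 + g)^t = 2, so t = ln 2 / ln(1 + g). A quick check of the numbers in the paragraph above:

```python
import math

def doubling_time(growth_rate: float) -> float:
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(doubling_time(0.03), 1))  # U.S. at ~3%/yr -> 23.4 years
print(round(doubling_time(0.10), 1))  # China/India at ~10%/yr -> 7.3 years
```

This is the exact version of the familiar "rule of 70" shortcut (70 divided by the growth rate in percent), which gives roughly the same answers.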

Part of the problem traces back to our schools. As an experienced teacher who has worked in the International Baccalaureate (IB) program and also scored exams from IB students around the world for a dozen years, the author has a keen understanding of what U.S. kids learn and are able to do, compared to students elsewhere. American public schools (K-12) are seen as somewhat inferior by much of the world and, in fact, a U.S. high school diploma isn’t even recognized by most universities in Europe and some other parts of the world.

One way to measure this devalued U.S. education is by looking at the number of students who earn a “7” on IB exams – the highest possible score. Worldwide, about 15-20% of test takers are awarded 7s on their exams. At the San Diego School of International Studies, an established IB school with above-average test results for an American school, fewer than 5% of students earn 7s. But there’s more: most of the kids in other countries are working in a language (English) other than their native tongue. And more still: at San Diego’s IB school, many kids opt for the easier, non-IB courses where the curriculum is considerably weaker than what their peers are getting in other countries. Meanwhile, most kids don’t even want to come to this school because its classes – even the non-IB ones – are harder than classes at other area high schools! The implications of these realities are all too clear, and they’re not good.

This type of anecdotal evidence is buttressed by the latest report on student achievement issued by the Department of Education. It shows that only 35% of high school seniors can read at an appropriate level, while a mere 23% ranked “competent” in math. This is in spite of students getting an average of 360 MORE hours of instruction than their 1990 counterparts. The extraordinarily sad fact is that American students are allowed to graduate with remedial skills in many key subjects. That’s the norm, not the exception; nearly two-thirds of all U.S. college freshmen need to take one or more remedial courses. [x] This would have been unthinkable just a couple of decades ago. Meanwhile, as too many US students are looking for (and often finding) the least rigorous way to finish high school, students in the rest of the world appreciate the value of a good education, and they’re making the most of their time in school. More and more, the result is young adults entering the work force in other countries who are bilingual or trilingual, with better math and science skills and even a better knowledge of U.S. history and government than most Americans.

This whole issue relates to the “returns to skill” principle. That term refers to how income correlates to years of schooling and years of work experience. In 1997, a male California worker with a bachelor’s degree earned 70% more than a similar worker with only a high school diploma, vs. only 50% more in 1969. Similarly, a male worker with 25 years of experience earned 91% more than someone with 5 years of experience in 1997, vs. 68% more in 1969. According to the Public Policy Institute of California (the source of the data cited above), “the change in returns to education results more from falling wages for men at the bottom of the distribution than from increases for men at the top.” [xi] Rather clearly, then, more education – and, one might pointedly add, a better quality education – is required to earn a decent living in today’s global economy, when compared to the sheltered American job market of 1969.

Why does this happen? Why are our future workers losing the competitive battle with their peers around the world? As implied earlier, the lack of sufficient spending on education doesn’t seem to be a major factor. This belief is supported by a report from the Organization for Economic Cooperation and Development (OECD) that indicates U.S. spending per student is generally in line with that of other advanced countries. Further, it suggests that higher spending per student does not necessarily result in higher achievement. [xii] While anecdotal evidence exists of students attending many more days of school a year in countries other than the U.S., our students actually average about 1080 total classroom hours a year, compared to 944 hours for all OECD country students.[xiii] And reference was previously made to the increase in U.S. instructional hours since 1990, with no positive impact on achievement levels. Of course, there are serious social issues that can help explain lower student success: the proliferation of one-parent and no-parent households, increased drug abuse, gang activity, English language issues for recent immigrants, etc. But the author thinks there’s a bigger issue than any of these, one touched upon earlier: that issue is one of student motivation.

Time and time again, kids do poorly in high school simply because they don’t come to class, don’t do even the easier assignments, don’t study for tests, and don’t come in for help that is available. Even knowing that their lack of effort will cause them to fail, many really don’t care. No matter how many times they get the “your future is going to be determined by how well you can compete with kids from around the world” speech, they don't seem to get it. Why? In some cases, it seems to be the idea that “I can’t get anywhere no matter how hard I try, so why bother?” In many more cases, though, it’s just a matter of being spoiled, of feeling “entitled”. A disturbingly common attitude among high-schoolers is that they live in the rich and powerful USA, “somebody” will always take care of them, and one way or another they’ll have a decent income – “so what’s the big deal? I’ll worry about it later,” many apparently think.

This is directly opposed to the attitude commonly found in – at the risk of stereotyping – Asian students, who are driven to excel in school by their families and their own sense of pride. But this is more than just a stereotype; even when compared to other OECD nations, “Europe and the United States are increasingly outperformed by countries in East Asia” in academic achievement. [xiv] A student who recently moved to the U.S. from China (YiQiu Yu) explained it this way to me: “Most Chinese are still very poor, and there are only a few good jobs that will allow them to have a decent life. There are 100 good students for every 5 or 10 good jobs, and so we work very, very hard to try to be one of the few to get a good job. Chinese kids are much more serious about school than U.S. students.”

It seems, then, that American students too often fail to see the connection between their schooling today and their future success, while this relationship is much better appreciated in other countries. Why should that be the case? Why would this have changed over time and across locations? Economists typically believe that people respond to incentives: how has the incentive structure changed to lead to such behavior? The usual suspects (i.e. economic prosperity since the 1940s, with increased government support systems put in place over the decades) come to mind, but the real answer may be much more complex. These are immensely important areas of concern for the nation’s future, and so it is unfortunate that addressing them fully is beyond the scope of this monograph.

* * * * * * * * * * * *

But let’s recap here: we don’t have the high paying production jobs that high school grads and even drop-outs used to be able to fall back on because we can’t compete with Chinese and third-world wages. And increasingly, our workforce doesn’t have the skills to justify paying them more than what better skilled, better motivated workers will accept elsewhere for the higher paying technical and service sector jobs. So where does that leave the typical American worker? More and more, flipping burgers, Wal-Mart greeter, telemarketer – these “careers” come to mind.

So yes – real wages for unskilled and low-skill workers have dropped since the pre-oil-embargo American heyday of big cars, cheap homes, and factory jobs. But workers’ purchasing power has shown some improvement over the last dozen years or so, and Americans are working an average of 9% less per week than they were in 1970. No – all Americans aren’t getting rich; they don’t all have the Donald Trump lifestyle to which so many aspire. But neither are they doing all that badly. 23% of Americans lived at or below the poverty level in 1975, while only 17.7% were in that same boat in 2005. [xv]

Now for a bit of non-academic reflection. Despite the complaints that regular Americans are slipping further and further behind, a look at what they actually have makes a person wonder. If you were around in the 1950s or 1960s, think back to when you were a kid. Your middle-class family probably had one car, one or two phones, and one TV in a house that averaged about 1200 sq. ft. [xvi] If you were a guy, you wore Levis and tennis shoes most of the time; you had maybe 5 pairs of pants, two or three pairs of shoes, and maybe a dozen shirts. Except for the Levis and Converse tennies, name brands didn’t mean much. Going out to eat (even just to Jack in the Box) was a special treat. Portable music meant a crappy transistor radio that got the local AM stations.

Now look at things today. According to National Geographic Magazine (March, 2007), the average American home is 63% larger than 30 years ago – even though families are usually smaller now. Contrary to what one might expect, more families owned their own homes in 2004 (68.9%) than at any time since 1965, when the US Census Bureau first started tracking this statistic. [xvii] How many families now do not have a couple of TVs, two or three cars [xviii], and half a dozen phones (including cells)? Just about every teenager – no matter how broke his/her family is – has a cell phone. Middle and high school kids have, on average, many more pairs of shoes, pants, shirts, etc. than their contemporaries of 30 or 40 years ago, and those clothes are more likely to be expensive, name brand items. To many of them, eating at McDonald’s once or twice a day is normal, as is going to the movies each weekend. Chances are, if the kid has a cell phone, then he/she also has a $200 iPod, with its hundreds of customized tunes. And this is for virtually all middle- and many low-income kids! Entitlement is the issue, and it’s deeply entrenched in so many of our kids.

(Note: These observations are based on the author having taught teenagers at a low-income, inner-city high school for 15 years.)

So it just doesn’t seem that kids today are worse off than kids of 30 or 40 years ago, at least not in material terms. But what about the populace as a whole? According to a U.S. Department of Agriculture survey, Americans in 2005 ate more fruits and vegetables, 45% more grain products, and 13 lbs. more meat per year than in 1970. [xix] Meanwhile, total caloric intake increased by 7% for men and 22% for women from 1970 to 2000. [xx] It should be noted that while Americans eat more these days, they very often don’t eat healthier – but that also is a topic for another time and place.

Looking at another facet of American life, what do people drive these days? It seems like every third vehicle on the road is a $30,000-$50,000 luxury SUV that sucks gasoline like mad. And have you been to a mall lately? Two things about American malls these days: they’re EVERYWHERE, and they’re almost always PACKED! Think back to 30 or 40 years ago. How many people drove a big luxury vehicle? (Hint: not very many.) How many stores are there today – think total square footage, either stand-alone or in malls – compared to back then? Two, three times as many? More than that? Now check the U.S. census figures: there were 200 million Americans forty years ago, and just over 300 million now – an increase of about 50%. 50% more people, but 100 or 200% more store footage; what’s wrong with this picture? [xxi]

But where in the world do all these people get all this money to buy all the crap that they’re constantly buying at all these stores, and which they haul around in their mobile houses? A large part of the answer to that question is debt. Just as Americans have become much more materialistic, they’ve also become more willing to take on debt, to max out their credit cards in seeking to accumulate as much stuff as they possibly can. And they’ve put themselves into a very bad situation, such that any small hiccup in their finances can force them to default on their payments and perhaps declare bankruptcy.

According to the Federal Reserve, debt in 2003 averaged $18,654 per household, a figure that doesn’t include mortgage debt. That number has risen more than 41% since 1998. Counting mortgage debt, Americans are an average of $38,851 in debt per person (not per household) as of 2005. Total household debt is now 110% of national income, double the ratio in 1975. In other words, over the last 30 years, Americans have gone into debt twice as fast as the economy itself has grown.[xxii] And of course, better educated, better skilled, higher-income Americans are able to take on more debt than lower-income Americans, and this serves to exacerbate the standard of living gap between the two groups. [xxiii]

Yet despite this increased debt, one area where Americans’ living standards have declined is health care. Per capita health care spending increased by 156% from 1980 to 1990, while spending from 1990 to 2000 increased by “only” half that amount – 71%. [xxiv] In 2006, approximately 61% of employers provided health coverage, down 8% from 2000. Over that same time period, premiums increased 87%, according to the Kaiser Family Foundation. [xxv] So Americans are worse off when it comes to health care, not to mention their health itself; eating at McDonald’s twice a day and then plopping your butt in your big SUV instead of walking will do that, you know. An unkind cynic might point out that we need better health care because we don’t care for our bodies very well, yet we can’t afford it because we spend all our money on fast food, gasoline, and payments for big cars…

But even if Americans are “worse off when it comes to health care”, it is also true that employers have been paying grossly higher amounts of their employees’ health care costs than before. Health care accounted for 16% of GDP in 2004, compared to only 7.2% of GDP in 1970. Businesses paid $448 billion for their employees’ health care in 2004, up some $100 billion from just four years earlier! That extra $100 billion in non-wage benefits that employees earned doesn’t show up in the income numbers, even though it is a real cost to employers and one reason why employee wages have been unable to rise faster. (Incidentally, this extra cost is also one more reason why jobs have left the U.S. to go overseas.)

In fact, wages are now just under 81% of the average employee’s total compensation, compared to 91.7% back in 1965. According to the U.S. Census Bureau, business spending on employee health care has risen from just over 1% of total compensation in 1965 to over 7% in 2006, while other non-wage compensation (such as retirement benefits) has gone up from 7% to 12% over the same time period.

In other words: looking at wages alone doesn’t give the whole picture, and we should acknowledge that American workers have also been “paid” in ways that don’t show up on a pay stub. We can and should adjust Americans’ wages to reflect this, so that we come closer to comparing apples to apples over time. Taking our earlier numbers of average weekly wages (inflation adjusted) from page 5, and factoring in non-wage compensation, it turns out that the average worker in 2006 earned $115.65 per week, compared to about $114.50 in 1965. That’s a real improvement over what the numbers on page 5 showed, and remember that the 2006 worker earned this slightly higher income while working about 4 hours less per week than his 1965 counterpart.
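As a rough sanity check, this adjustment can be sketched in a few lines of Python. The 1965-era weekly wage ($104.85, from the page-5 data cited later in this essay) comes from the text; the implied 2006 weekly wage (about $93.68) is back-solved here from the $115.65 total-compensation figure, and is therefore an assumption rather than a sourced number.

```python
# Sketch: converting a weekly wage into total compensation, using
# the wage share of total compensation cited in the text
# (91.7% in 1965 vs. just under 81% in 2006).

def total_compensation(weekly_wage, wage_share):
    """Total compensation = wage / (wages' share of total compensation)."""
    return weekly_wage / wage_share

comp_1965 = total_compensation(104.85, 0.917)  # about $114.34/week
comp_2006 = total_compensation(93.68, 0.81)    # about $115.65/week
```

Even though the raw wage fell, total compensation edged up slightly, which is the essay’s point.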

Let’s re-focus again: Americans’ wages have not kept up with inflation over the last 35 years, although over the last 15 or so they have, especially once you factor in the shorter work weeks of today. Most of the failure to keep up with inflation can be traced back to the wake-up calls OPEC gave the world in 1973 and 1979. Other factors range from pro-business government policies and globalization, to improvements in technology and communication, to complacency on the part of our young people and failures in our education system. Yet Americans seem to be living as well as ever, if not better overall, with perhaps the most notable exceptions and areas of concern being health care and personal debt. And finally, when we look at total compensation rather than wages alone, it turns out that the average American worker has actually done a pretty good job of keeping up with inflation over the last four decades.


Nevertheless, the wealthiest Americans have been doing much better than the rest of us over the years, and the media makes sure we see, hear, and read about that all the time. Sports figures, CEOs, movie stars, billionaires and professional “celebrities” are only the most obvious tip of the iceberg when it comes to the luxurious lifestyles of the top 5% of Americans, in terms of income.

Income is often measured by comparing how “quintiles”, or fifths of the total population, are doing compared with each other. For example, how is the highest quintile (the 20% of the population with the highest incomes) doing compared to the lowest quintile (those whose income is in the bottom 20%)? There are a couple of key considerations to bear in mind before looking at that kind of data. The first is how these income distribution numbers are calculated. Daniel Weintraub, writing for the Sacramento Bee (February 1, 2006), points out that “breaking income distribution into fifths and comparing them over time…sets up an unfair statistical fight. The bottom fifth (or quintile) will always include some people who are unemployed, recently retired, or out of the work force, making nothing or very little, while the top fifth includes the most economically successful people in society, most of whom have been working, full time, for many years.” “The bottom tier”, Weintraub continues, “will always tend to pull the average down, while the top tier has no limit on how high it can reach. With these parameters, a widening gap is almost a mathematical certainty over time” – especially during an extended economic boom, such as the one we’ve seen (with few interruptions) from the mid-1980s to the present.

Perhaps more importantly, Weintraub adds, the people in the bottom quintile change over time. So as the income gap widens, it’s not that the same people are worse off than they were 20 or 30 years ago. Especially in places like California, Texas, and Florida, with high rates of immigration, the bottom fifth is constantly repopulated with new residents who arrive with little education or skills, while former immigrants move up into the higher income groups. In 1969, immigrants accounted for 10% of California’s overall male workforce and nearly 15% of the workers in the 3 lowest wage groups. By 1997, the share of immigrants in the male workforce had grown to 36%, with the majority of these workers in the lowest wage categories. This leads the Public Policy Institute of California to conclude that the state’s high concentration of recent immigrants “has contributed strongly to rising income inequality.” [xxvi] Some take it even further, saying that “a central factor in the growing disparity between the ‘haves’ and the ‘have nots’ is mass immigration”. [xxvii]

Well, countries such as Canada and Sweden have immigrant percentages similar to the U.S., relative to their total populations, without the extreme income inequality. Perhaps their immigrants are better educated and skilled than those coming to the U.S. But in either case, a detailed examination of immigration trends is beyond the scope of this paper, so let’s just say that immigration may be a significant factor here.

And in any event this is not to say, as some who cite statistics for anti-immigration purposes do, that immigrants are bad, nor that they are destined to stay poor. As the American experience has shown time and time again, immigrants typically come here with little education, speaking little or no English, and take the lowest paying jobs. Whether we’re talking about my great-grandfather who spoke only Yiddish and “peddled notions” door to door, or a Mexican who picks lettuce, a Somali who drives a cab, or a Vietnamese who alters clothing, immigrants typically start off at the bottom. But twenty years later, they often own a little store, a landscaping business, or their own cab company and their kids are going to college. Forty years later, they’re retired in middle-class comfort while their children have professional careers and their grandkids barely speak the ancestral language. This is how America works, and so people continue to come here by the millions. Starting off at the bottom, so very far behind those in the upper income levels, is OK because they know that they (or their kids) are hardly doomed to staying there.

Another reality is that people in the top quintile (or top 10% or top 1%) getting wealthier doesn’t necessarily come at the cost of keeping low-income people down. Roger Lowenstein of The New York Times reminds us that “whether Roger Clemens…earns 100 times or 200 times what I earn is kind of irrelevant. My kids still have health care, and they go to decent schools. It’s not the rich people pulling away at the top who are the problem…” It’s the people stuck in the lower income levels who are the problem, and as appealing as blaming the rich for this may be, there’s little evidence to support such a view. Economists James Heckman and Alan Krueger, in their 2004 book Inequality in America, agree that the key to narrowing the gap between rich and poor “is all about raising the incomes of people at the bottom. Punishing those at the top doesn’t help.” [xxviii]

So just how much stock should we put in all the numbers that scream “Inequality!”? And is America’s increased income inequality, in the final analysis, really such a bad thing?

Critics of America’s income inequality point to the Lorenz Curve (on the following page) to illustrate our worsened situation. In a land of perfect equality, the red and turquoise lines would be right on top of one another, such that 20% of the households earn 20% of a nation’s income, 40% of them earn 40% of the income, etc. In other words: a small percentage of rich people wouldn’t get a disproportionately high percentage of the total income. The U.S. is criticized because its (red) Lorenz curve has been moving further and further away from the (turquoise) line of perfect equality. But again, it’s not clear that a shallow Lorenz Curve (red line close to the turquoise) is such a good thing.
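To make the Lorenz Curve concrete, here is a small illustrative sketch in Python that computes Lorenz-curve points and the associated Gini coefficient (a standard summary of how far the curve bows away from the line of perfect equality). The five-household income figures below are invented for illustration and are not drawn from the essay’s data.

```python
# Illustrative sketch (invented data): Lorenz-curve points and the
# Gini coefficient for a hypothetical five-household economy.

def lorenz_points(incomes):
    """Cumulative income shares, for incomes sorted low to high."""
    incomes = sorted(incomes)
    total = sum(incomes)
    points, cum = [0.0], 0.0
    for x in incomes:
        cum += x
        points.append(cum / total)
    return points

def gini(incomes):
    """Gini = 1 - 2 * (area under the Lorenz curve), trapezoid rule."""
    pts = lorenz_points(incomes)
    n = len(pts) - 1
    area = sum((pts[i] + pts[i + 1]) / (2 * n) for i in range(n))
    return 1 - 2 * area

equal = [50, 50, 50, 50, 50]    # each "quintile" earns 20% -> Gini near 0
skewed = [10, 20, 30, 60, 180]  # top fifth earns 60% -> Gini near 0.5
```

A Gini of 0 corresponds to the red and turquoise lines lying on top of one another; the closer it gets to 1, the further the Lorenz curve bows away from perfect equality.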


Places like Sweden are probably too equal, argue Lowenstein and many economists. There is a “rough trade-off between equality and growth”, he writes. “If you try too hard to make everyone equal, you get fewer entrepreneurs…and a lower standard of living.” Labor economist Richard Freeman, who is generally pro-union, says Sweden’s poorly differentiated pay scales (an attempt at greater income equality) led to unemployment and deficits in the early 1990s, at which time the country moved to a more market-led system. [xxix] Sweden’s GDP growth rate trailed that of the U.S. and the E.U. nations in the 1970s, 1980s, and early ’90s, but turned significantly higher starting in 1993, once the country backed away a bit from seeking more equal incomes for all, and has remained higher to the present time. [xxx] Aside from lower interest rates and a devalued krona, a number of free market reforms – particularly deregulation and reduced marginal tax rates – implemented between 1991 and 1994 helped the Swedish economy become more competitive and, therefore, increased its growth rate while lowering unemployment. [xxxi]

Samuelson, writing in Newsweek, reminds us that inequality, up to a point, is both inevitable and desirable. It is precisely the prospect of doing well that “encourages people to work hard, develop new skills and take risks.” And contrary to what many Americans might think, “Most of today’s rich have earned – not inherited – their status. Among the top 1%, more than four-fifths of their income comes from salaries and self-employment.” “The poor aren’t poor because the rich are richer”, Samuelson says, echoing what most economists believe; rather, “Their poverty reflects low skills, poor work habits or bad luck.” [xxxii]

The Sacramento Bee’s Daniel Weintraub identifies more flaws in comparing income changes among groups of people, in this case income quintiles. Explaining the 2006 data from the Economic Policy Institute and the Center on Budget and Policy Priorities, he wrote: “The poorest fifth of families in Oklahoma and Florida, for instance, both earned about $15,400 after taxes (in 2002). But because the top fifth of families in Florida earned about $117,000 each while the top fifth in Oklahoma took home $97,700, Florida is ranked with a more unequal distribution of income than Oklahoma. Does that mean it is better to be poor in Oklahoma than in Florida?” Of course not, we should conclude. Poor Floridians are no worse off, and yet statistically they have a chance to become wealthier than their Oklahoma counterparts.

Another perversity of income distribution analysis is illustrated by the following situation (also using data from 2002). The poorest fifth of California families had incomes of about $16,800, while the top fifth had incomes of about $127,500. Consider two possible scenarios for five years in the future. In the first, the poorest quintile barely rises, to $17,000, while the richest fifth creeps up to $135,000. In the second scenario, the poorest families climb to $20,000, but the richest families zoom up to $200,000. Inequality would be far greater in the second example, and yet it would clearly be better for both rich and poor families alike! Wealth is not a zero-sum game: there is no fixed amount of income available that requires one group’s gains to come at the expense of another group’s. The New York Times’s Lowenstein agrees, pointing out that “the millions made by hedge-fund traders or by people who create companies like Google, don’t take away from other people’s wages. Indeed, each helps to make the pie bigger.” [xxxiii]
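The two scenarios can be checked with a few lines of arithmetic. This sketch measures inequality crudely, as the ratio of top-quintile to bottom-quintile income, using the figures from the text:

```python
# California quintile incomes (2002) from the text, plus the two
# hypothetical five-year scenarios described above.

def top_to_bottom_ratio(top, bottom):
    return top / bottom

base       = top_to_bottom_ratio(127_500, 16_800)  # about 7.6x
scenario_1 = top_to_bottom_ratio(135_000, 17_000)  # about 7.9x
scenario_2 = top_to_bottom_ratio(200_000, 20_000)  # 10.0x

# Scenario 2 is the more "unequal" outcome by this measure,
# yet both rich and poor families end up with higher incomes.
assert scenario_2 > scenario_1
assert 20_000 > 17_000 and 200_000 > 135_000
```

By the ratio measure, scenario 2 looks worse; by absolute incomes, it is better for everyone, which is exactly the perversity the text describes.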

However, economists are quick to point out that these possible statistical failures have always existed in measuring incomes, while we’ve only recently experienced these wide income disparities. So it’s not enough just to find fault with how we measure quintiles, expanding pies, etc. It is a fact that back in the “good old days” income was more evenly distributed and the wealthy paid a much higher rate of tax than they do today, even with these same statistical challenges. Emmanuel Saez, economist at UC Berkeley, points out that the share of income going to the rich was stable to lower for about 50 years beginning in the 1920s. The top marginal tax rate in the 1950s topped out at 91%, and as late as 1980, it was 70% - compared to 35% in 2003.

Yet one statistical limitation that has also always been with us is that incomes are often measured by households or families, not per individual. But with today’s smaller families, this should mean that money goes a little farther, per person. According to the U.S. Census Bureau, the average family had 1.82 children in 2006, down from 2.09 kids in 1975 and 2.44 in 1965. Another way to see this trend is that the average family size was 3.7 people in 1964, 3.42 people in 1974, but only 3.14 in 2000. We can use this information to translate household income into per capita income, which is actually more useful.

Let’s go back to page five’s data and calculate that the 1964 worker’s $104.85 weekly wage translated into $28.34 per person in his household. We figure that by simply dividing $104.85 by the 3.7 souls (average, at that time) per family. Using the same procedure, 1974’s worker had $31.92 per capita, while the average 2000 family earned $29.69 per capita. On a per capita basis, then, incomes (adjusted for inflation) declined only 7% from 1974 to 2000, compared to a 14.6% drop in the raw data. And while real incomes were down some 11% from 1964 to 2000, they were actually higher in 2000 when taking family size into account. We won’t even consider that most families have two wage earners today, versus only one 3 or 4 decades ago, making per capita income per household much higher now. That would introduce too many qualitative issues and subjective judgments, and is therefore better left to the sociologists. Suffice it to say that single-earner families aren’t nearly as bad off as we might have otherwise thought, once we look at income per family member.
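The per-capita arithmetic above can be reproduced directly. The 1964 weekly wage ($104.85) and the family sizes come from the text; the 1974 and 2000 weekly wages (about $109.17 and $93.23) are back-solved here from the per-capita figures the essay reports, and are therefore assumptions rather than sourced numbers.

```python
# Per-capita weekly income = weekly wage / average family size.

def per_capita(weekly_wage, family_size):
    return weekly_wage / family_size

pc_1964 = per_capita(104.85, 3.70)  # about $28.34
pc_1974 = per_capita(109.17, 3.42)  # about $31.92
pc_2000 = per_capita(93.23, 3.14)   # about $29.69

drop_per_capita = (pc_1974 - pc_2000) / pc_1974  # about 7%
drop_raw_wage = (109.17 - 93.23) / 109.17        # about 14.6%

# Per capita, 2000 incomes were actually higher than 1964's,
# even though the raw weekly wage was lower.
assert pc_2000 > pc_1964
```

The per-capita decline from 1974 is roughly half the raw-wage decline, matching the essay’s 7% vs. 14.6% comparison.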

But at any rate, before we get too nostalgic for the good ol’ days of the 1970s, when a single-income factory job could support the average Joe’s family and the rich paid their fair share, let’s remember what went wrong in that decade. The country fell into the worst recession, with the highest unemployment rates, since the Great Depression of the 1930s. But unlike the 1930s, when prices actually declined, we were also experiencing the worst inflation since just after WW II, and the highest interest rates of the century! It was an ugly time that (future) Federal Reserve Board chief Alan Greenspan famously referred to as “the Great Malaise.” The good news was that we were all more or less in the same boat, but the bad news was that the boat was sinking!


The year 1980 saw the departure of Jimmy Carter, along with heavy skepticism about the Keynesian economics that had guided government policy since the 1930s. It brought instead Ronald Reagan and the supply-side policies of Arthur Laffer. Now, everybody knows that those failed policies were little more than a giveaway to the rich. Problem is: that’s not really true, and we’ll take a look at that shortly.

But for now, let’s concede that while high tax rates discourage investment and growth, rates may have gone too far in the other direction since 1980; tax rates on the rich could stand being raised a bit. Newsweek’s Samuelson avers that the tax cuts on capital gains and dividends during the Bush administration probably weren’t needed as an incentive for investment. A modest increase in those and in marginal income tax rates may well increase federal revenues and reduce class antagonism without significantly harming economic growth and job creation. Meanwhile, the loopholes and deductions that the wealthy employ to reduce their taxes continue to exacerbate income inequality in the U.S., as Summers and other economists maintain. One suspects that higher taxes on the wealthy would not be out of line, although just how much taxes could be raised without slowing our economic engine is unclear.

Complicating the whole issue of raising taxes on the rich are the unexpected realities that the rich already pay a huge chunk of all taxes collected, and that lowering tax rates on the wealthy may actually increase federal tax receipts rather than lower them at times. The IRS claims that in 2004 the top one percent of taxpayers paid 36.9% of all federal income taxes, [xxxiv] while Newsweek’s Samuelson adds that this small minority paid 25% of all federal taxes. The N.Y. Times Magazine’s Lowenstein similarly says that the richest 10% pay 70% of federal income taxes and 52% of all taxes.

Others claim that when you look at all of the various federal and state taxes, then the rich don’t pay such a large share. Kevin Hassett, of the American Enterprise Institute, estimates that “a family of four earning $50,000 pays exactly the same share of its income (30 percent) on taxes as one earning $150,000.” [xxxv] Hmmm… let’s call that the “glass half-empty” view, since if they both pay 30% of their income, doesn’t that mean that the richer of the two families pays three times as much, in terms of hard dollars going out the door? So either the wealthy pay a huge portion of the nation’s taxes, or they merely pay several times more than what normal folks pay - depending on what statistics you look at. But whatever you might think about the rich getting too rich, it’s hard to make a case that they don't pay much in taxes.

Meanwhile, 42.5 million Americans who filed a tax return in 2004 had no tax liability; in other words, they paid zero income tax. That was up from 32 million just four years earlier, incidentally. Add to that the 15 million Americans who weren’t even required to file a tax return. Then factor in the dependents included in these 57.5 million households that earned money but paid no income tax, and it turns out that about 120 million Americans (or 40% of the nation’s population) paid no income tax whatsoever. [xxxvi] These 120 million include all of the “poor” along with a chunk of what might be considered “lower-middle income” Americans. Taken together, this raises questions about the “conventional wisdom” that “the rich get tax breaks while the poor and middle class pay more.”

What’s just as interesting is how tax rates and tax revenues have changed over the years in the U.S. and elsewhere. According to former Congressman (and avowed supply-sider) Jack Kemp, “in 1986, when the tax rate on capital gains was increased from 20% to 28%...Congressional revenue estimators hugely overestimated capital gain revenues.” Not only did the government’s tax revenues not go up as much as estimated, but “they actually declined.” The Congressional Budget Office (CBO) “overestimated capital gains realizations by $527 billion between 1989 and 1992.” Being a bit slow to learn from the CBO’s mistake, Congress’s Joint Committee on Taxation (JCT) projected that cutting the capital gains rate back to 20% from 28% in 1997 would reduce revenues by $21 billion. Instead, the lower tax rate increased revenues by $38 billion! [xxxvii] More recently, the politically-neutral CBO reported that the 2003 tax cuts, so vilified as a handout to the rich and a source of our huge deficits, actually raised tax revenues by $137 billion while lowering the federal deficit by $53 billion. Clearly, cutting tax rates can give the government more total tax revenue, while raising tax rates can lower total revenues - very counter-intuitive results. [xxxviii]

So what’s going on here? Well, this has to do with failing to take into account how people behave, or what economists refer to as static vs. dynamic estimates of tax revenues. With lower tax rates on their profits, investors are more willing to take risks, profits rise accordingly, and tax receipts on those profits – even at the lower rate – go up. Economics writer Bruce Bartlett provides more evidence of this phenomenon. In 1980, he writes, the top 1% of taxpayers faced a top marginal federal tax rate of 70%; and 19.3% of all individual federal income taxes (which totaled $246 billion) came from these folks. But the top rate was cut to 50% the next year, characterized by those on the left as “a massive give-away to the wealthy”. By 1986, however, the top 1% were paying 25.7% of all federal income taxes (which had now risen to $349 billion). [xxxix] That year, the top rate was slashed further to 28% (prompting even more liberal complaints). And yet by 1992, income taxes paid by individuals were way up (to $481 billion), with 27.5% of them now being paid by the top 1%. [xl]

One could show this graphically on a diagram that plots the maximum tax rate on one axis and the percentage of all income taxes paid by the wealthiest 1% (or total income taxes received) on the other axis (shown below). Of course, there were other things going on between 1980 and 1992 that help explain why the rich paid more taxes and why overall income taxes rose substantially even as tax rates dropped precipitously. The economy was in transition, the Cold War was ending, and technology was booming, among other factors. Between 1980 and 1992, GDP increased by 126%, while tax revenues increased only 96%; could tax revenues have risen even more if rates hadn’t been cut in 1981 and 1986? Supply-side tax cuts don’t deserve all the credit.

As fair-minded economists urge us to recognize, other factors played a role in these amazing results. Yet it’s hard to deny the trend here: let people keep more of their earnings, and the increased incentive to earn may result in more money for the government. There’s nothing particularly fanciful about this concept. Certainly, tax cuts have resulted in the rich paying a greater share of income taxes. This presents politicians with a real conundrum: knowing that advocating “tax cuts for the rich” is a good way to commit political suicide, how do you deal with the fact that they may actually benefit everyone in the long run?

Economists show this phenomenon of tax cuts raising government revenues via the Laffer Curve, named after the godfather of supply-side economics, Dr. Arthur Laffer. Though characterized as “voodoo economics” by George H.W. Bush in 1980, and much maligned by politicians and the press in subsequent years, the Laffer Curve rather neatly illustrates a basic truism: beyond a certain point, raising tax rates reduces government tax revenues, while lowering them can do the opposite. A zero tax rate creates zero revenue for the government, of course, but so does a 100% tax rate (nobody will work if they have to pay everything they earn to the government). Note the part of the curve to the right of point “T”; it slopes downward, looking very much like the twin graphs shown earlier. We know from the data that lower tax rates can increase total revenue as well as the percentage of total taxes that the rich pay; the Laffer Curve is basically a modified and extended version of our earlier graphs.

The trick, of course, is to determine what the point of maximum revenues (T) is. Is it 70%? 50%? 28%? Something in between? Or a rate less than 28%? For that, there is no magic formula and misjudging that maximum point was one reason that U.S. supply-side policies in the 1980s were judged a failure. Furthermore, and in contrast to some of the conclusions suggested earlier, most economists now believe that “the positive effect of tax cuts on tax revenues is only feasible for very high tax rates, not the rates currently observed in most industrialized countries.” They point out the dangers of emphasizing “the Laffer curve story and how tax cuts pay for themselves. This view is not credible in the economics profession”. [xlii] Caveat acknowledged, but still: not valid at all? The evidence suggests otherwise. Finally, even as tax cuts may (or may not) have increased the total dollar amount of taxes paid by the wealthy, and may (or may not) have increased government revenues, it does seem that they allowed the rich to benefit more than other groups and, thus, did increase income inequality. So while some benefits of tax cuts seem obvious in light of the evidence, it would nevertheless seem prudent to question whether they are always the right answer, and have no downside to them.
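The Laffer Curve’s logic can be sketched with a toy model: revenue equals the tax rate times a taxable base that shrinks as the rate rises. The linear base-response function below is invented purely for illustration; as noted above, locating the real revenue-maximizing rate T is an empirical question with no magic formula.

```python
# Toy Laffer curve: revenue = rate * base, where the taxable base
# shrinks linearly from base_at_zero (at a 0% rate) down to nothing
# (at a 100% rate, when no one bothers to earn taxable income).

def revenue(rate, base_at_zero=1000.0):
    base = base_at_zero * (1.0 - rate)
    return rate * base

# Revenue is zero at both extremes and peaks in between
# (at 50% in this particular toy model).
assert revenue(0.0) == 0.0
assert revenue(1.0) == 0.0
assert revenue(0.5) > revenue(0.25)
assert revenue(0.5) > revenue(0.75)
```

In this toy model the peak happens to fall at exactly 50% only because the base shrinks linearly; a different behavioral response would put T somewhere else entirely, which is precisely the practical difficulty.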


Going on a bit of a tangent here, the seriously misunderstood fact is that supply-side policies as a whole were quite successful in the 1980s and 1990s, both in the U.S. and elsewhere. It is a fact that federal budget deficits soared during the Reagan years, when supply-side policies were supposed to create the opposite result. 1980s deficits dwarfed those of the 1970s; furthermore, they led to the dangerous “deficits don’t matter” mentality that has resulted in continued and even larger deficits. Budget deficits ranged from $120 billion to $220 billion during the Reagan years, compared to deficits in the $5 billion to $80 billion range in the 1970s. Deficits soared further during the First Bush administration, reaching nearly $300 billion in 1992. After the deficits steadily declined (and even went to surpluses) under President Clinton, they proceeded to skyrocket again under the Second Bush in the 2000s, with deficits approaching $500 billion even before the latest recession. In itself, this production of huge budget deficits alone – an admittedly distressing development - is seen as conclusive proof of the bankruptcy of supply-side policies.

But let’s first backtrack a bit and clarify what “supply-side policies” are. The term refers to a variety of actions that government can take in order to increase a nation’s aggregate (or total) supply of goods and services (or “stuff”). Making more stuff causes all kinds of good things to happen; most particularly, both inflation and unemployment rates go down, as implied in this diagram. The green line represents a nation’s total demand for all goods and services, from hamburgers and cars and iPods, to haircuts and acupuncture sessions and Lady Gaga tickets – and everything else we want to buy. AS stands for Aggregate Supply, or the total amount of all the burgers and cars and iPods and haircuts and pin pricking and tickets and everything else that sellers are willing and able to supply to the nation. The picture is clear enough: when AS moves to the left, we have less stuff (or output, or GDP) and prices rise. Conversely, AS moving to the right, as shown by the big arrow, means that we have more stuff AND lower prices. The fancy technical term for this is: “a win/win situation”. Few people question the benefits that come from increasing aggregate supply; many, however, doubt the ability of specific policies to actually deliver it.
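The “win/win” shift can be illustrated with a minimal linear AD/AS sketch; the curve parameters here are invented for illustration only.

```python
# Linear aggregate demand:  Q = a - b*P
# Linear aggregate supply:  Q = c + d*P
# Shifting AS to the right (raising c) lowers the price level
# and raises equilibrium output.

def equilibrium(a, b, c, d):
    price = (a - c) / (b + d)
    output = a - b * price
    return price, output

p0, q0 = equilibrium(a=100, b=2, c=10, d=3)  # initial AS
p1, q1 = equilibrium(a=100, b=2, c=25, d=3)  # AS shifted right

assert p1 < p0 and q1 > q0  # lower prices AND more "stuff"
```

Any rightward AS shift in this linear setup produces the same qualitative result: prices fall and output rises, which is the win/win the text describes.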

This whole supply-side idea attracted people’s attention towards the end of the 1970s, a decade which, as mentioned earlier, brought recessions (less stuff, higher unemployment) and higher prices. That’s known as stagflation, or in technical terms, “a lose-lose situation.” Demand-side policies, which included altering government spending and taxation (fiscal policy) and altering the money supply and interest rates (monetary policy), had failed to solve the decade’s problems, which can be neatly visualized in the following diagram (note: economists love diagrams!). As shown, average inflation rates had climbed from about 4% in the 1960s to about 8% in the ’70s and early ’80s, while unemployment rates rose from about 5% to around 7-8% over the same period (in other words, from line PC1 to PC3).

Demand-side policies weren’t cutting it, and so the Reagan administration thought supply-side deserved a shot. But just how DO you encourage an economy to produce more stuff? Well, there are a dozen or so specific supply-side “levers” which will (or can) help do the trick. In approximate order of importance, they include:

  • Tax cuts, but specifically reductions in marginal tax rates and capital-gains tax rates. These (as opposed to cuts in sales taxes, excise taxes, or one-time tax rebates) encourage people to work harder, businesses to invest more in existing businesses, and entrepreneurs to start new businesses since they get to keep a bigger share of their added income, with the end result of more stuff being produced.
  • Deregulation or the reduction of government interference in business. The fewer the rules, regulations, bureaucracy, fees, etc., the easier it is to conduct business and make more stuff.
  • Free trade, which by definition lets more goods into the country at lower prices, increasing the total supply of stuff available to us.
  • Better training and education, which leads to a more skilled and therefore more productive workforce that then can make more and better stuff.
  • Improving infrastructure such as roads, airports, electrical grids, phone systems, schools, and all the other things that are required for a productive, growing economy trying to make more stuff.
  • Encourage saving, which then creates a bigger pool of money available for investment, leading – ceteris paribus – to lower interest rates and therefore more investment in factories, machinery, etc., and ultimately more stuff being produced.
  • Reduce discrimination, especially in the workforce. The idea here is that you want the most qualified person doing each job, whether they’re male, female, black, white, gay, straight, or whatever. With jobs going to whoever can do them best, society ends up with… more and/or better stuff.

Now sometimes these levers can conflict with one another, or with other goals that are important to society. For example, cutting taxes can leave the government with less money for improving infrastructure or education. Or, deregulation might make it harder for government to reduce discrimination in the workplace. Similarly, free trade can cause job losses among domestic workers, while too much deregulation can reduce critical consumer safeguards, such as a weakened FDA that can’t adequately monitor Chinese imports. Applying supply-side measures can be tricky business indeed, which brings us (finally!) back to the whole budget deficits thing.

Even though the CBO data referenced earlier shows an increase in personal income tax revenues between 1980 and 1986, and 1986 to 1992, government deficits soared between 1980 and 1992. As the following diagram shows, tax receipts dipped in the early-‘80s recession, but then rebounded nicely thereafter. However, government spending never did dip, as federal outlays continually outstripped rising federal receipts during the 1980s. Let’s cut to the chase: who’s to blame for these monstrous deficits? [xliii] The answer depends on who you listen to (does this surprise you?).

Critics of the Reagan administration – including a top former administration insider – insist that Reagan and his crew knew all along that tax cuts would result in huge deficits. But they went ahead with them anyway, lying to the public that the Laffer Curve showed tax cuts would bring higher revenues and, thus, no deficits. Others claim that, whether the Laffer Curve was misinterpreted or not, Reagan was blameless in the whole deficits issue.

According to Bob Packwood, chairman of the Senate Finance Committee in the 1980s: “The Reagan tax, he got a bum rap for it.” “In July of 1981, all of the budget projections showed us having immense budget surpluses by 1985! The CBO, the OMB, the JCT, and most of the private places showed anywhere from $160 billion to $250 billion surpluses.” Reagan believed that if Congress didn’t return those surpluses to the taxpayers, Congress would have simply increased their spending even more. “No one foresaw…that the recession was coming, which caused our revenues to drop. No one foresaw the rapid drop of inflation because the tax code was not then indexed. We could count on getting a 1.7% increase in revenues for each 1% of inflation. And we were running 13%, 14%, 15% inflation and projecting that out until 1985, so no wonder we thought we were going to have surpluses! Then inflation fell to 5.5% and the recession came.” Packwood continues: “In 1982, however, when we were in the recession and we knew we had to narrow the deficit, Congress made a promise to Reagan: accept $1 in tax increases and we’ll give you $3 in spending decreases. He signed the tax bill. He never got the spending cuts (that Congress promised).” [xliv]

So - Reagan and his boys were “blameless” in this deficits matter? Somehow that doesn’t sound quite right either, especially considering that Packwood, like Reagan, was a Republican. One suspects that the truth lies somewhere in the middle. We also need to keep in mind, however, that not everything Reagan did was “by the book”, supply-side-wise. He did, after all, come into office with a solid mandate to do something about the USSR and communism. Reagan made it clear that he wanted the U.S. to step up the pressure on the “Evil Empire” and end the Cold War, and people voted him in with this understanding. With all his campaigning against government spending, the one clear exception was in the area of defense spending, which rose significantly on his watch.

As this chart shows, defense spending didn’t rise all that much relative to our total economy, and in fact remained at historically modest levels in the 1980s. And we’ll leave it to others to decide once and for all whether Reagan’s increased defense spending was the straw that finally broke the USSR’s back, or whether the USSR would have died of internal decay anyway. What we know for sure is that increased defense spending was a major factor in the Reagan deficits, and also that this was not a supply-side policy.

Finally, it is beyond the scope of this extract to determine the precise degree to which Reagan and supply-side policies, versus Congress and unforeseeable circumstances, were culpable for the 1980s deficits. Here we will take the middle road: supply-side policies were probably partly to blame for the deficits, but almost certainly not as much as is generally believed.

It is also true that policies like the Economic Recovery Tax Act of 1981, which allowed more Americans to save via tax-advantaged retirement accounts, failed to increase the overall savings rate - another supposed failure of supply-side policies. According to Nouriel Roubini, “The private saving rate continued to decline slowly in the 1980s. In the 1973-1980 period, private saving averaged 7.8 percent of the economy, and dropped to 6.9% in 1986 and 4.8% in 1989. In other words, the saving rate was significantly lower after the 1981 tax cut than before it.” [xlv] One might suppose that the decade’s acceleration into mindless consumerism (which continues even today) overpowered the new incentives to save, but that is just speculation. At any rate, the bottom line is that not all of the supply-side policies produced their intended results. Yet taken together, there is overwhelming evidence that the policies were successful. Here we will consider three powerful pieces of evidence to support this view.

First is the story of what happened in Delaware between 1979 and 1988, when the state’s “top income tax rate was reduced to 7.7% from 19.8%”. After the rate was cut, “personal income tax revenue doubled and employment increased 38%. At the same time, the lowest-income 11% of taxpayers were removed from the tax rolls.” Also, Delaware’s “welfare caseload fell by 40%; the state’s bond rating (in 1979 the lowest in the nation) rose six times, and the unemployment rate fell to two percentage points below the national average, having begun two percentage points above it.” Wow! But there’s more. “Since 1980”, former Delaware governor Pete DuPont wrote, “employment in Delaware’s private sector has grown 67% faster than that of the average state and its personal income advanced 22% more rapidly than the U.S. average. In 1992, Delaware’s poverty rate was the lowest of all 50 states.” [xlvi] Wow again!
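A quick back-of-the-envelope check shows just how much Delaware’s tax base must have grown for those numbers to hold together. This is a deliberately crude sketch – it pretends the top rate applies to the entire base, which it does not – but it makes the Laffer-style point plain.

```python
# Back-of-the-envelope check on the Delaware figures cited above.
# Simplification (not how real tax schedules work): treat revenue as
# top_rate x taxable_base, so this is illustrative only.

old_rate, new_rate = 0.198, 0.077
revenue_ratio = 2.0  # "personal income tax revenue doubled"

# For revenue to double while the rate fell by more than half,
# the effective taxable base must have grown by this factor:
base_growth = revenue_ratio * old_rate / new_rate
print(f"Implied growth in the taxable base: {base_growth:.1f}x")
```

Under that crude assumption, the taxable base would have had to grow roughly fivefold – which is the supply-sider’s whole argument in miniature: the lower rate didn’t shrink the pie, the pie exploded.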

Behind all of this good news were of course the tax cuts, but also state spending “restraints” – read: they slashed wasteful spending – along with deregulation of the financial services industry and bi-partisan support of the total package. (Republicans and Democrats working together; hard to imagine today!) Probably, Delaware benefited from factors other than its adoption of supply-side policies and an inexplicable case of political goodwill, such as its close proximity to New York City and who knows what else. Nevertheless, its 1980s case study earns it poster-state stature for supply-siders.

Our second piece of evidence is as simple as it is unbiased. Go back to the Phillips Curve of a few pages ago, and look at what happened to inflation rates and unemployment rates in the U.S. from 1981 to 1992. Those rates peaked about the time that Reagan and supply-side policies entered the national picture. They stayed high for the first couple of years of Reagan’s administration, which is hardly surprising: economic policies, if they’re going to work at all, usually take several months to several years to be approved, implemented, and then do their thing. By 1984, the economy was clearly heading in the right direction, and continued to do so for many years thereafter, even through the Clinton years. Clinton, of course, is generally credited with getting the economy back on course after the recession of the early 1990s and ultimately returning the nation to budget surpluses. But a point of clarification: a great many people who knew what was going on at the time don’t exactly remember it that way.

Even before the election of 1992, the Bush administration and the Federal Reserve had already put in place the policy changes that would end the downturn. But as they say, timing is everything, and the recovery from the mild recession of the early 1990s only took visible hold just after Bill Clinton beat George H.W. Bush (who lost largely on the basis of the weak economy) in November. This was before Clinton had a chance to take any economic actions whatsoever. Clinton was a good President, with amazing skills in a variety of areas. But he surely didn’t end the recession, and his economic policies consisted mostly of not doing anything that would derail the booming economy he inherited from his predecessors.

As Lawrence Kudlow and Stephen Moore wrote in 2000, when the U.S. broke the record for the longest business cycle expansion in its history (previous record: 106 months), “America’s economic turnaround started in the early 1980s, a decade before Bill Clinton arrived in Washington. It was Reagan’s supply-side economic ideas…which unleashed a great wave of entrepreneurial-technological innovation that transformed and restructured the economy, resulting in a long boom of prosperity.” Clinton, they agree, “deserves credit for keeping the expansion moving. Along with Robert Rubin, his policies extended disinflation (low inflation), creating a countrywide tax cut effect that offset his mistaken 1993 tax increase. Free-trade measures during the mid-1990s also constituted a tax-cut stimulus effect.” [xlvii] Budget surpluses that came in Clinton’s second term might similarly be seen more as a result of the continued economic growth that earlier policies had made possible than of any specific actions that Clinton can claim to have taken. He was President at a good time in our country’s history. Given the choice between being smart and being lucky, lucky often looks better. Clinton had the advantage of being both, but in any case, the 1981-1992 economic turnaround in the U.S. hints solidly at the efficacy of supply-side policies.

The third and final item comes from perhaps the most credible of sources. Robert Lucas won the Nobel Prize in Economics in 1995, and wrote the following that year: “When I finished graduate school in 1963, I believed that the single most desirable change in the U.S. tax structure would be the taxation of capital gains as ordinary income.” In other words, investors (the rich) should pay ordinary income tax rates on the profits of their investments, or so a young Lucas believed. Lucas continued: “I now believe that neither capital gains nor any of the income from capital should be taxed at all.” Gee, that’s a pretty big turnaround. But let him continue….

“I have called this paper an analytical review of ‘supply-side economics’, a term associated in the United States with extravagant claims about the effects of change in the tax structure on capital accumulation. Under what I view as conservative assumptions, I estimated that eliminating capital income taxation would increase capital stock by about 35%.” Now here comes the punch line:

“The supply side economists, if that is the right term for those whose research we have been discussing, have delivered the largest free lunch that I have seen in 25 years of this business, and I believe we would be a better society if we followed their advice. The attraction of (supply-side economics) is not that it is pretty – though it can be – but that, given half a chance, it works.” [xlviii]

In a field where the bedrock principle is “trade-offs” – everything has a cost and there’s no such thing as a “free lunch” – Lucas’s comments are nothing less than astonishing. And remember: his aren’t the opinions of a Republican governor, a newspaper hack, or a high school economics teacher, but rather someone who was recognized by the Nobel committee for his work on the topic.

Certainly in all cases there were other important developments that prevent supply-side economics from taking full credit. Indeed, critics claim that these other developments (end of the Cold War, lower commodity prices, the explosion in technology, advances in communication, improved monetary policy by the Fed, etc.) deserve the credit more than supply-side policies do. These factors unquestionably aided in reducing both unemployment and inflation in the 1980s and 1990s. On the other hand, it is also true that most of those developments can be traced back to, or were at least aided by, supply-side policies. And so it is: economists and politicians are fully capable of arguing over most anything. But when all’s said and done, the smart money ought to go with Robert Lucas – he of the Nobel Prize in Economics. Supply-side policies may have provided the closest thing the real world has to a genuinely free lunch.

* * * * * * * * * * * * *


Income inequality exists in the U.S. and has grown in recent decades.

But it’s not nearly as bad as folks would have you believe. The rich are much better off today than they were 30 or 40 years ago. But most poor Americans are better off today than poor Americans were back then, and the same can be said about most middle-class Americans, even if their gains have lagged behind those of the rich. The familiar pie analogy helps. Virtually every American has a bigger piece of pie today than in the past. The rich have a (much) bigger share of the pie, but since the pie itself has expanded dramatically, their bigger share hasn’t come at the expense of others. Some of the statistics cited as showing the opposite are unrealistic, comparing apples to oranges.

While the “rich are getting richer”, those in lower income categories are hardly confined to staying there. The dynamics of the U.S. economy are such that people are able to rise from lower to higher income levels; yesterday’s low income Americans are often today’s middle-class Americans. Similarly, while the Rockefellers, Kennedys, Hiltons, et al. are living larger today than they ever were, a large chunk of today’s wealthy Americans worked their way to the top fairly recently. As a result, we end up with crass Donald Trumps, grossly overpaid CEOs, and $15 million-a-year baseball players. But that’s a small cost for the opportunities available to all, rather like the price we grudgingly pay (e.g. Nazi party parades and filthy hip-hop music) for freedom of speech in this country.

There are reasons why the gains for lower income Americans haven’t kept up with those of the wealthy, and they really don’t have that much to do with tax cuts for the rich. Wealthy Americans pay much more tax in absolute dollar terms than Americans of other income levels. The much bigger issue is that low and middle-income Americans live in a world that is vastly more inter-connected today than in the 1970s, and modern realities deny them the good-paying jobs that were formerly available even to the poorly educated, almost as an American birthright. The Americans that economist Richard Freeman called overeducated in 1976 “are plainly undereducated today. Only about a third of the population graduates from college. Among the poor, there has been only a very slight increase in college-graduation rates.” [xlix] Foreign workers are often better educated, have a stronger work ethic, and will work for less than Americans. Jobs go to the lowest bidder these days, and more and more, that’s not Americans. It’s no use ruing the trend or pining for the past, because it is what it is and we’re not going back any time soon.

The answer, as most who understand the issues see it, is not to penalize the successful for becoming wealthy. In more cases than not, they got where they are fair and square. As Gary Becker, a Nobel-winning economist, states, “common sense tells you that a small increase in taxes when rates are relatively low, as they are now, isn’t going to curb people’s animal spirits. Higher taxes in and of themselves, however, won’t cure inequality.” Freeman concurs, adding that “if you’re worried about inequality, it’s hard to see any alternative (to providing Americans a better education). Hamburger flippers simply don’t command a high wage. We can pass laws to change that — a minimum price for cheeseburgers, maybe — or we can, finally, invest in teaching the flippers to do something else.” [l] So fine – let us raise the tax rates that the Wall Street crowd pays, and raise the top income tax rates that the exorbitantly wealthy pay. That will make everybody feel better, and may even help with the deficits. But that’s not going to solve the whole problem, nor even a significant part of it.

The answer, as the author sees it, also has to include some sea change in attitudes among Americans, especially young Americans. Until the sense of entitlement, the apathy, the ideas of “give me” and “what’s the least I need to do?” that are too commonplace in the U.S. today change, the pressures that keep lower income Americans down won’t change much.

Income inequality exists in the U.S. and has grown in recent decades.

But it’s not nearly as bad as folks would have you believe. Déjà vu! But this time we mean that income inequality isn’t such a bad thing. The “perfect world” depicted by the line of perfect equality on the Lorenz Curve is a world of lower national growth and less opportunity for individual advancement. The increased inequality in the U.S. has its roots in the nation’s tradition of individual independence and self-sufficiency. “Leave me alone and let me see what I can do!” is a particularly American sentiment, one that is not always shared or fully understood by our European cousins and others around the world.

The debate on the inequality issue might best be characterized as a contest between those on one side who want the U.S. to be a place where everyone is guaranteed a good living, no matter what, and those on the other side who say it should be a place where everyone is guaranteed a fair shot at having a good living. There’s a big difference. Virtually every American under our current system gets enough to eat, a place to sleep, a decent education, and opportunities to make something of themselves. With smart choices, hard work, and just a little luck, they have a good chance for a rosy future. On the other hand, if they’re lazy, make bad choices, and/or waste their time in school, then society shouldn’t be held to account for their failure or the fact that they can’t make ends meet on a minimum wage job. Yes, some folks just can’t catch a break despite doing everything right. We should all want to help them as much as possible, but they are the exception rather than the rule.

By the way, let’s not even bother with the whole “not all kids get the same education or have the same home life” argument. While that is certainly true, it almost doesn’t even matter. Again, as a veteran inner-city high school teacher, the author has seen countless kids from low-income homes, from non-English speaking homes, kids whose parents are addicts, in jail or dead, kids who are literally living on the streets, kids who go to low-performing schools. And these kids – if they want, if they try, if they’re motivated – they make it. They end up at UCLA, at Harvard, or at lesser universities, but they make it. Take, as one example, Fabian Núñez, born of immigrant Mexican parents who lived in Tijuana before moving to a low-income neighborhood near downtown San Diego in the 1970s. Núñez, who would surely protest many of the conclusions of this paper, was until recently Speaker of the State Assembly in California. Núñez, and folks who share his political views, may well argue for a U.S. with more equality, while missing the larger point that it is the competitive, relatively unsheltered nature of the place that is largely responsible for so many success stories like his own.

America’s income inequality, aside from reflecting our national character, has also increased as a result of the supply-side mentality that has dominated U.S. politics for most of the past 25 years. If that sounds like an accusation, it’s not. The preponderance of evidence shows that supply-side policies have benefited Americans – some more than others, of course. But the main features of America’s economy for two and a half decades beginning in 1983 were relatively stable prices, low unemployment, and an improved standard of living for virtually everyone. We take those for granted now, forgetting what things were really like in the 1970s, and are therefore free to find fault in less critical issues such as increased inequality.

Then there’s the stock market: up more than 1000% since Ronald Reagan first took office, a clear sign of the nation’s healthy economy. Yet another boon for the wealthy? Again, yes, but also NO. 401(k) retirement plans alone, created as part of the Reagan revolution in 1981, now contain over $2 trillion, much of which is invested in stocks by some 45 million average Joes and Janes. [li] And don’t forget the hundreds of billions in IRA, 403(b), 457(b), and SEP retirement plans, not to mention the countless billions in individual investment accounts. Americans’ investment wealth has soared in the last couple of decades, and not just for rich Americans. More than half of all Americans now own stock either directly, or indirectly through retirement plans and mutual funds. [lii]
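For perspective, it takes only a couple of lines of arithmetic to translate that headline figure into an annual rate. The 11x multiple (a 1000% gain) and the roughly 26 years from 1981 to 2007 are approximations drawn from the text, not exact index data.

```python
# Rough arithmetic behind "up more than 1000% since Reagan took office".
# Assumptions: an 11x rise (a 1000% gain) over roughly 26 years (1981-2007);
# both figures are approximations from the text, not exact index data.

growth_multiple = 11.0  # 1000% gain means 11x the starting level
years = 26              # ~1981 to ~2007

# Compound annual growth rate implied by that rise
cagr = growth_multiple ** (1 / years) - 1
print(f"Implied annualized return: {cagr:.1%}")
```

That works out to an annualized return in the neighborhood of 9-10% – a long, steady compounding that benefited anyone with a 401(k), not just the Wall Street crowd.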

Robert J. Samuelson, this time writing in his 2001 book Untruth, points out that between 1989 and 1995, stock ownership rose from 33% to 48% for middle-income families, while nearly doubling (from 13% to 25%) among low-income families. Americans’ greatest source of wealth has traditionally been their homes; real estate net worth was more than twice the value of stock wealth in 1990. But by 1997, stock wealth was 30% greater than that in real estate.

Trickle-down economics did work and in all probability will continue to work – for those who are smart enough and hard-working enough to deal with a changing, competitive, global economy. Yes, it is harder now for a lot of folks in this new environment, but taxing the rich won’t bring back the good ol’ days, and neither will playing ostrich and trying to hide from the rest of the world. That world sees our American Dream clearly and constantly in this modern age of global communication, technology, and entertainment. They’re hungry to have our lifestyle for themselves, and they’re going to take it from those among us who are foolish enough to count on anyone other than themselves to preserve their standard of living. Income is unequally distributed; it’s a fact of life. And mostly, that’s not so bad.

Jon S. Strebler

October 17, 2007
