This post is one of several previewing the book I’m writing on Universal Basic Income (UBI) experiments, and it is the second of two reviewing the five Negative Income Tax (NIT) experiments conducted by the U.S. and Canadian governments in the 1970s. This post draws heavily on my earlier work, “A Failure to Communicate: What (If Anything) Can We Learn from the Negative Income Tax Experiments?”
Last week I argued that the results from the NIT experiments for various quality-of-life indicators were substantial and encouraging and that the labor-market effects implied the policy was affordable. As promising as the results were to the researchers involved in the NIT experiments, they were seriously misunderstood in the public discussion at the time. The discussion in Congress and in the popular media displayed little understanding of their complexity, and the results were spun or misunderstood and used in simplistic arguments to reject the NIT, or any form of guaranteed income, out of hand.
The experiments were of most interest to Congress and the media during the period from 1970 to 1972, when President Nixon’s Family Assistance Plan (FAP), which had some elements of an NIT, was under debate in Congress. None of the experiments were ready to release final reports at the time. Congress insisted that researchers produce some kind of preliminary report, and then members of Congress criticized the report for being “premature,” just as the researchers had initially warned.[i]
Results of the fourth and largest experiment, SIME/DIME, were released while Congress was debating a policy proposed by President Carter, which had already moved quite a way from the NIT model. Dozens of technical reports containing large amounts of data were simplified down to two statements: it decreased work effort, and it supposedly increased divorce. The smallness of the work disincentive effect drew hardly any attention. Although researchers going into the experiments agreed that there would be some work disincentive effect and were pleased to find it was small enough to make the program affordable, many members of Congress and popular-media commentators acted as if the mere existence of a work disincentive effect was enough to disqualify the program. The public discussion displayed little, if any, understanding that the 5%-to-7.9% difference between the control and experimental groups was not a prediction of the national response. Nonacademic articles reviewed by one of the authors[ii] showed little or no understanding that the response was expected to be much smaller as a percentage of the entire population, that it could potentially be counteracted by the availability of good jobs, or that it could be the first step necessary for workers to command higher wages and better working conditions.
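To make that distinction concrete, consider a purely illustrative calculation (the eligibility share here is hypothetical, not an estimate from the experiments): if only about one-fifth of the national population would actually receive NIT payments, then even a 7.9% decline in work effort among recipients would amount to something on the order of 0.2 × 7.9% ≈ 1.6% of total work hours nationwide, before counting any offsetting effects such as employers raising wages to attract workers.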
United Press International simply got the facts wrong, saying that the SIME/DIME study showed that “adults might abandon efforts to find work.” UPI apparently did not understand the difference between increasing search time and abandoning the labor market entirely. The Rocky Mountain News claimed that the NIT “saps the recipients’ desire to work.” The Seattle Times presented a relatively well-rounded understanding of the results, but despite this, concluded that the existence of a decline in work effort was enough to “cast doubt” on the plan. Others went even further, saying that the existence of a work disincentive effect was enough to declare the experiments a failure. Headlines such as “Income Plan Linked to Less Work” and “Guaranteed Income Against Work Ethic” appeared in newspapers following the hearings. Only a few commentators, such as Carl Rowan of the Washington Star (1978), considered that it might be acceptable for people working in bad jobs to work less, though even he could not figure out why the government would spend so much money to find out whether people work less when you pay them to stay home.[iii]
Senator Daniel Patrick Moynihan, one of the few social scientists in the Senate, wrote, “But were we wrong about a guaranteed income! Seemingly it is calamitous. It increases family dissolution by some 70 percent, decreases work, etc. Such is now the state of the science, and it seems to me we are honor bound to abide by it for the moment.” Senator Bill Armstrong of Colorado, mentioning only the existence of a work disincentive effect, declared the NIT “an acknowledged failure,” writing, “Let’s admit it, learn from it, and move on.”[iv]
Robert Spiegelman, one of the directors of SIME/DIME, defended the experiments, writing that they provided much-needed cost estimates that demonstrated the feasibility of the NIT. He said that the decline in work effort was not dramatic, and he could not understand why so many commentators drew conclusions so different from the experimenters’. Gary Burtless (1986) remarked, “Policymakers and policy analysts … seem far more impressed by our certainty that the effective price of redistribution is positive than they are by the equally persuasive evidence that the price is small.”[v]
This public discussion certainly displayed “a failure to communicate.” The experiments produced a great deal of useful evidence, but for the most part they failed to raise the level of debate either in Congress or in public forums. The literature review reveals neither supporters nor opponents who appeared to have a better understanding of the likely effects of the NIT or UBI in the discussions following the release of the experiments’ results in the 1970s.[vi]
Whatever the causes, an environment with a low understanding of complexity is highly vulnerable to spin offering simplistic, if nearly vacuous, interpretations. All sides spin, but in the late-1970s NIT debate, only one side showed up. The guaranteed income movement that had been so active in the United States at the beginning of the decade had declined to the point that it could provide little or no counter-spin to the enormously negative discussion of the experimental results in the popular media.
Whether the low information content of the discussion in the media resulted more from spin, sensationalism, or honest misunderstanding is hard to determine. But whatever the reasons, the low-information discussion of the experimental results put the NIT (and, in hindsight, UBI by proxy) in an extremely unfavorable light, even though the scientific results were mixed-to-favorable.
The scientists who presented the data are not entirely to blame for this misunderstanding. Neither can all of it be blamed on spin, sound bites, sensationalism, a conscious desire to make an oversimplified judgment, or the failure of reporters to do their homework. Nor can all of it be blamed on the people involved in political debates not paying sufficient attention. It is inherently easier to understand an oversimplification than the genuine complexity that scientific research usually involves, no matter how painstakingly it is presented. It may be impossible to communicate the complexities to most nonspecialist readers in the time a reasonable person can devote to the issue.
Nevertheless, everyone needs to try to do better next time. And we can do better. Results from experiments conducted in Namibia and India in the late 2000s and early 2010s were much better understood, as were results from Canada’s Mincome experiment, which sadly did not come out until more than two decades after that experiment concluded.
The book I’m working on is an effort to help reduce misunderstandings of future experiments. It is aimed at a wide audience because it focuses on the problem of communication from specialists to nonspecialists. I hope to help researchers involved in current and future experiments design and report their findings in ways that are more likely to raise the level of debate; to help researchers not involved in the experiments raise the level of discussion when they write about the experiments’ findings; to help journalists understand and report experimental findings more accurately; and to help interested citizens of all political predispositions see beyond any possible spin and media misinterpretations to the complexities of the results of this next round of experiments, whatever they turn out to be.