Do we need more scientists?

Fall 2003

By Michael S. Teitelbaum

For much of the past two decades, predictions of an impending shortage of scientists and engineers in America have gained increasingly wide currency. The country is failing to produce scientists and engineers in numbers sufficient to fulfill its economic potential, the argument runs. The supposed causes are weaknesses in elementary, secondary, or higher education, inadequate financing of the fields, declining interest in science and engineering among American students, or some combination of these. Thus it is said that the United States must import students, scientists, and engineers from abroad to fill universities and work in the private sector – though even this talent pool may dry up eventually as more foreign nationals find attractive opportunities elsewhere.

Yet alongside such arguments – sometimes in the very same publications in which they appear – one learns of layoffs of tens of thousands of scientists and engineers in the computer, telecommunications, and aerospace industries, of the deep frustration and even anger felt by newly minted Ph.D.s unable to find stable employment in traditional science and engineering career paths, and of senior scientists and engineers who are advising undergraduates against pursuing careers in their own fields. Why the contradictory reports on professions routinely deemed critical to the success of the American economy? Is it possible that there really is no shortage in these fields?

A history of gloomy forecasts

Pronouncements of shortages in American science and engineering have a long history. They date at least to the late 1950s, around the time the USSR launched Sputnik, the first orbiting satellite, prompting concerns that an era of Soviet technological advantage over the United States had emerged. The United States responded with massive public investments in science and engineering education. This led to sharp increases in the numbers pursuing such studies, and a surfeit in the 1970s of entry-level scientists and engineers.

The recent history of shortage forecasts begins in the mid-1980s, when the then-leadership of the National Science Foundation (NSF) and a few top research universities began to predict “looming shortfalls” of scientists and engineers in the next two decades. Their arguments were based upon quite simplistic demographic projections produced by a small policy office reporting to the NSF director – projections that earlier had been sharply criticized by the NSF’s own science and engineering workforce experts.

Only a few years later, it became apparent that the trends actually pointed toward a growing surplus of scientists and engineers. In 1992, the House Committee on Science, Space and Technology’s Subcommittee on Investigations and Oversight conducted a formal investigation and hearing about the shortfall projections, leading to much embarrassment at the NSF. In his opening remarks at the hearing, the subcommittee’s Chairman, Democrat Howard Wolpe of Michigan, declared that the “credibility of the [National Science] Foundation is seriously damaged when it is so careless about its own product.” Sherwood Boehlert, the subcommittee’s ranking Republican and now chair of the full House Science Committee, called the NSF director’s shortfall predictions “the equivalent to shouting ‘Fire’ in a crowded theater.” They were “based on very tenuous data and analysis. In short, a mistake was made,” he said. “Let’s figure out how to avoid similar mistakes, and then move on.”

Boehlert’s advice was not heeded. Only five years later, during the high-tech boom of the late 1990s, an industry association known as the Information Technology Association of America (ITAA) began to produce a series of reports asserting burgeoning gaps and shortages of information-technology workers, based on proprietary surveys of what it termed “job openings.” The first ITAA report claimed that some 190,000 information-technology jobs could not be filled in 1997. The second concluded that there were 346,000 open positions in 1998. The Department of Commerce then produced its own report, which drew heavily upon the findings of the two ITAA reports.

The General Accounting Office (GAO) published a sharply critical assessment of these three related reports in 1998. It concluded that all of their shortfall estimates were questionable due to the studies’ weak methodologies and very low response rates. Unabashed, ITAA returned to the fray in 2000. Its third report asserted that over 843,000 information-technology positions would go unfilled that year due to a shortfall of qualified workers. Despite withering criticism from the GAO, the ITAA reports provided useful political support for the successful lobbying campaign for dramatic expansion – to the current level of 195,000 per year – of the H-1B visa, the temporary-visa program for foreign “specialty workers” that comprise the bulk of foreign science and engineering professionals being admitted to work in the United States.

Remarkably, even the recent economic downturn does not seem to have deterred proponents of the workforce shortage theory. Take NASA administrator Sean O’Keefe, who invoked a shortage argument during testimony before the House Science Committee in October 2002 on NASA’s hiring problems. “Throughout the Federal government, as well as the private sector, the challenge faced by a lack of scientists and engineers is real and is growing by the day,” O’Keefe told the committee.

The following month a new organization called Building Engineering and Science Talent (BEST) published a report entitled “The Quiet Crisis: Falling Short in Producing American Scientific and Technical Talent.” This “quiet crisis,” the report’s authors noted, “stems from the gap between the nation’s growing need for scientists, engineers, and other technically skilled workers and its production of them…. This ‘gap’ represents a shortfall in our national scientific and technical capabilities.”

Some business leaders and academics are also advancing the shortage thesis despite the economic downturn. Two reports with findings similar to the BEST study subsequently emerged in the spring of 2003. One was a report addressed to the Government-University-Industry Research Roundtable (GUIRR) of the National Academies, and the other was prepared by the Committee for Economic Development (CED), an organization of business and education leaders.

Even some associated with the NSF seem unchastened by the embarrassing failure of the “shortfall” projections of a decade ago. In June 2003, the National Science Board, the NSF’s governing body, released for public comment a draft task-force report addressing the “unfolding crisis” in science and engineering. “Current trends of supply and demand for [science and engineering] skills in the workplace indicate problems that may seriously threaten our long-term prosperity, national security, and quality of life,” it said.

The evidence

The profound irony of many such claims is the disjuncture between practice within the scientific and engineering professions – in which accurate empirical evidence and careful analyses are essential – and practice among promoters of “shortage” claims in the public sphere, where the analytical rigor is often, to be kind, quite weak. Few, if any, of the market indicators that would signal a shortage are present. Strong upward pressure on real wages and low unemployment rates relative to other education-intensive professions are two such indicators, and both are conspicuously absent from the contemporary marketplace.

A RAND study released earlier this year assembled the available data from its own research, the NSF, the Census Bureau, the Bureau of Labor Statistics (BLS), the National Research Council (NRC), and several scientific associations. What RAND found largely discredits the case being made for labor shortages. First, RAND noted the obsolescence of the available data, the newest of which refers mostly to 1999 or 2000. RAND called this “especially unfortunate” given that “the [science and engineering] workforce situation has arguably changed significantly” since those heady times of the dot-com, information technology, and telecom booms. More important, even for data from the boom period, RAND’s analysis showed that “neither earnings patterns nor unemployment patterns indicate [a science and engineering] shortage in the data we were able to find.”

Recent government unemployment data tend to confirm these findings. Data for the first and second quarters of 2003 released by the Bureau of Labor Statistics showed surprisingly high unemployment rates in science and engineering fields. Even the recently “hot” computer and mathematical occupations are experiencing unemployment of 5.4 to 6 percent. For computer programmers, the numbers range from 6.7 to 7.5 percent. All engineering (and architecture) occupations taken together are averaging 4.4 percent unemployment, while the rates for the high-tech fields of electrical and electronic engineering are in the range of 6.4 to 7 percent. Reported unemployment in the life, physical, and social sciences ranges from 2.8 to 4.1 percent. Many of these numbers are remarkably high for such high-skill occupations. Unemployment for the whole of the U.S. workforce averaged about 6 percent over the same period, and highly educated groups such as scientists and engineers normally have substantially lower unemployment rates than the national average.

In the natural-science disciplines, which employ far fewer people than engineering, numerous reports by leading scientists have been pointing to increasingly unattractive career prospects for newly minted Ph.D.s. As one example among many, a 1998 National Academy of Sciences (NAS) committee on careers in the life sciences – the largest field in the natural sciences – reported that “recent trends in employment opportunities suggest that the attractiveness to young people of careers in life-science research is declining.” More recent data from 2002 showed that key indicators of career problems had continued to deteriorate since then, prompting Shirley Tilghman, the NAS committee’s chair and current president of Princeton University, to tell Science magazine that she found the 2002 data “appalling.” She said the data reviewed earlier by the committee looked “bad” at the time, “but compared to today, they actually look pretty good.” The 2003 RAND study concurred. “Altogether, the data … do not portray the kind of vigorous employment and earnings prospects that would be expected to draw increasing numbers of bright and informed young people into [science and engineering] fields,” RAND concluded.

It is of course quite possible to have “appalling” early career problems in some areas of science and engineering alongside very good career prospects in others. Administrators of federal technical agencies such as NASA do face special problems such as hiring freezes or other ongoing personnel or financial constraints. Senior personnel at NASA and other agencies have been offered substantial early retirement incentives while hiring procedures to replace them tend to be cumbersome and slow. In “hot” fields that are new or growing rapidly, like bioinformatics, human resources are inevitably in short supply. And truly exceptional scientists and engineers will always be few in number and vigorously pursued by employers.

Still, in most areas of science and engineering at present, the available data show sufficient numbers or even surpluses of highly qualified candidates with extensive postgraduate education. This is especially the case in the academy, which has become risk-averse about replacing departing tenured faculty with tenure-track junior positions. Instead, many universities in the United States have been filling such open slots with temporary and part-time appointees they find in ample pools of highly educated applicants. Indeed, advertisements for a single tenure-track assistant professorship often attract hundreds of applications from recent Ph.D.s. Similar circumstances prevail for engineers and scientists in large sectors of the U.S. economy such as telecommunications, computing, and software, sectors in which lurching market collapses and large bankruptcies have greatly weakened demand for their services.

What does the future hold?

Many recent shortage claims point not to current circumstances, but to projections of future demand. What can be said with reasonable assurance about such predictions?

Unfortunately, labor-market projections for scientists and engineers that go more than a few years into the future are notoriously difficult to make. An expert workshop convened by the National Research Council in 2000 reported universal dissatisfaction with past projection efforts, and stated flatly that “accurate forecasts have not been produced.”

The workshop report commented in particular upon one such study that is often cited by shortage proponents: the Bureau of Labor Statistics’ “Occupational Outlook.” The most recent “outlook,” completed in 2001, projected that computer-related occupations – including software engineers and computer-network and systems administrators and analysts – would likely be the fastest growing nationwide over the next decade. But the NRC workshop report noted the limitations inherent in such projections:

The omission of behavioral responses makes the BLS outlook unreliable as a basis for decisions on federal funding designed to respond to anticipated shortages…. The BLS outlook neglects many dimensions in which adjustment may occur, including training and retraining, and especially in response to changes in wages…. No response is built into time trends in relative occupational wages on either the demand side (where employers substitute capital for labor when relative wages rise) or the supply side (where students move toward occupations in which relative wages are rising).
One might add that many science and engineering fields are heavily influenced by federal funding, which makes projections of future workforce demand dependent upon quite unpredictable political decisions and world events. To their credit, the authors of the BLS Occupational Outlook themselves emphasize the need for caution. “The BLS projections were completed prior to the tragic events of September 11 … [and] the nature and severity of longer-term impacts [of the terror attacks] remains unclear,” the authors write. “At this time, it is impossible to know how individual industries or occupations may be affected over the next decade.”

Owing to such events and unforeseeable changes in the market, no one can know what the U.S. economy and its science and technology sectors will look like in 2010. It follows that no credible projections of future “shortages” exist on which to base sensible policy responses.

Misdirected solutions

Not only are claims of current or future shortages inconsistent with all available quantitative evidence, but, alas, many of the solutions proposed to deal with the putative “crisis” are profoundly misdirected. The most popular proposed solutions seem to focus mainly on the supply side, urging action to increase the numbers of U.S. students pursuing degrees in science and engineering. Recommendations often include calls for reform of the U.S. elementary and secondary education systems, especially their inadequacies in science and mathematics; informational efforts to promote knowledge of such careers among U.S. secondary school students and of the science and math prerequisites required to pursue them at the university level; financial and other incentives to increase interest in such fields among U.S. students; and increases in the number of “role models” in science and engineering fields for women and underrepresented minorities. Other commentators, apparently more pessimistic about the abilities of U.S. students, recommend increasing the numbers of students or workers from abroad to meet the needs of the U.S. economy.

This focus on supply to the virtual exclusion of considering demand is not warranted. However desirable many of these proposals may be on other grounds, they are unlikely to be very effective in attracting U.S. students to careers in science and engineering unless employment in these fields is sufficiently attractive to justify the large personal investments needed to enter them. Surprisingly enough, it is far from common to hear this rather obvious point raised in public discussions of the subject. To put the matter plainly, those who are concerned about whether the production of U.S. scientists and engineers is sufficient for national needs must pay serious attention to whether careers in science and engineering are attractive relative to other career opportunities available to American students. And yet little such attention has been forthcoming in recent years.

The qualifications for careers in engineering and especially in science involve considerable personal investments. The direct financial costs of higher education in the sciences can be staggering, depending on the financial circumstances of undergraduates and their families, whether the institution is private or public, whether postbaccalaureate education is required, and whether such education is subsidized.

Engineering and science differ substantially in these characteristics. For engineering, only the baccalaureate is normally required for entry into the profession. Most engineering B.S. degrees are earned at state universities, which are heavily subsidized by state governments. In addition, direct financial aid is often available for those in financial need. In contrast, professional careers in the sciences now commonly require completion of the Ph.D. and increasingly require subsequent postdoctoral work. The direct financial costs of this extensive graduate and postdoctoral work are typically heavily subsidized by both government and universities. Yet even with such subsidies, the personal costs to qualify as a scientist can be quite high – mainly due to the lengthening time required to attain the degree and complete postdoctoral training.

The extreme case is that of the biosciences, which account for half of all Ph.D.s awarded in the natural sciences. Over the past couple of decades, the average period of required postbaccalaureate study has increased dramatically, to between nine and twelve years from about seven to eight years. The Ph.D. itself has stretched out to seven or eight years from about six, while the now-essential postdoctoral apprenticeship has lengthened to between two and five years from one or two in decades past. In career terms, this means that most young bioscientists cannot begin their careers as full-fledged professionals until they are in their early thirties or older, and those in academic positions usually are not able to secure the stable employment that comes with tenure until their late thirties. Unsurprisingly, the idea of spending nine to twelve years in postbaccalaureate studies before one is qualified for a real job may be unattractive to substantial numbers of would-be young scientists.

There are also concerns about negative impacts on scientific creativity. Wendy Baldwin, until recently the deputy director for extramural research at the National Institutes of Health (NIH), notes concerns arising at NIH over “the long-held observation that a lot of people who do stunning work do it early in their careers.” Bruce Alberts, in his 2003 President’s Address to the National Academy of Sciences, described as “incredible” the fact that even though NIH funding has doubled in only the past five years, the average age of first-time grant recipients has continued increasing. “Many of my colleagues and I were awarded our first independent funding when we were under 30 years old … [now] almost no one finds it possible to start an independent scientific career under the age of 35,” Alberts told the academy. Nobel laureate and co-discoverer of DNA structure James Watson agrees. As he put it in characteristically pithy terms in a 1992 interview, “I think you’re unlikely to make an impact unless you get into a really important lab at a young age…. People used to be kings when they were nineteen, generals. Now you’re supposed to wait until you’re relatively senile.”

It’s not hard to see why this also portends ill for science careers at a personal level. Delaying career initiation until one’s thirties poses inherent conflicts with marriage and family life. Many who might be attracted to careers in science are justifiably concerned that such a career choice comes at too high a personal cost.

The problem has not gone unnoticed. Many scientific societies have decried the trend toward longer degrees and postdoctoral apprenticeships, and U.S. universities have created more than 70 new two-year graduate science degrees designed for those who wish to pursue scientific careers outside of the academy. (Start-up costs of many of these have been supported by Sloan Foundation grants.) These new degrees, called “Professional Science Master’s degrees,” have been attracting interest among U.S. science majors who might otherwise choose paths leading to business or law school.

Opportunity costs

Some senior scientists stress that no one should pursue a science career to get rich, which is a point well taken. Yet it would be unwise simply to ignore how alternative career paths compare in strictly economic terms. The nine- to twelve-year period that a would-be bioscientist now must spend in a student role or low-paid postdoctoral position means that a substantial fraction of lifetime income that would otherwise be earned must be forgone. This is what economists term “opportunity costs,” and these are by no means insignificant. A 2001 study conducted by a team of leading economists and biologists for the American Society for Cell Biology found that bioscientists experience a “huge lifetime economic disadvantage” on the order of $400,000 in earnings discounted at 3 percent compared to Ph.D. fields such as engineering, and about $1 million in lifetime earnings compared with medicine. When expected lifetime earnings of bioscientists are compared with those of M.B.A. recipients from the same university, the study’s conservative estimates indicate a lifetime earnings differential of $1 million exclusive of stock options. When stock options are included, the differential doubles to $2 million.
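
To see how such opportunity-cost figures arise, consider a back-of-the-envelope sketch. The salary streams below are purely hypothetical placeholders, not numbers from the American Society for Cell Biology study; only the 3 percent discount rate and the roughly ten-year training period are taken from the discussion above. The point is simply the mechanics: the years spent on stipends come first and so are discounted least, while the higher salaries of the alternative path begin almost immediately.

    # Back-of-the-envelope comparison of discounted lifetime earnings for two
    # stylized career paths. All salary figures are hypothetical; only the
    # 3 percent discount rate follows the study cited above.

    DISCOUNT_RATE = 0.03
    CAREER_YEARS = 40  # years from the baccalaureate to retirement

    def present_value(income_by_year, rate=DISCOUNT_RATE):
        """Discount a stream of annual incomes back to the year of the B.S."""
        return sum(income / (1 + rate) ** year
                   for year, income in enumerate(income_by_year))

    # Path A: ten years of graduate stipends and postdoctoral pay, then a
    # faculty-level salary for the remainder of the career.
    bioscientist = [25_000] * 10 + [70_000] * (CAREER_YEARS - 10)

    # Path B: two years of professional school, then a higher salary at once.
    mba_holder = [0] * 2 + [110_000] * (CAREER_YEARS - 2)

    gap = present_value(mba_holder) - present_value(bioscientist)
    print(f"Discounted lifetime earnings gap: ${gap:,.0f}")

With these illustrative numbers the gap comes to roughly a million dollars, driven mostly by the decade of forgone professional salary at the start of the scientific career.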

In smaller scientific fields such as physics and chemistry, where Ph.D. programs are shorter and lengthy postdoctoral work less universal, the differentials are smaller but still substantial. Given the direct financial costs and opportunity costs, careers in science and engineering must offer significant attractions relative to other career paths available to American students. College graduates with demonstrated talent and interest in science and mathematics can choose to go to medical school, law school, or business school; they can pursue other professional education; or they can enter the workforce without graduate degrees.

The options available to most foreign students – at least for those from poorer countries – are completely different. Most do not have the option to study at U.S. medical, law, or business schools (due to the high costs and lack of subsidies), nor can they easily enter the U.S. workforce directly. In contrast, science Ph.D. programs at many American universities actively recruit and subsidize graduate students and postdoctoral fellows from China, India, and elsewhere, from which positions many are able to move on to employment in the United States.

There are, of course, many significant noneconomic rewards associated with careers in science and engineering: the intellectual challenge of research and discovery; the life of the mind in which fundamental puzzles of nature and the cosmos can be addressed; the potential to develop exciting and useful new technologies. For some, these attractions make science and engineering careers worthy of real sacrifices – they are “callings” rather than careers, analogous to religious or artistic vocations. Happily, a number of talented students will decide, based on personal values and commitments, to pursue graduate degrees and careers in science or engineering, even with full knowledge that the career paths may be unattractive in relative terms. Yet it is also true that others with strong scientific and mathematical talents will decide that a better course for their lives would be to go directly into the workforce rather than to pursue additional scientific study, or to seek professional degrees in business, law, or other fields.

The politics of shortages

Public discourse about these issues is mired in paradox. There are energetic claims of “shortages” of engineers, while unemployment rates are high and mid-career engineers face increasing job instability. There are reprises of earlier “shortage” claims about scientists, while undergraduates demonstrating high potential in science and math increasingly seem to be attracted to other careers. Some emphasize the need for K-12 reform, even though very large numbers of entering college freshmen intend to major in science or engineering but later choose otherwise. The NIH research budget has doubled within only a few years, but the average age at which scientists win their first research grants is rising. Why are shortage claims so persistent despite so much evidence to the contrary?

On this issue, where one stands depends upon where one sits. Most of the assertions of current or impending shortages, gaps, or shortfalls have originated from four sources: university administrators and associations; government agencies that finance basic and applied research; corporate employers of scientists and engineers and their associations; and immigration lawyers and their associations.

The economist Eric Weinstein has uncovered documentary evidence suggesting that the real intent of some of those involved in the 1980s “shortfall” alarms from NSF may have been to limit wage increases for Ph.D. scientists. Whether or not such motivations underlay that episode, we can certainly appreciate the various incentives that may currently spur some to endorse such claims. Universities want to fill their classrooms with fee-paying undergraduates and to finance their research with external funding, and to do both they recruit graduate students and postdoctoral fellows to teach those undergraduates and to staff their research laboratories. Government science-funding agencies may find rising wages problematic insofar as they result in increased costs for research. Meanwhile, companies want to hire employees with appropriate skills and backgrounds at remuneration rates that allow them to compete with other firms that recruit lower-wage employees from less affluent countries. If company recruiters find large numbers of foreign students in U.S. graduate science and engineering programs, they feel they should be able to hire such noncitizens without large costs or lengthy delays. Finally, immigration lawyers want to increase demand for their billable services, and especially demand from the more lucrative clients such as would-be employers of skilled foreign workers.

None of these groups is seeking to do harm to anyone. Each finds itself operating in response to incentives that are not entirely of its own making. But a broad commonality of interests exists among these disparate groups in propagating the idea of a “shortage” of native-born scientists and engineers. Moreover, claims of shortages in these fields are attractive because they have proven to be effective tools to gain support from American politicians and corporate leaders, few of whom would claim to be experts on labor markets. As noted earlier, the dubious reports from the ITAA were used successfully to convince the Congress to triple the size of the H-1B visa program in 2000. In late 2002, a leading lobbyist for the National Association of Manufacturers, responding to criticism that shortage claims cannot be supported by credible evidence, put the matter succinctly: “We can’t drop our best selling point to corporations,” he explained.

Such a short-term view is naturally attractive to lobbyists because it works politically. But it may turn out to be a serious error over a longer period of time. Claims of impending shortages can easily become self-fulfilling prophecies if, as in the past, government responds by subsidizing education or increasing visas for foreign workers without seriously considering the effects such actions may have upon the attractiveness and sustainability of career paths for such professionals. Action along these lines could create an even larger surfeit of scientists and engineers – one that drives down the number of Americans willing to enter these professions and, paradoxically, creates the very problem it seeks to address.

Instead of raising the false flag of shortages, those concerned about the future of science and engineering in the United States should encourage objective appraisals of current career paths, as well as innovations in higher and continuing education designed for more agile adjustments to inevitable changes in these dynamic fields. The overarching goal should be to find ways to make these careers attractive relative to the alternatives, for this is the only sustainable way to ensure a supply commensurate with the United States’ science and engineering needs.

Copyright of The Public Interest, No. 153 (Fall 2003), pp. 40-53 © 2003 by National Affairs, Inc.
Michael S. Teitelbaum is program director at the Alfred P. Sloan Foundation and co-author of Political Demography, Demographic Engineering (Berghahn Books, 2001).

Breaking the drug-crime link

Summer 2003

By David Boyum & Mark A. R. Kleiman

The American criminal justice system now spends a significant proportion of its resources enforcing the drug laws. More than 10 percent of all arrests and about 20 percent of all incarcerations involve drug law violations. (Most of the 1.5 million annual drug arrests are for simple possession, while the majority of the 325,000 people behind bars on drug charges are there for dealing.) Drug-related arrests are up 50 percent over the past 10 years, and drug-related incarceration is up 80 percent. And the burden of drug law enforcement falls especially on urban minority communities. Will Brownsberger and Anne Morrison Piehl of Harvard found that the poorest neighborhoods in Massachusetts, with a little more than 10 percent of the state’s population, accounted for 57 percent of state prison commitments for drug offenses; Peter Reuter and his colleagues at RAND estimated that nearly a third of African-American males born in the District of Columbia in the 1960s were charged with selling drugs between the ages of 18 and 24.

Such vigorous enforcement of drug prohibition, while controversial, enjoys substantial support. This is partly because drug laws are seen as protecting people – especially, but not exclusively, children – from drug abuse and addiction. But it is also because drug prohibition and enforcement are widely believed to prevent burglary, robbery, assault, and other predatory crime, a view apparently borne out by the violence that surrounds much drug dealing and the high rates of drug use among active criminals. Because drug trafficking is inherently violent and because illicit drug use is a catalyst for criminal behavior, the argument goes, enforcement efforts to suppress drug selling and drug taking will tend to reduce crime.

But advocates of drug legalization and many other critics of current drug control efforts argue the opposite. They say that drug policy, and not drug abuse, is principally responsible for the observed relationship between drugs and crime. Drug laws and their enforcement make illicit drugs more expensive. Higher drug prices increase nondrug crime because many heavy users commit crimes to finance their habits. Violent crime among dealers is even more obviously attributable to drug prohibition; when alcohol was an illicit drug, alcohol dealers settled their differences with firearms, just as cocaine dealers do today. But two liquor store owners are now no more likely to shoot one another than are two taxi drivers. Eliminate the drug laws, it is said, and most drug-related crime will also disappear.

Each of these views has an element of truth. By creating black markets, prohibition can cause crime. But so too can intoxication and addiction, which would increase if drugs were freely available. Further complicating these links between drugs and crime is the fact that the pharmacological effects of intoxication and addiction, the patterns of use, and the economics of buying and selling differ greatly across drugs. It would not be surprising if some policies were crime-reducing when applied to, say, methamphetamine but crime-increasing with respect to marijuana. And vice versa.

Thus, the question that supporters and opponents of drug prohibition endlessly debate – “Do drugs, or drug laws, cause crime?” – presents a false dichotomy. The answer to both halves of the question is “Yes.” In this essay, we will try to answer a more productive question: What drug policies would work best to minimize predatory crime? The question is difficult to answer, given the complexity of the problem and the uncertainty surrounding key factual questions. A reasonable first step is to review the empirical evidence about the potential links between drugs and such predatory crimes as theft and assault.

The drugs-crime connections

The three most important causal links between drugs and crime are the behavioral effects of drug use, the urgent need of addicts for money to feed their habits, and the side-effects of illicit markets. We will examine each in turn.

Intoxication and addiction, in certain circumstances, appear to encourage careless and combative behavior. The key empirical observation here is that more crimes – and, in particular, more violent crimes – are committed under the influence of alcohol than under the influence of all illegal drugs combined. When state and federal prisoners were asked about the circumstances of the offenses that landed them in prison, 24 percent said they were under the influence of illicit drugs (but not alcohol) at the time, 30 percent cited intoxication with alcohol alone, and 17 percent named drugs and alcohol together. That alcohol, a legal and inexpensive drug, is implicated in so much crime suggests that substance abuse itself, and not just economic motivation or the perverse effects of illicit markets, can play a significant role in crime.

This connection is hardly surprising. Anything that weakens self-control and reduces foresight is likely to increase lawbreaking activities. Most crime doesn’t pay, and being high is one good way to forget that fact. (Driving drunk, for example, rarely stands up to cost-benefit analysis from the drunk driver’s viewpoint, yet many otherwise sensible people engage in it.) Some forms of intoxication also make certain crimes seem more rewarding, as well as making punishments seem less threatening. And most of us know people who become aggressive when drunk or high.

However, the immediate effects of intoxication are not the only, or necessarily the most significant, connection between drug taking and crime. Chronic intoxication impairs school and job performance, makes its victims less able to delay gratification, and damages relationships with friends and family. All of these tend to increase criminality.

The second important link between drugs and crime involves drug users’ need for large amounts of quick cash due to the high costs of maintaining an illegal drug habit. The average heavy heroin or cocaine user consumes about $10,000 to $15,000 worth of drugs per year, a sum that most of them cannot generate legally. In one survey of convicted inmates, 39 percent of cocaine and crack users claimed to have committed their current offense in order to get money to buy drugs.

Nonetheless, the economic links between drug use and income-generating crime go both ways. Drug users commit crimes to obtain drug money – in part because their drug use reduces opportunities for legitimate work – but there is also the “paycheck effect.” Just as some heavy drinkers splurge at the local bar on payday, drug-involved offenders may buy drugs because crime gives them the money to do so. Thus income-generating crime may lead to drug use, as well as the other way around.

The drug trade provides the third connection between drugs and crime. Because selling drugs is illegal, business arrangements among dealers cannot be enforced by law. Consequently, territorial disputes among dealers, employer-employee disagreements, and arguments over the price, quantity, and quality of drugs are all subject to settlement by force. Since dealers have an incentive to be at least as well-armed as their competitors, violent encounters among dealers, or between a dealer and a customer, often prove deadly.

Moreover, perpetrators of inter-dealer or dealer-customer violence are unlikely to be apprehended: Enforcement drives transactions into locations that are hidden from the police, and victims – themselves involved in illegal behavior – are unlikely to complain to the authorities. An increasingly common form of drug-market violence is simple robbery-murder by gangs that are in the drug trade only in the sense that hijackers of truckloads of microchips are in the electronics business.

Still, it is not clear how much of the violence among drug dealers is attributable to the drug trade itself, as opposed to the personal propensities of the individuals employed in it, or to the economic, social, and cultural conditions of drug-plagued communities. Violent drug dealers tend to live and work in poor, inner-city neighborhoods, where violence is common independent of the drug business. On an individual level, a willingness to engage in violence is part of the implicit job description of a drug dealer in many markets. And the Darwinian logic of criminal enterprise suggests that surviving dealers are those who are best able to use violence, intimidation, and corruption to protect their positions.

Even the degree to which the drug trade provides the immediate pretext for violence among drug dealers is hard to pin down. Many violent incidents that are commonly described as drug-related – because they occur between dealers, between members of drug-dealing gangs, or at a known dealing location – turn out on close inspection to have personal rather than commercial motives, involving gang territory, an insult, sexual competition, or just a confrontation between two edgy, armed youngsters.

Finally, the drug trade also contributes to crime by diverting inner-city youths away from school and legitimate employment. Not only does drug dealing introduce them to criminal enterprise, it also increases their risk of substance abuse and weakens their prospects for legitimate work (a recorded conviction and prison time are two obvious reasons), all of which make it more likely that they will engage in criminal activity, both in and out of the drug business.

Distinctions among drugs

Connections between drugs and crime vary across drugs. Of the three leading illicit drugs – marijuana, cocaine, and heroin – marijuana is the least implicated in crime. Marijuana users do not typically become violent, and marijuana habits are less expensive to support than cocaine or heroin habits. And marijuana is bought and sold in markets that, while not free of violence, are less violent than cocaine and heroin markets. This is in part because marijuana users make fewer purchases than do heroin or cocaine users, and in part because much marijuana is sold in residential settings by dealers who do not themselves have expensive habits.

Cocaine, on the other hand, is an expensive drug whose use and distribution are often accompanied by violent behavior. Heroin is less often tied to violence than cocaine, but because of the persistence of heroin addiction and the more regular use of the drug, it is possible that heroin addicts typically commit more income-generating crimes over time than cocaine addicts.

This analysis suggests that a reduction in heroin or cocaine use is likely to mean a bigger decrease in crime than a comparable reduction in cannabis use. As for other illegal substances, methamphetamine would tend to resemble heroin and cocaine in this regard, while MDMA (“Ecstasy”) and diverted pharmaceuticals, including painkillers such as OxyContin and Vicodin and the benzodiazepine tranquilizers such as Valium and Xanax, would look more like cannabis.

Should we legalize?

“Don’t Legalize Drugs” was the title of a recent Wall Street Journal op-ed by John Walters, the current “drug czar.” Normally, an official of cabinet rank would not write an op-ed opposing a policy change that has no serious support in either house of Congress. But the legalization question has a prominence that is wildly disproportionate to its immediate relevance, perhaps because, at first glance, the fit between current drug policies and such treasured ideas as consumer sovereignty, individual liberty, and limited government seems so awkward.

Legalization advocates argue that repealing drug prohibitions would cut crime by eliminating black markets, by removing users and sellers from the criminal world, and by making resources now used to capture and imprison dealers and users available for use in enforcement efforts against predatory crime (and perhaps for crime-reducing drug-treatment programs). Maybe so. But in its rhetorical contest with a hypothetical legalization, prohibition is handicapped by its very reality. Actual policies, as Aristotle first noted about actual regimes, always have flaws that merely imagined ones lack. Planned policies, like many job applicants, look terrific on paper. There is no reason to doubt, however, that the same processes of symbolic politics, bureaucratic management, and interest-group pressure that have distorted the prohibition effort would produce a legalization regime similarly remote from the ideal that we might design in a seminar room.

Whether a change in the legal status of currently illicit drugs would increase or decrease crime would be hard to predict even if the details of legalization proposals were better specified than they usually are. The answer need not be, and probably is not, the same for all currently illicit drugs. The odds are that making marijuana legally available to adults on more or less the same terms as alcohol would reduce crime, because marijuana intoxication, either as a current state or as a habit, does not typically generate either anger or criminal recklessness. Even greatly increased consumption would probably lead to little crime. Yet that conclusion holds only if we assume what seems plausible but is by no means certain: that increased marijuana use would not increase consumption of cocaine or alcohol.

By contrast, the legalization of methamphetamine would undoubtedly increase crime. Methamphetamine markets are still geographically isolated, but that would change with legalization. And unless methamphetamine’s reputation for bringing out aggressive behavior is completely undeserved, the increase in crime related to intoxication and addiction that would result from legalization would easily outweigh any benefits gained from eliminating currently localized illicit markets.

Of course, the main event on the legalization fight-card is not marijuana or methamphetamine but cocaine, which is both the biggest illicit drug market in dollar terms and the one that gets the bulk of law enforcement attention. Would legalizing cocaine reduce crime? No one knows. The effects of cocaine legalization would be so numerous, so profound, and so unpredictable that any strongly expressed opinion on the subject must reflect some mix of insufficient intellectual humility and simple bluff.

We regard cocaine legalization as, on balance, a thoroughly bad idea, but that view is based in part on our belief that better tailored policies could maintain most of the advantages of cocaine prohibition without constantly keeping a couple hundred thousand people behind bars on cocaine charges. Compared to a straight-line projection of current policies – our strong desire for better policies is not accompanied by any optimism about their enactment – cocaine legalization might in fact reduce crime. The crime related to cocaine markets would disappear, but crime due to intoxication or addiction would increase, to some utterly unknown extent. Meanwhile, crime committed to get money to buy cocaine might go up or down, depending on the pricing scheme adopted and the price-elasticity of demand. There is simply no way to know in advance the net effect. And legalization on an “experimental” basis is an oxymoron: The resulting explosion in the size of the markets and the number of addicts would make reinstituting prohibition nearly impossible.

The problem of alcohol

That alcohol alone is responsible for more crime and violence (not to mention deaths due to overdose and disease, fetal damage, accidents, and bad parenting) than all of the illicit drugs combined suggests that legalization is no panacea. Indeed, any serious conversation about altering the legal status of drugs in the name of crime control should start with the topic of alcohol. Until we figure out how to better manage the problems caused by our one currently legal addictive intoxicant, it makes little sense to consider legalizing other drugs.

Economic efficiency dictates that alcohol taxes should be high enough to cover the costs that drinkers impose on others. They are not even close. The right amount would put alcohol taxes at about a dollar per drink, 10 times the current level. Because alcohol consumption, and especially consumption by young drinkers and heavy drinkers, is responsive to price, the effect on alcohol-related crime (including domestic violence and child abuse) would likely be substantial.

To be sure, such a tax would encourage the trafficking and consumption of “moonshine” and other illegal alcohol products, bringing with them black-market crime and dangerously adulterated drinks. But evidence from those foreign countries where alcohol is taxed more highly than in the United States – and from the early 1950s, when U.S. alcohol taxes were, in purchasing-power terms, several times higher than they are now – suggests that these effects would be minor. It appears that the safety and convenience of legal alcohol, and loyalty to legal brands, are sufficient to overwhelm the cost advantage of untaxed products.

A more radical step would be to reduce the legal availability of alcohol to problem drinkers. This could be done by forbidding the sale of alcohol to persons convicted of serious or multiple crimes committed under the influence – in effect, a selective rather than a blanket prohibition. It seems curious that a drunk driver should be deprived of his driver’s license while the “license” to drink is treated as irrevocable. A prohibition on the sale of alcohol to those with a history of alcohol-related misconduct, even with its inevitable imperfections, would almost certainly be crime-reducing, perhaps substantially so. In addition, it might free police resources by reducing the population of chronic inebriates repeatedly arrested for minor public-order offenses. (In 2000, there were an estimated 1.3 million arrests for drunkenness, disorderly conduct, and vagrancy.)

Making prohibition work better

Polarization around the legalization question conceals the enormous range of policy choices available under the current legal regime, which bans the nonmedical use of all intoxicants other than alcohol. A better designed drug control regime could greatly reduce crime, while leaving those prohibitions in place.

Drug control policies are typically categorized into enforcement, prevention, and treatment. Enforcement is the dominant activity of American drug policy. Domestic law enforcement accounts for about half of the federal drug control budget; including state and local activity, enforcement’s share is about three-quarters of total drug control spending.

Until recently, academic analyses of drug enforcement and crime viewed the price of drugs as a key factor. Those analyses turned out to be wrong on two counts. First, it was assumed that tougher enforcement could substantially increase the price of drugs. The theory was that when enforcement threatened drug traffickers and dealers with the risks of arrest, imprisonment, and the loss of their drugs, money, and physical assets, sellers would charge higher prices as compensation for those risks. It was also assumed that the demand for drugs was, in the short run, relatively unresponsive to price (“inelastic,” in economics terminology), meaning that an enforcement-led price increase would result in greater total expenditures on drugs. The implications of these assumptions were that enforcement would indirectly lead users to commit more crimes for drug money, and that market-related violence would also rise, as dealers battled over a larger revenue pool. It was in principle possible that the fall in abuse-related crime stemming from slightly reduced consumption would be greater than the combined increases in crimes committed by dealers seeking competitive advantage and by users looking for drug money. But the arithmetic of relatively inelastic demand (again, in the short run) was discouraging. (In the long run, higher prices would tend to reduce the incidence of addiction and increase the rate of recovery, thus reducing crime.)
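
The role that price elasticity plays in this reasoning can be made concrete with a small sketch. The constant-elasticity demand curve, the elasticity values, and the 20 percent price increase below are illustrative assumptions, not estimates from the research discussed here; the sketch only shows why the direction of the change in total user spending hinges on whether demand is inelastic or elastic.

    # Illustrative arithmetic: how total user spending (price * quantity) responds
    # to an enforcement-led price increase under a constant-elasticity demand
    # curve Q = k * P ** (-elasticity). All numbers are hypothetical.

    def spending_change(elasticity, price_increase=0.20):
        """Fractional change in total spending when price rises by `price_increase`."""
        price_factor = 1 + price_increase
        quantity_factor = price_factor ** (-elasticity)
        return price_factor * quantity_factor - 1

    # Inelastic demand (elasticity 0.5): a 20% price rise raises spending by ~9.5%,
    # implying users need more, not less, money for drugs.
    print(f"{spending_change(0.5):+.1%}")

    # Elastic demand (elasticity 1.5): the same price rise cuts spending by ~8.7%.
    print(f"{spending_change(1.5):+.1%}")

On the old assumption of inelastic demand, higher prices meant a bigger pool of drug money to be raised through crime and fought over by dealers; the findings described below reverse that arithmetic.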

Today, these underlying assumptions look shaky. Recent research indicates that the demand for cocaine and heroin is far more responsive to price than previously assumed, so much so that the old critique of drug law enforcement as counterproductive in terms of predatory crime seems to have been mistaken. If we remained confident that more, or better, drug law enforcement could substantially raise prices, boosting such enforcement in the cocaine and heroin markets would now appear to be an effective, and perhaps cost-effective, crime-control measure.

However, confidence in the ability of enforcement to raise prices has been slipping, even as appreciation of the value of such price increases has been growing. Over the past two decades, a dramatic increase in enforcement efforts has been accompanied by an equally dramatic drop in cocaine and heroin prices. In the 1990s, for example, the number of incarcerated drug offenders roughly doubled, while inflation-adjusted cocaine and heroin prices fell by half. Why cocaine and heroin prices have fallen in the face of increased punishment for drug law violations is something of a puzzle, but the easy replacement of dealers who have been removed from the drug trade is surely part of the explanation.

Suppose that a drug dealer is arrested and imprisoned. That, in effect, creates a job opening for a new dealer. When the first dealer is released from prison and, as is likely, reenters the drug trade, there are now two dealers where once there was one. Writ large, this story suggests that conventional drug enforcement, by imprisoning hundreds of thousands of dealers and thereby drawing hundreds of thousands of others into the drug trade, may significantly increase the long-run supply of dealers. The logical result is downward pressure on prices.

Drug enforcement for crime control

That analysis also provides important backing for the view that retail-oriented enforcement strategies should concentrate on selectively disrupting markets rather than merely making many arrests. The purpose of selective market disruption is to pick out especially violent segments of the market – defined by drug, geography, dealing style, and the identities of the dealing organizations involved – and make it difficult for buyers and sellers in those segments to connect. In particular, moving street markets indoors and disrupting “drug house” operations by pressuring sellers to adopt more discreet dealing strategies can pay big crime-control dividends.

Open street markets present numerous opportunities for conflict and violence – disputes over turf, disputes over customers, disputes between dealers and police, and simple robbery. Indoor markets are less disruptive of neighborhoods and less prone to violence. As Harvard researcher David Kennedy, a leading advocate of arrest-minimizing enforcement strategies, puts it: “All drug markets present trouble for communities, but street drug markets are the worst trouble of all. Eliminating them would be a huge stride toward quelling drug-related violence and disorder.” A notorious crack house, with customers pulling up at all hours of the day and night, attracts disorder and violence. In contrast, a dealer with a cell phone and a pizza-delivery truck – the kind that became more common in New York after the 1990s drug crackdown made flagrant dealing difficult – damages primarily the buyer and those close to the buyer; the transaction itself is fairly innocuous.

Other things being equal, it is better to deploy enforcement resources against drugs whose markets are small and growing than against drug markets that are large and stable or shrinking. Since the effectiveness of a given level of enforcement pressure on a market is inversely proportional to the size of the market, enforcement against a small market should have more impact than enforcement against a large market. Moreover, drug use often spreads in epidemic fashion, much like a communicable disease. New users are particularly “contagious,” in the sense that they are most likely to initiate others. Long-term users, who are more likely to know the pitfalls of prolonged use and often present an unappealing picture of the consequences of addiction, are far less contagious. Therefore, preventing the initiation of new users has a multiplier effect.
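
The “multiplier effect” of preventing initiation can be illustrated with a toy calculation. This is not a model the authors propose; it simply assumes that each new user would, on average, go on to initiate r further users (with r below one), so that stopping a single initiation ultimately prevents a geometric chain of new users.

    # Toy illustration of the multiplier effect of preventing one new initiation,
    # assuming (hypothetically) each new user initiates r others on average.
    # Total prevented = 1 + r + r**2 + ... = 1 / (1 - r) for r < 1.

    def initiations_prevented(r):
        """Users ultimately prevented by stopping a single initiation."""
        assert 0 <= r < 1, "the chain only converges for r < 1"
        return 1 / (1 - r)

    print(initiations_prevented(0.0))   # 1.0 - no contagion, no multiplier
    print(initiations_prevented(0.5))   # 2.0 - one further initiation averted downstream
    print(initiations_prevented(0.75))  # 4.0 - the multiplier grows quickly as r rises

The more contagious the early users, the larger the payoff from intervening before a market matures, which is the logic behind targeting small, growing markets.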

Whether enforcement can intervene early enough in an epidemic to make a difference is an open question. One barrier is informational: It’s hard to identify epidemics until they’re already well underway. A second obstacle is operational: Shutting down distribution networks requires a degree of intelligence that is difficult to muster before a rising drug has developed stable distribution networks. As Jon Caulkins of Carnegie Mellon University has suggested, the early stages of distribution involve social rather than commercial networks; these are difficult to penetrate, particularly when the early users have little contact with the criminal justice system. A third impediment is organizational: Mature markets yield more and better cases per unit of enforcement effort than emerging markets, a fact that discourages individual agents and agencies from shifting attention early. These factors make the temptation to “fight the last war” nearly overwhelming. Still, despite all these difficulties, it’s hard to see how shifting enforcement resources to emerging drug threats (at least to emerging threats with strong potential links to nondrug crime) would be anything other than beneficial.

Crime prevention could also be enhanced by reforming sentencing policy. Given limited prison capacity, it makes sense to give priority to housing the most active and violent offenders. Current federal policy is perhaps the most prominent example of the wrong approach. Under federal law, relatively minor participants in drug trafficking, some with no prior arrests, frequently face long mandatory prison terms. According to a Department of Justice analysis, in the early 1990s (when drug offenders accounted for a smaller share of the federal prison population than today) 21 percent of all federal prisoners were “low-level drug law violators” with no record of violence or prior incarceration. Of these, 42 percent were drug couriers (or “mules”), rather than dealers or principals in trafficking organizations. Since those prison cells could instead be holding more dangerous offenders, imposing long mandatory sentences on minor drug offenders tends to increase predatory crime.

Prevention and treatment

Prevention programs, aimed at reducing experimentation and occasional use primarily by children and adolescents, enjoy strong support across the political spectrum. Even modestly successful prevention programs are unambiguously beneficial in reducing crime. They offer the benefit of reduced drug use and reduced drug dealing without any of the unwanted side-effects of enforcement.

That’s the good news about prevention. The bad news is that few prevention programs have demonstrated that they can consistently reduce the number of their subjects who use drugs. The single biggest and best known program, Drug Abuse Resistance Education (DARE), consistently performs poorly in evaluations. The positive results of some pilot programs have often proven difficult to replicate in other settings. And the link between reducing early drug initiation – the usual measure of effectiveness applied to prevention programs – and preventing future addiction and crime is assumed, rather than demonstrated. According to the most thorough research, even the best prevention programs are only about as cost-effective as typical enforcement efforts in reducing cocaine consumption, and far less cost-effective than ordinary drug-treatment programs. And that relatively unfavorable assessment may well be optimistic, since such research arguably understates the educational cost of having children spend time in drug-prevention sessions instead of English or math class.

One of the problems with prevention programs is that the majority of the participants are not likely candidates for developing serious drug habits. By contrast, treatment deals with users who have established drug problems. Successful or even partially successful treatment of drug-involved offenders is an unequivocal winner from a crime-control perspective. The criminal activity of addict-offenders seems to rise and fall in step with their drug consumption, and that relationship holds whether reductions in drug use are unassisted or are the product of formalized treatment, and whether participation in treatment is voluntary or coerced. Moreover, a treatment-induced reduction in demand does not bring with it the side-effects of an enforcement-induced reduction – higher drug prices and the depletion of criminal justice resources. Since many drug-involved offenders sell drugs in addition to using them, and some may exit the drug trade if they gain control over their own habits, treatment has supply-reduction as well as demand-reduction benefits.

While most popular and much scholarly discussion of treatment focuses on rates of “success” – conventionally defined as complete abstinence one year later – treatment can have powerful crime-control benefits even if it does not permanently end drug use. Findings from the Treatment Outcome Prospective Study (TOPS) – to date the most comprehensive study of treatment effectiveness – indicate that most of the treatment-related reduction in criminal activity occurs during treatment. Indeed, the within-treatment crime reductions are sufficient to justify the cost of treating criminally active users, even assuming no effect on post-treatment behavior.

Thus the inadequate availability and poor quality of substance-abuse treatment constitute a missed opportunity for crime control. Advocates of drug treatment, including the providers themselves, are understandably frustrated and outraged that, in a political atmosphere where the punitive side of the crime-control effort enjoys widespread support and growing funding, drug treatment remains neglected and underfunded.

A special source of outrage is the underprovision and overregulation of opiate maintenance therapy. In terms of crime-control efficacy and other measurable improvements in the behavior and well-being of its clients, methadone treatment is by far the most dramatically successful kind of drug treatment. Unlike most forms of treatment, it has little trouble attracting and retaining clients. Yet, because it does not promise or even attempt to cure addiction, methadone, along with other maintenance therapies, remains bitterly controversial. Today, it is so buried under various regulatory burdens that only about one-eighth of U.S. heroin addicts are currently enrolled in methadone programs. But recent regulatory changes that make methadone more widely available and that will ease the approval of buprenorphine, another maintenance agent, may mark the start of a new era for treatment.

Drug testing and sanctions

Several factors limit the capacity of drug treatment, whether voluntary or coerced, to reduce crime. These include the limited availability of treatment, deficiencies in technique and quality, the reluctance of many drug-involved offenders to undergo treatment, and the administrative and procedural difficulties of coerced treatment. It is commonly thought that the limits on treatment are also the limits on the criminal justice system’s ability to influence the drug taking of those under its jurisdiction. This view, however, assumes that most users of expensive illicit drugs suffer from clinically diagnosable substance-abuse or dependency disorders, that they have no volitional control over their drug taking, and that all such disorders are invariably chronic and go into remission only with professional intervention. Happily, not one of these propositions is true.

Many users, even frequent users, of cocaine, heroin, and methamphetamine do not meet clinical criteria for substance abuse or dependency. (“Substance abuse” as a legal matter merely means use of a prohibited drug, or use of a prescription drug for nonmedical reasons or without a valid prescription; “substance abuse” as a medical matter is defined by criteria such as escalation of dosage and frequency, narrowing of the behavioral repertoire, loss of control over use, and continued use despite adverse consequences.) Even for those who do meet these clinical criteria, actual consumption is not a constant but rather varies with the availability of the drug and the consequences, especially the more-or-less immediate consequences, of taking it. Incentives influence drug use, even within the treatment context. Monitoring drug use by urine testing enhances treatment outcomes, as does the provision of even very small rewards for compliance. Moreover, while the minority of substance-abusing or substance-dependent individuals who suffer from chronic forms of those disorders makes up a large proportion of the population in treatment, the most common pattern of substance abuse is a single active period followed by “spontaneous” (i.e., not treatment-mediated) remission.

All this being the case, persuading or forcing drug-using offenders into treatment is not the only way to reduce their drug consumption. An alternative to requiring treatment is to mandate desistance from the use of illicit drugs for persons on probation, parole, or pretrial release. Desistance can be enforced by frequent drug tests, with predictable and nearly immediate sanctions for each missed test or incident of detected drug use. While in the long term drug-involved offenders who remain drug-involved are likely to be rearrested and eventually incarcerated, those long-term and probabilistic threats, even if the penalties involved are severe, may be less effective than short-term, but more certain, sanctions.

For those offenders whose drug use is subject to their volitional control, testing-and-sanctions programs can reduce the frequency of drug use. Those unable to control themselves, even under threat, will be quickly identified, and in a way that is likely to break through the denial that often characterizes substance-abuse disorders. Identifying these persons will enable the system to direct treatment resources to those most in need of them. It will also help create a “therapeutic alliance” between treatment providers and clients by giving clients strong incentives to succeed, as opposed merely to wanting the therapists off their backs.

Since recent arrestees account for most of the cocaine and heroin sold in the United States, and therefore for most of the revenues of the illicit markets, an effective testing-and-sanctions program would have a larger impact on the volume of the illicit trade – and presumably on the side-effects it generates, including the need for drug law enforcement and related imprisonment – than any other initiative that could be undertaken. By one estimate, a national program of this type could reasonably be expected to shrink total hard-drug volumes by 40 percent.

Keys to success

To succeed, such a program would need money and facilities for drug testing, probation caseloads small enough to be adequately monitored, and either judges willing to sanction predictably or probation departments with the authority to impose administrative sanctions. It would require police officers – or probation officers with arrest authority – to seek out absconders, the capability to carry out sanctions (for example, supervisors for “community service” labor, and confinement capacity appropriate for one- and two-day stays), and treatment capacity for those who proved unable to quit without professional help. Offering rewards for compliance, if only the remission of previously imposed fines, in addition to punishments for noncompliance would improve success rates and reduce overall costs.

Another key to success is the immediacy and certainty of the sanctions and rewards. This in turn depends on keeping the population assigned to the program small compared to the resources available, until the program has had a chance to establish the credibility of its threats. That credibility will minimize violation rates and thus the need for the actual imposition of sanctions. Maryland’s well-publicized venture into testing and sanctions, under the rubric “Breaking the Cycle,” ran aground on its failure to establish sanctions credibility, the result of an overly large target population, lack of administrative follow-through, and judicial reluctance to punish detected drug use.

Once up and running, testing-and-sanctions programs can pay great dividends. By one calculation, a successful program can operate on just over $3,000 per offender per year, or about 15 percent of the cost of imprisonment. Thus a national program, covering all offenders with identifiable hard-drug problems, could be implemented for between $6 and $8 billion per year, a sum that would be more than recovered by the consequent reductions in imprisonment for both the drug-involved offenders and the drug dealers who would have to leave the business as their best customers were denied them.

While there is clear evidence that testing and sanctions, even imperfectly administered, can substantially reduce drug use among offenders, only a full-scale trial of the idea would tell us whether the promises of its advocates are overblown. But the big question is not whether offenders would respond to the program, but whether our creaky criminal justice machinery is capable of putting it into practice.

The need for cooperation among multiple agencies – state, county, and municipal; judicial, administrative, and nongovernmental – greatly increases the difficulty of successful implementation. Formulaic sanctioning requires limiting judicial discretion, either by getting the judges out of the process entirely or by persuading judges to put their actions on autopilot. The former is unpopular with judges, who remain quite influential in policymaking, and the latter is difficult to bring about. As one researcher remarked after evaluating such a program, changing the behavior of addicts is easy, but changing the behavior of judges is hard.

Ideologically, the testing-and-sanctions approach tends to be too tough-sounding and insufficiently therapeutic to appeal to treatment advocates – who are morally outraged at the notion that someone with a disease could be punished for manifesting one of its symptoms – and yet not draconian enough to excite the drug warriors. The latter group would prefer to use drug test results as the basis for revoking probation or parole status and incarcerating the offender for months rather than hours, an approach inconsistent with both certainty and swiftness in sanctioning.

Whether these barriers can be overcome remains uncertain, but current evidence is not encouraging. A study by the California Policy Research Center found that California probationers, even when nominally subject to drug testing, faced only sporadic tests and highly inconsistent sanctions. In Maryland, where Lieutenant Governor Kathleen Kennedy Townsend attempted a full-scale implementation of the idea, resistance from the courts, the probation departments, and treatment advocates and their legislative allies caused the sanctions portion of the program to falter badly. Despite evidence that testing had reduced offender drug use substantially, journalistic accounts stressed the program’s difficulties, and the resulting bad publicity probably contributed to Townsend’s loss in last year’s gubernatorial election. At the national level, President Bush, like his predecessor, has embraced the idea rhetorically but failed to follow through in practice.

A crime-minimizing drug policy

The testing-and-sanctions idea is the only single proposal with the potential to reduce drug-related crime swiftly and dramatically. Unfortunately, that promise depends on the mobilization of more political and administrative muscle than may in fact be available. But other policy changes still offer the possibility of significant reductions in drug-related crime – raising taxes on alcohol, forbidding alcohol sales to the minority of drinkers most prone to breaking the law under the influence, expanding drug treatment and in particular opiate maintenance therapy, and redirecting drug law enforcement, prosecution, and sentencing to minimize trafficking-related violence by targeting the most flagrant markets and the most violent dealers. No drug policy can deliver a drug-free society. But smarter policies can give us a safer one.

David Boyum is a public policy and management consultant in New York City and co-author of What the Numbers Say (Broadway Books, 2003).

Mark A. R. Kleiman is professor of public policy at UCLA and author of Against Excess: Drug Policy for Results (Basic Books, 1993).

A genealogy of anti-Americanism

Summer 2003

By James W. Ceaser

America’s rise to the status of the world’s premier power, while inspiring much admiration, has also provoked widespread feelings of suspicion and hostility. In a recent and widely discussed book on America, Après l’Empire, credited by many with having influenced the position of the French government on the war in Iraq, Emmanuel Todd writes: “A single threat to global instability weighs on the world today: America, which from a protector has become a predator.” A similar mistrust of American motives was clearly in evidence in the European media’s coverage of the war. To have followed the war on television and in the newspapers in Europe was to have witnessed a different event than that seen by most Americans. During the few days before America’s attack on Baghdad, European commentators displayed a barely concealed glee – almost what the Germans call Schadenfreude – at the prospect of American forces being bogged down in a long and difficult engagement. Max Gallo, in the weekly magazine Le Point, drew the typical conclusion about American arrogance and ignorance: “The Americans, carried away by the hubris of their military power, seemed to have forgotten that not everything can be handled by the force of arms … that peoples have a history, a religion, a country.”

Time will tell, of course, if Gallo was anywhere near correct in his doubts about U.S. policy. But the haste with which he arrived at such sweeping conclusions leads one to suspect that they were based far more on a pre-existing view of America than on an analysis of the situation at hand. Indeed, they were an expression of one of the most powerful modes of thought in the world today: anti-Americanism. According to the French analyst Jean-François Revel, “If you remove anti-Americanism, nothing remains of French political thought today, either on the Left or on the Right.” Revel might just as well have said the same thing about German political thought or the thought of almost any Western European country, where anti-Americanism reigns as the lingua franca of the intellectual class.

The symbolic America

Anti-Americanism rests on the singular idea that something associated with the United States, something at the core of American life, is deeply wrong and threatening to the rest of the world. This idea is certainly nothing new. Over a half-century ago, the novelist Henry de Montherlant put the following statement in the mouth of one of his characters (a journalist): “One nation that manages to lower intelligence, morality, human quality on nearly all the surface of the earth, such a thing has never been seen before in the existence of the planet. I accuse the United States of being in a permanent state of crime against humankind.” America, from this point of view, is a symbol for all that is grotesque, obscene, monstrous, stultifying, stunted, leveling, deadening, deracinating, deforming, and rootless.

It is tempting to call anti-Americanism a stereotype or a prejudice, but it is much more than that. A prejudice, at least an ordinary one, is a shortcut usually having some basis in experience that people use to try to grasp reality’s complexities. Although often highly erroneous, prejudices have the merit that those holding them will generally revisit and revise their views when confronted with contrary facts. Anti-Americanism, while having some elements of prejudice, has been mostly a creation of “high” thought and philosophy. Some of the greatest European minds of the past two centuries have contributed to its making. The concept of America was built in such a way as to make it almost impervious to refutation by mere facts. The interest of these thinkers was not always with a real country or people, but more often with general ideas of modernity, for which “America” became the name or symbol. Indeed, many who played a chief part in discovering this symbolic America never visited the United States or showed much interest in its actual social and political conditions. The identification of America with a general idea or concept has gone so far as to have given birth to new words that are treated nowadays as normal categories of thought, such as “Americanization” or “Americanism.” (By contrast, no one speaks of Venezuelanization or New Zealandism.) Americanization today, for example, is almost the perfect synonym for the general concept of “globalization,” differing only in having a slightly more sinister face.

Although anti-Americanism is a construct of European thought, it would be an error to suppose that it has remained confined to its birthplace. On the contrary, over the last century anti-Americanism has spread out over much of the globe, helping, for example, to shape opinion in pre-World War II Japan, where many in the elite had studied German philosophy, and to influence thinking in Latin American and African countries today, where French philosophy carries so much weight. Its influence has been considerable within the Arab world as well. Recent accounts of the intellectual origins of contemporary radical Islamic movements have demonstrated that their views of the West and America by no means derive exclusively from indigenous sources, but have been largely drawn from various currents of Western philosophy. Western thought is at least in part responsible for the innumerable fatwahs and the countless jihads that have been pronounced against the West. What has been attributed to a “clash of civilizations” has sometimes been no more than a facet of internecine intellectual warfare, conducted with the assistance of mercenary forces recruited from other cultures. It is vitally important that we understand the complex intellectual lineage behind anti-Americanism. Our aim should be to undo the damage it has wrought, while not using it as an excuse to shield this country from all criticism.

Degeneracy and monstrosity

Developed over a period of more than two centuries by many diverse thinkers, the concept of America has involved at least five major layers or strata, each of which has influenced those that succeeded it. The initial layer, found in the scientific thought of the mid-eighteenth century, is known as the “degeneracy thesis.” It can be conceived of as a kind of prehistory of anti-Americanism, since it occurred mostly before the founding of the United States and referred not just to this country but to all of the New World. The thesis held that, due chiefly to atmospheric conditions, in particular excessive humidity, all living things in the Americas were not only inferior to those found in Europe but also in a condition of decline. An excellent summary of this position appears, quite unexpectedly, in The Federalist Papers. In the midst of a political discussion, Publius (Alexander Hamilton) suddenly breaks in with the comment: “Men admired as profound philosophers gravely asserted that all animals, and with them the human species, degenerate in America — that even dogs cease to bark after having breathed awhile in our atmosphere.” The oddity of this claim does not belie the fact that it was regarded for a time as cutting-edge science. As such, it merited lengthy responses from two of America’s most notable scientific thinkers, Benjamin Franklin and Thomas Jefferson. In Jefferson’s case, the better part of his only book, Notes on the State of Virginia, consists of a detailed response to the originator of this thesis and the leading biologist of the age, the Count de Buffon. The interest of Franklin and Jefferson in refuting this thesis went beyond that of pure science to practical politics. Who in Europe would be willing to invest in and support the United States if America were regarded as a dying continent?

Although Buffon was its originator, the most earnest and best known proponent of the degeneracy thesis at the time was Cornelius de Pauw, whom Hamilton cited for the aforementioned claim of canine quietude. Pauw’s three-volume study of America, which was widely regarded as the book on the subject, begins with the observation that “it is a great and terrible spectacle to see one half of the globe so disfavored by nature that everything found there is degenerate or monstrous.” (The attribution of monstrosity, seemingly in tension with the more general characteristic of contraction, was thought to apply to many of the lower species, such as lizards, snakes, reptiles, and insects, producing a still more sinister picture of America.) It was Pauw who insisted as well on the inevitability of an ongoing and active degeneration in America, a point on which Buffon equivocated: No sooner did the Europeans debark from their ships than they began the process of decline, physical and mental. America, accordingly, would never be able to produce a political system or culture of any merit. Paraphrasing a sentence of Pauw’s, the great Encyclopedist Abbé Raynal famously opined: “America has not yet produced a good poet, an able mathematician, one man of genius in a single art or a single science.”

Rationalistic illusions

The degeneracy thesis could not in the end stand up to Franklin’s and Jefferson’s careful empirical criticisms, which demonstrated that nothing, on the surface at least, was degenerating at an unusual rate in America. Nature, as Jefferson so felicitously put it, was the same on both sides of the Atlantic. But what their responses could not entirely refute was the contention that the quality of life and the political system of America were inferior. Precisely this claim lay at the core of the second layer of anti-American thought, developed by a number of romantic thinkers in the early part of the nineteenth century. These thinkers placed degeneracy – for almost the same language was used – on a new theoretical foundation, arguing that it resulted not from the workings of the physical environment but from the intellectual ideas on which the United States had been founded. Anti-Americanism now became what it has remained ever since, a doctrine applicable exclusively to the United States, and not Canada or Mexico or any other nation of the New World. Many who complain bitterly that the United States has unjustifiably appropriated the label of America have nonetheless gladly allowed that anti-Americanism should refer only to the United States.

The romantics’ interpretation of America owed something to the French Revolution, which inspired loathing among conservative philosophers such as Edmund Burke and Joseph de Maistre. The French Revolution was seen as an attempt to remake constitutions and societies on the basis of abstract and universal principles of nature and science. The United States, as the precursor of the French Revolution, was often implicated in this critique. These philosophers’ major claim was that nothing created or fashioned under the guidance of universal principles or with the assistance of rational science – nothing, to use The Federalist’s words, constructed chiefly by “reflection and choice” – was solid or could long endure. Joseph de Maistre went so far as to deny the existence of “man” or “humankind,” such as in the Declaration of Independence’s statement that “all men are created equal.” According to Maistre, “There is no such thing in this world as man; I have seen in my life French, Italians, and Russians … but as for man, I declare that I have never met one in my life; if he exists, it is entirely without my knowledge.” Not only was the Declaration based on flawed premises, but so too was the U.S. Constitution with its proposition that men could establish a new government. “All that is new in [America’s] constitution, all that results from common deliberation,” Maistre warned, “is the most fragile thing in the world: one could not bring together more symptoms of weakness and decay.”

By the early nineteenth century, as the principal surviving society based on an Enlightenment notion of nature, America became the target of many romantic thinkers. Instead of human reason and rational deliberation, romantic thinkers placed their confidence in the organic growth of distinct and separate communities; they put their trust in history. Now, merely by surviving – not to mention by prospering – the United States had refuted the charges of the inherent fragility of societies founded with the aid of reason. But the romantics went on to charge that America’s survival was at the cost of everything deep or profound. Nothing constructed on the thin soil of Enlightenment principles could sustain a genuine culture. The poet Nikolaus Lenau, sometimes referred to as the “German Byron,” provided the classic summary of the anti-American thought of the romantics: “With the expression Bodenlosigkeit [rootlessness] I think I am able to indicate the general character of all American institutions; what we call Fatherland is here only a property insurance scheme.” In other words, there was no real community in America, no real Volk. America’s culture “had in no sense come up organically from within.” There was only a dull materialism: “The American knows nothing; he seeks nothing but money; he has no ideas.” Then came Lenau’s haunting image, reminiscent of Pauw’s picture of America: “the true land of the end, the outer edge of man.”

Even America’s vaunted freedom was seen by many romantics as an illusion. American society was the very picture of a deadening conformity. The great romantic poet Heinrich Heine gave expression to this sentiment: “Sometimes it comes to my mind/To sail to America/To that pig-pen of Freedom/Inhabited by boors living in equality.” America, as Heine put it in his prose writing, was a “gigantic prison of freedom,” where the “most extensive of all tyrannies, that of the masses, exercises its crude authority.”

The specter of racial impurity

A third stratum of thought in the development of anti-Americanism was the product of racialist theory, first systematically elaborated in the middle of the nineteenth century. To understand today why this thought qualifies as anti-American requires, of course, allowing oneself to think in the framework of another period. The core of racialist theory was the idea that the various races of man – with race understood to refer not only to the major color groups but to different subgroups such as Aryans, Slavs, Latins, and Jews – are hierarchically arranged in respect to such important qualities as strength, intelligence, and courage. A mixing of the races was said to be either impossible, in the sense that it could not sustain biological fecundity; or, if fecundity was sustainable, that it would result in a leveling of the overall quality of the species, with the higher race being pulled down as a result of mingling with the lower ones.

The individual most responsible for elaborating a complete theory of race was Arthur de Gobineau, known today as the father of racialist thinking. Gobineau’s one-thousand-page opus, Essay on the Inequality of the Human Races, focused on the fate of the Aryans, whom he considered the purest and highest of all the races. His account was deeply pessimistic, as he argued that the Aryans were allowing themselves to be bred out of existence in Europe. America became an important focus of his analysis since, as he explained, many at the time championed America as the Great White Hope, the nation in which the Aryans (Anglo-Saxons and Nordics) would reinvigorate their stock and reassert their rightful dominance in the world. In this view, while America’s formal principle was democracy, its real constitution was that of Anglo-Saxon racial hegemony. But Gobineau was convinced that this hope was illusory. The universalistic idea of natural equality in America was in fact promoting a democracy of blood, in which the very idea of “race,” which was meant to be a term of distinction, was vanishing. Europe was dumping its “garbage” races into America, and these had already begun to mix with the Anglo-Saxons.

With notable perspicacity, Gobineau foresaw the Tiger Woods phenomenon. The natural result of the democratic idea, he argued, was amalgamation. America was creating a new “race” of man, the last race, the human race – which was no race at all. Gobineau modeled his system on Hegel’s philosophy of history, substituting blood for Spirit as the active motor of historical movement. The elimination of race marked the end of history. It presented – and here one could, in his view, see America’s future – a lamentable spectacle of creatures of the “greatest mediocrity in all fields: mediocrity of physical strength, mediocrity of beauty, mediocrity of intellectual capacities – we could almost say nothingness.”

Racialist ideas persisted throughout the nineteenth century and affected many of the social sciences, especially anthropology, a discipline that remains so traumatized by its origins that even today it cannot treat questions of race without indulging in paroxysms of guilt. The extreme of racialist thinking in the early twentieth century served as the foundation of Nazism. Today, the substance of the racialist philosophy is rejected except by a few elements on the extreme right. Yet traces of it have managed to find their way, often unconsciously, into subsequent theorizing about America. The European anti-American Left today has been divided in its criticisms of race in relation to America. Some follow the analysis, though not the evaluations, of Gobineau, arguing that the universal principles in the American experience, when they have not produced the brutal repression of the “Other” (the Indian and African), have fostered blandness and homogeneity. Alternatively, it is sometimes said that the process of amalgamation is not proceeding rapidly enough, especially in regard to African Americans. America is tardy and hypocritical in its promise to eliminate race as a basis of social and political judgment.

The empire of technology

The fourth stratum in the construction of anti-Americanism was created during the era of heavy industrialization in the late nineteenth and early twentieth centuries. America was now associated with a different kind of deformation, this time in the direction of the gigantesque and the gargantuan. America was seen as the source of the techniques of mass production and of the methods and the mentality that supported this system. Nietzsche was an early exponent of this view, arguing that America sought the reduction of everything to the calculable in an effort to dominate and enrich: “The breathless haste with which they [the Americans] work – the distinctive vice of the new world – is already beginning ferociously to infect old Europe and is spreading a spiritual emptiness over the continent.” Long in advance of Hollywood movies or rap music, the spread of American culture was likened to a form of disease. Its progress in Europe seemed ineluctable. “The faith of the Americans is becoming the faith of the European as well,” Nietzsche warned.

It was Nietzsche’s disciples, however, who transformed the idea of America into an abstract category. Arthur Moeller van den Bruck, best known for having popularized the phrase “The Third Reich,” proposed the concept of Amerikanertum (Americanness), which was to be “not geographically but spiritually understood.” Americanness marks “the decisive step by which we make our way from a dependence on the earth to the use of the earth, the step that mechanizes and electrifies inanimate material and makes the elements of the world into agencies of human use.” It embraces a mentality of dominance, use, and exploitation on an ever-expanding scale, or what came to be called the mentality of “technologism” (die Technik): “In America, everything is a block, pragmatism, and the national Taylor system.” Another author, Paul Dehns, entitled an article, significantly, “The Americanization of the World.” Americanization was defined here in the “economic sense” as the “modernization of methods of industry, exchange, and agriculture, as well as all areas of practical life,” and in a wider and more general sense as the “uninterrupted, exclusive and relentless striving after gain, riches and influence.”

Soullessness and rampant consumerism

The fifth and final stratum in the construction of the concept of anti-Americanism – and the one that still most powerfully influences contemporary discourse on America – was the creation of the philosopher Martin Heidegger. Like his predecessors in Germany, Heidegger once offered a technical or philosophical definition of the concept of Americanism, apart, as it were, from the United States. Americanism is “the still unfolding and not yet full or completed essence of the emerging monstrousness of modern times.” But Heidegger in this case clearly was less interested in definitions than in fashioning a symbol – something more vivid and human than “technologism.” In a word – and the word was Heidegger’s – America was katastrophenhaft, the site of catastrophe.

In his earliest and perhaps best known passages on America, Heidegger in 1935 echoed the prevalent view of Europe being in a “middle” position:

Europe lies today in a great pincer, squeezed between Russia on the one side and America on the other. From a metaphysical point of view, Russia and America are the same, with the same dreary technological frenzy and the same unrestricted organization of the average man.

Even though European thinkers, as the originators of modern science, were largely responsible for this development, Europe, with its pull of tradition, had managed to stop well short of its full implementation. It was in America and Russia that the idea of quantity divorced from quality had taken over and grown, as Heidegger put it, “into a boundless et cetera of indifference and always the sameness.” The result in both countries was “an active onslaught that destroys all rank and every world creating impulse…. This is the onslaught of what we call the demonic, in the sense of destructive evil.”

America and the Soviet Union comprised, one might say, the axis of evil. But America, in Heidegger’s view, represented the greater and more significant threat, as “Bolshevism is only a variant of Americanism.” In a kind of overture to the Left after the Second World War, Heidegger spoke of entering into a “dialogue” with Marxism, which was possible because of its sensitivity to the general idea of history. A similar encounter with Americanism was out of the question, as America was without a genuine sense of history. Americanism was “the most dangerous form of boundlessness, because it appears in a middle class way of life mixed with Christianity, and all this in an atmosphere that lacks completely any sense of history.” When the United States declared war on Germany, Heidegger wrote: “We know today that the Anglo Saxon world of Americanism is resolved to destroy Europe…. The entry of America into this world war is not an entry into history, but is already the last American act of American absence of historical sense.”

In creating this symbol of America, Heidegger managed to include within it many of the problems or maladies of modern times, from the rise of instantaneous global communication, to an indifference to the environment, to the reduction of culture to a commodity for consumption. He was especially interested in consumerism, which he thought was emblematic of the spirit of his age: “Consumption for the sake of consumption is the sole procedure that distinctively characterizes the history of a world that has become an unworld…. Being today means being replaceable.” America was the home of this way of thinking; it was the very embodiment of the reign of the ersatz, encouraging the absorption of the unique and authentic into the uniform and the standard. Heidegger cited a passage from the German poet Rainer Maria Rilke:

Now is emerging from out of America pure undifferentiated things, mere things of appearance, sham articles…. A house in the American understanding, an American apple or an American vine has nothing in common with the house, the fruit, or the grape that had been adopted in the hopes and thoughts of our forefathers.

Following Nietzsche, Heidegger depicted America as an invasive force taking over the soul of Europe, sapping it of its depth and spirit: “The surrender of the German essence to Americanism has already gone so far as on occasion to produce the disastrous effect that Germany actually feels herself ashamed that her people were once considered to be ‘the people of poetry and thought.’” Europe was almost dead, but not quite. It might still put itself in the position of being ready to receive what Heidegger called “the Happening,” but only if it were able to summon the interior strength to reject Americanism and push it back to the other hemisphere.

Heidegger’s political views are commonly deplored today because of his early and open support of Nazism, and many suppose that his influence on subsequent political thought in Europe has been meager. Yet nothing could be further from the truth. Heidegger’s major ideas were sufficiently protean that with a bit of tinkering they could easily be adopted by the Left. Following the war, Heidegger’s thought, shorn of its national socialism but fortified in its anti-Americanism, was embraced by many on the left, often without attribution. Through the writings of thinkers like Jean-Paul Sartre, “Heideggerianism” was married to communism, and this odd coupling became the core of the intellectual Left in Europe for the next generation. Communist parties, for their own obvious purposes, seized on the weapon of anti-Americanism. They employed it with such frequency and efficacy that it widely came to be thought of as a creation of communism that would vanish if ever communism should cease. The collapse of communism has served, on the contrary, to reveal the true depth and strength of anti-Americanism. Uncoupled from communism, which gave it a certain strength but also placed limits on its appeal, anti-Americanism has worked its way more than ever before into the mainstream of European thought.

Only one claw of the infamous Heideggerian pincer now remains, one clear force threatening Europe. If Europe once found identity in being in “the middle” (or as a “third force”), many argue today that it must find its identity in becoming a “pole of opposition” to America (and the leader of a “second force”). Emmanuel Todd develops this logic in his book, arguing that Europe should put together a new “entente” with Russia and Japan that would serve as a counterforce to the American empire.

The real clash of civilizations?

There is a great need today for both Europeans and Americans to understand the career of this powerful doctrine of anti-Americanism. As long as its influence remains, rational discussion of the practical differences between America and Europe becomes more and more difficult. No issue or question is addressed on its merits, and instead commentators tend to reason from conclusions to facts rather than from facts to conclusions. Arguments, no matter how reasonable they appear on the surface, are advanced to promote or confirm the pre-existing concept of America constructed by Heidegger and others. In the past, European political leaders had powerful reasons to resist this approach. Such practical concerns as alliances, the personal ties and contacts forged with American officials, commercial relations, and a fear of communism worked to dampen anti-Americanism. But of late, European leaders have been tempted to use anti-Americanism as an easy way to court favor with parts of the public, especially with intellectual and media elites. This has unfortunately added a new level of legitimacy to the anti-American mindset.

Not only does anti-Americanism make rational discussion impossible, it threatens the idea of a community of interests between Europe and America. Indeed, it threatens the idea of the West itself. According to the most developed views of anti-Americanism, there is no community of interests between the two sides of the Atlantic because America is a different and alien place. To “prove” this point without using such obvious, value-laden terms as “degeneracy” or the “site of catastrophe,” proponents invest differences that exist between Europe and America with a level of significance all out of proportion with their real weight. True, Europeans spend more on the welfare state than do Americans, and Europeans have eliminated capital punishment while many American states still employ it. But to listen to the way in which these facts are discussed, one would think that they add up to different civilizations. This kind of analysis goes so far as to place in question even the commonality of democracy. Since democracy is now unquestionably regarded as a good thing – never mind, of course, that such an attachment to democracy arguably constitutes the most fundamental instance of Americanization – America cannot be a real democracy. And so it is said that American capitalism makes a mockery of the idea of equality, or that low rates of voting participation disqualify America from being in the camp of democratic states.

Repairing the breach

Hardly any reasonable person today would dismiss the seriousness of many of the challenges that have been raised against “modernity.” Nor would any reasonable person deny that America, as one of the most modern and the most powerful of nations, has been the effective source of many of the trends of modernity, which therefore inevitably take on an American cast. But it is possible to acknowledge all of this without identifying modernity with a single people or place, as if the problems of modernity were purely American in origin or as if only Europeans, and not Americans, have been struggling with the question of how to deal with them. Anti-Americanism has become the lazy person’s way of treating these issues. It allows those using this label to avoid confronting some of the hard questions that their own analysis demands be asked. To provide just one striking example, America is regularly criticized for being too modern (it has, for example, developed “fast food”), except when it is criticized for not being modern enough (a large portion of the population is still religious).

A genuine dialogue between America and Europe will become possible only when Europeans start the long and arduous process of freeing themselves from the grip of anti-Americanism – a process, fortunately, that several courageous European intellectuals have already launched. But it is also important for Americans not to fall into the error of using anti-Americanism as an excuse to ignore all criticisms made of their country. This temptation is to be found far more among conservative intellectuals than among liberals, who have traditionally paid great respect to the arguments of anti-American thinkers. Much recent conservative commentary has been too quick to dismiss challenges to current American strategic thinking and immediately to attribute them, without sufficient analysis, to the worst elements found in the historical sack of anti-Americanism, from anti-technologism to anti-Semitism. It would be more than ironic – it would be tragic – if in combating anti-Americanism, we were to embrace an ideology of anti-Europeanism.

James W. Ceaser is professor of politics at the University of Virginia and co-author of The Perfect Tie: The True Story of the 2000 Presidential Election (Rowman & Littlefield, 2001).

The Meaning of Life – in the Laboratory

Winter 2002

By Leon R. Kass

For much of the past year, the United States was absorbed in a difficult moral debate about whether the federal government should fund research on human embryonic stem cells, cells derived from early embryos produced by in vitro fertilization in assisted-reproduction clinics. Proponents touted the life-saving and disease-curing promise of these pluripotent cells, which may someday enable doctors to replace the damaged tissues of spinal cord injury, juvenile diabetes, or Parkinson’s disease, among others. Opponents objected to the necessary exploitation and destruction of the human embryos from which the stem cells are extracted. In August, in his first major televised address to the nation, President Bush announced his decision. Reaffirming the principle that nascent life not be destroyed for the sake of research, yet eager to explore the possible therapeutic benefits of these cells, he chose to permit federal funds to be used for research only on already existing embryonic stem cell lines. At the same time, he announced the creation of a President’s Council on Bioethics to monitor stem cell research and to consider all of the medical and ethical ramifications of biomedical innovation. I have been appointed to chair this council.

Neither I nor The Public Interest is new to these matters. In the 1970s, I published two essays in these pages on laboratory-assisted reproduction and manipulation of embryos (“Making Babies – the New Biology and the ‘Old’ Morality,” Winter 1972; and “‘Making Babies’ Revisited,” Winter 1979), the second based on testimony I had given before an NIH Ethics Advisory Board on the question of federal funding of human embryo research. In view of ongoing public interest in these matters and my role in the future deliberations, the editors have decided to reprint a version of the latter essay, which appeared in my 1985 book, Toward a More Natural Science: Biology and Human Affairs (The Free Press). Readers will see immediately that the questions we face today are not identical to those of 20 years ago. No one then was talking about stem cells or the prospects for regenerative medicine. Moreover, this essay does not entirely represent my current thinking on these matters, and the policy option of limited federal funding adopted by President Bush is not even considered here. Nevertheless, I believe that this essay may still be useful to our current deliberations, less for the particular positions it takes, more for the approach it recommends and the questions it insists on considering. What does it mean to treat nascent human life as raw material to be exploited as a mere natural resource? What are the likely future technical possibilities and moral problems that present decisions are willy-nilly creating? What moral boundaries should researchers observe, whether they work with federal or with private funds? What are the goals of, and what are the proper limits to, the project for the mastery of human nature?

The current stem cell debate, like so many other arguments about biomedical technology, neglects these larger questions. In addition, both the proponents and the opponents of embryo research employ the same, rather limited, “vitalist” moral principle: Stem cell research will save lives (of children and adults); obtaining the stem cells destroys lives (of embryonic human beings). Our society needs to realize there is more at stake in the biological revolution than just saving life or avoiding death. We must also strive to protect and preserve human dignity and the ideas and practices that keep us human. This essay, though dated, remains an invitation to remember these human and moral concerns, concerns that are themselves manifestations of what is humanly most worth preserving. It is my intention to keep these larger and profoundly important matters central to the work of the President’s Council on Bioethics.
–Leon Kass, October 2001

People will not look forward to posterity who never look backward to their ancestors.
–Edmund Burke

What’s a nice embryo like you doing in a place like this?
–Traditional

The readers of Aldous Huxley’s novel, like the inhabitants of the society it depicts, enter into the Brave New World through “a squat gray building … the Central London Hatchery and Conditioning Centre,” beginning, in fact, in the Fertilizing Room. There, three hundred fertilizers sit bent over their instruments, inspecting eggs, immersing them “in warm bouillon containing free-swimming spermatozoa,” and incubating the successfully fertilized eggs until they are ripe for bottling (or Bokanovskification). Here, most emphatically, life begins with fertilization – in the laboratory. Life in the laboratory is the gateway to the Brave New World.

We stand today fully on the threshold of that gateway. How far and how fast we travel through this entrance is not a matter of chance or necessity but rather a matter of human decision – our human decision. Indeed, it seems to be reserved to the people of this country and this century, by our conduct and example, to decide also this important question.

Should we allow or encourage the initiation and growth of human life in the laboratory? This question, in one form or another, has been an issue for public policy for nearly a decade, even before the birth of the first test-tube baby in the summer of 1978. Back in 1975, after prolonged deliberations, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research issued its report and recommendations for research on the human fetus. The Secretary of Health, Education, and Welfare (HEW) then published regulations regarding research, development, and related activities involving fetuses, pregnant women, and in vitro fertilization. These provided that no federal monies should be used for in vitro fertilization of human eggs until a special Ethics Advisory Board reviewed the ethical issues and offered advice about whether government should support any such proposed research. Perhaps for the first time in the modern era of biomedical research, public deliberation and debate about ethical matters led to an effective moratorium on federal support for experimentation – in this case, for research on human in vitro fertilization.

A few years later, the whole matter once again became the subject of intense policy debate, when an Ethics Advisory Board was established to consider whether the United States government should finance research on human life in the laboratory. The question had been placed on the policy table by a research proposal submitted to the National Institute of Child Health and Human Development by Dr. Pierre Soupart of Vanderbilt University. Dr. Soupart requested $465,000 for a study to define in part the genetic risk involved in obtaining early human embryos by tissue-culture methods. He proposed to fertilize about 450 human ova, obtained from donors undergoing gynecological surgery (i.e., not from women whom the research could be expected to help), with donor sperm, to observe their development for five to six days, and to examine them microscopically for chromosomal and other abnormalities before discarding them. In addition, he proposed to study whether such laboratory-grown embryos could be frozen and stored without introducing abnormalities, for it was thought that temporary cold storage of human embryos might improve the success rate in the subsequent embryo-transfer procedure used to produce a child. Though Dr. Soupart did not then propose to perform embryo transfers for women seeking to become pregnant, his research was intended to serve that goal: He hoped to reassure us that baby-making with the help of in vitro fertilization was safe, and he sought to perfect the techniques of laboratory growth of human embryos introduced by Drs. Robert Edwards and Patrick Steptoe in England.

Dr. Soupart’s application was approved for funding by the National Institutes of Health in October 1977, but because of the administrative regulations, it could not be funded without review by an Ethics Advisory Board. The then secretary of HEW, Joseph Califano, constituted such a board, and charged it not only with a decision on the Soupart proposal, but with an inquiry into all the scientific, ethical, and legal issues involved, urging it “to provide recommendations on broad principles to guide the Department in future decision-making.” After six months of public hearings all over the United States and another six months of private deliberation, the board issued its report in 1979, recommending that research funding be permitted for some in vitro experimentation – including the sort proposed by Dr. Soupart. But no secretary of health and human services – then or thereafter – has been willing to act on that recommendation. In fact, Dr. Soupart died in 1981 without having received a clear answer from the government. Thus, we still have no definite policy regarding our question: Should we allow or encourage the initiation and growth of human life in the laboratory?

The meaning of the question

How should one think about such ethical questions, here and in general? There are many possible ways, and it is not altogether clear which way is best. For some people, ethical issues are immediately matters of right and wrong, of purity and sin, of good and evil. For others, the critical terms are benefits and harms, risks and promises, gains and costs. Some will focus on so-called rights of individuals or groups (e.g., a right to life or childbirth); still others will emphasize so-called goods for society and its members, such as the advancement of knowledge and the prevention and cure of disease. My own orientation here is somewhat different. I wish to suggest that before deciding what to do, one should try to understand the implications of doing or not doing. The first task, it seems to me, is not to ask “moral or immoral?” or “right or wrong?” but to try to understand fully the meaning and significance of the proposed actions.

This concern with significance leads me to take a broad view of the matter. For we are concerned here not only with some limited research project of the sort proposed by Dr. Soupart, and the narrow issues of safety and informed consent it immediately raises; we are concerned also with a whole range of implications, including many that are tied to foreseeable consequences of this research and its predictable extensions – and touching even our common conception of our own humanity. As most of us are at least tacitly aware, more is at stake than in ordinary biomedical research or in experimenting with human subjects at risk of bodily harm. At stake is the idea of the humanness of our human life and the meaning of our embodiment, our sexual being, and our relation to ancestors and descendants. In thinking about necessarily particular and immediate decisions, say, for example, regarding Dr. Soupart’s research, we must be mindful of the larger picture and must avoid the great danger of trivializing the matter for the sake of rendering it manageable.

The status of extracorporeal life

The meaning of “life in the laboratory” turns in part on the nature and meaning of the human embryo, isolated in the laboratory and separate from the confines of a woman’s body. What is the status of a fertilized human egg (i.e., a human zygote) and the embryo that develops from it? How are we to regard its being? How are we to regard it morally (i.e., how are we to behave toward it)? These are, alas, all too familiar questions. At least analogous, if not identical, questions are central to the abortion controversy and are also crucial in considering whether and what sort of experimentation is properly conducted on living but aborted fetuses. Would that it were possible to say that the matter is simple and obvious, and that it has been resolved to everyone’s satisfaction!

But the controversy about the morality of abortion continues to rage and divide our nation. Moreover, many who favor or who do not oppose abortion do so despite the fact that they regard the previable fetus as a living human organism, even if less worthy of protection than a woman’s desire not to give it birth. Almost everyone senses the importance of this matter for the decision about laboratory culture of and experimentation with human embryos. Thus, we are obliged to take up the question of the status of the embryo in our search for the outlines of some common ground on which many of us can stand. To the best of my knowledge, the discussion that follows is not informed by any particular sectarian or religious teaching, though it may perhaps reveal that I am a person not devoid of reverence and the capacity for awe and wonder, said by some to be the core of the religious sentiment.

I begin by noting that the circumstances of laboratory-grown blastocysts (i.e., three-to-six-day-old embryos) and early embryos are not identical with those of the analogous cases of (1) living fetuses facing abortion and (2) living aborted fetuses used in research. First, the fetuses whose fates are at issue in abortion are unwanted, usually the result of so-called accidental conception. Here, the embryos are wanted, and deliberately created, despite certain knowledge that many of them will be destroyed or discarded. Moreover, the fate of these embryos is not in conflict with the wishes, interests, or alleged rights of the pregnant women. Second, though the federal guidelines governing fetal research permit studies conducted on the not-at-all viable aborted fetus, such research merely takes advantage of available “products” of abortions not themselves undertaken for the sake of the research. No one has proposed and no one would sanction the deliberate production of live fetuses to be aborted for the sake of research, even very beneficial research.1 In contrast, we are here considering the deliberate production of embryos for the express purpose of experimentation.

The cases may also differ in other ways. Given the present state of the art, the largest embryo under discussion is the blastocyst, a spherical, relatively undifferentiated mass of cells, barely visible to the naked eye. In appearance, it does not look human; indeed, only the most careful scrutiny by the most experienced scientist might distinguish it from similar blastocysts of other mammals. If the human zygote and blastocyst are more like the animal zygote and blastocyst than they are like the 12-week-old human fetus (which already has a humanoid appearance, differentiated organs, and electrical activity of the brain), then there would be a much diminished ethical dilemma regarding their deliberate creation and experimental use. Needless to say, there are articulate and passionate defenders of all points of view. Let us try, however, to consider the matter afresh.

First of all, the zygote and early embryonic stages are clearly alive. They metabolize, respire, and respond to changes in the environment; they grow and divide. Second, though not yet organized into distinctive parts or organs, the blastocyst is an organic whole, self-developing, genetically unique and distinct from the egg and sperm whose union marked the beginning of its career as a discrete, unfolding being. While the egg and sperm are alive as cells, something new and alive in a different sense comes into being with fertilization. The truth of this is unaffected by the fact that fertilization takes time and is not an instantaneous event. For after fertilization is complete, there exists a new individual, with its unique genetic identity, fully potent for the self-initiated development into a mature human being, if circumstances are cooperative. Though there is some sense in which the lives of egg and sperm are continuous with the life of the new organism (or, in human terms, that the parents live on in the child-to-be [or child]), in the decisive sense there is a discontinuity, a new beginning, with fertilization. After fertilization, there is continuity of subsequent development, even if the locus of the new living being alters with implantation (or birth). Any honest biologist must be impressed by these facts, and must be inclined, at least at first glance, to the view that a human life begins at fertilization.2 Even Dr. Robert Edwards apparently stumbled over this truth, perhaps inadvertently, in his remark about Louise Brown, his first successful test-tube baby: “The last time I saw her, she was just eight cells in a test-tube. She was beautiful then, and she’s still beautiful now!”

Granting that a human life begins at fertilization, and comes to be via a continuous process thereafter, surely – one might say – the blastocyst itself can hardly be considered a human being. I myself would agree that a blastocyst is not, in a full sense, a human being – or what the current fashion calls, rather arbitrarily and without clear definition, a person. It does not look like a human being nor can it do very much of what human beings do. Yet, at the same time, I must acknowledge that the human blastocyst is (1) human in origin and (2) potentially a mature human being, if all goes well. This, too, is beyond dispute; indeed it is precisely because of its peculiarly human potentialities that people propose to study it rather than the embryos of other mammals. The human blastocyst, even the human blastocyst in vitro, is not humanly nothing; it possesses a power to become what everyone will agree is a human being.

Here it may be objected that the blastocyst in vitro has today no such power, because there is now no in vitro way to bring the blastocyst to that much later fetal stage in which it might survive on its own. There are no published reports of culture of human embryos past the blastocyst stage (though this has been reported for mice). The in vitro blastocyst, like the twelve-week-old aborted fetus, is in this sense not viable (i.e., it is at a stage of maturation before the stage of possible independent existence). But if we distinguish, among the not-viable embryos, between the previable and the not-at-all viable – on the basis that the former, though not yet viable, is capable of becoming or being made viable – we note a crucial difference between the blastocyst and the twelve-week-old abortus. Unlike an aborted fetus, the blastocyst is possibly salvageable, and hence potentially viable, if it is transferred to a woman for implantation. It is not strictly true that the in vitro blastocyst is necessarily not viable. Until proven otherwise, by embryo transfer and attempted implantation, we are right to consider the human blastocyst in vitro as potentially a human being and, in this respect, not fundamentally different from a blastocyst in utero. To put the matter more forcefully, the blastocyst in vitro is more viable, in the sense of more salvageable, than aborted fetuses at most later stages, up to, say, 20 weeks.

This is not to say that such a blastocyst is therefore endowed with a so-called right to life, that failure to implant it is negligent homicide, or that experimental touchings of such blastocysts constitute assault and battery. (I myself tend to reject such claims, and indeed think that the ethical questions are not best posed in terms of rights.) But the blastocyst is not nothing; it is at least potential humanity, and as such it elicits, or ought to elicit, our feelings of awe and respect. In the blastocyst, even in the zygote, we face a mysterious and awesome power, a power governed by an immanent plan that may produce an indisputably and fully human being. It deserves our respect not because it has rights or claims or sentience (which it does not have at this stage), but because of what it is, now and prospectively.

Let us test this provisional conclusion by considering intuitively our response to two possible fates of such zygotes, blastocysts, and early embryos. First, should such an embryo die, will we be inclined to mourn its passing? When a woman we know miscarries, we are sad – largely for her loss and disappointment, but perhaps also at the premature death of a life that might have been. But we do not mourn the departed fetus, nor do we seek ritually to dispose of the remains. In this respect, we do not treat even the fetus as fully one of us.

On the other hand, we would, I suppose, recoil even from the thought, let alone the practice – I apologize for forcing it upon the reader – of eating such embryos, should someone discover that they would provide a great delicacy, a “human caviar.” The human blastocyst would be protected by our taboo against cannibalism, which insists on the humanness of human flesh and does not permit us to treat even the flesh of the dead as if it were mere meat. The human embryo is not mere meat; it is not just stuff; it is not a “thing.”3 Because of its origin and because of its capacity, it commands a higher respect.

How much more respect? As much as for a fully developed human being? My own inclination is to say probably not, but who can be certain? Indeed, there might be prudential and reasonable grounds for an affirmative answer, partly because the presumption of ignorance ought to err in the direction of never underestimating the basis for respect of human life (not least, for our own self-respect), partly because so many people feel very strongly that even the blastocyst is protectably human. As a first approximation, I would analogize the early embryo in vitro to the early embryo in utero (because both are potentially viable and human). On this ground alone, the most sensible policy is to treat the early embryo as a previable fetus, with constraints imposed on early embryo research at least as great as those on fetal research.

To some this may seem excessively scrupulous. They will argue for the importance of the absence of distinctive humanoid appearance or the absence of sentience. To be sure, we would feel more restraint in invasive procedures conducted on a five-month-old or even a twelve-week-old living fetus than on a blastocyst. But this added restraint on inflicting suffering on a look-alike, feeling creature in no way denies the propriety of a prior restraint, grounded in respect for individuated, living, potential humanity. Before I would be persuaded to treat early embryos differently from later ones, I would insist on the establishment of a reasonably clear, naturally grounded boundary that would separate “early” and “late,” and on the provision of a basis for respecting the “early” less than the “late.” This burden must be accepted by proponents of experimentation with human embryos in vitro if a decision to permit the creation of embryos for such experimentation is to be treated as ethically responsible.

The treatment of extracorporeal embryos

Where does the above analysis lead in thinking about treatment of human embryos in the laboratory? I indicate, very briefly, the lines toward a possible policy, though that is not my major intent.

The in vitro fertilized embryo has four possible fates: (1) implantation, in the hope of producing from it a child; (2) death, by active killing or disaggregation, or by a “natural” demise; (3) use in manipulative experimentation – embryological, genetic, etc.; and (4) use in attempts at perpetuation in vitro, beyond the blastocyst stage, ultimately, perhaps, to viability. Let us consider each in turn.

On the strength of my analysis of the status of the embryo, and the respect due it, no objection would be raised to implantation. In vitro fertilization and embryo transfer to treat infertility, as in the case of Mr. and Mrs. Brown, is perfectly compatible with a respect and reverence for human life, including potential human life. Moreover, no disrespect is intended or practiced by the mere fact that several eggs are removed to increase the chance of success. Were it possible to guarantee successful fertilization and normal growth with a single egg, no more would need to be obtained. Assuming nothing further is done with the unimplanted embryos, there is nothing disrespectful going on. The demise of the unimplanted embryos would be analogous to the loss of numerous embryos wasted in the normal in vivo attempts to generate a child. It is estimated that over 50 percent of eggs successfully fertilized during unprotected sexual intercourse fail to implant, or do not remain implanted, in the uterine wall, and are shed soon thereafter, before a diagnosis of pregnancy could be made. Any couple attempting to conceive a child tacitly accepts such embryonic wastage as the perfectly acceptable price to be paid for the birth of a (usually) healthy child. Current procedures to initiate pregnancy with laboratory fertilization thus differ from the natural process in that what would normally be spread over four or five months in vivo is compressed into a single effort, using all at once a four or five months’ supply of eggs.4

Parenthetically, we should note that the natural occurrence of embryo and fetal loss and wastage does not necessarily or automatically justify all deliberate, humanly caused destruction of fetal life. For example, the natural loss of embryos in early pregnancy cannot in itself be a warrant for deliberately aborting them or for invasively experimenting on them in vitro, any more than stillbirths could be a justification for newborn infanticide. There are many things that happen naturally that we ought not do deliberately. It is curious how the same people who deny the relevance of nature as a guide for evaluating human interventions into human generation, and who deny that the term “unnatural” carries any ethical weight, will themselves appeal to “nature’s way” when it suits their purposes.5 Still, in this present matter, the closeness to natural procreation – the goal is the same, the embryonic loss is unavoidable and not desired, and the amount of loss is similar – leads me to believe that we do no more intentional or unjustified harm in one case than in the other and practice no disrespect.

But must we allow the unimplanted in vitro embryos to die? Why should they not be either transferred for adoption into another infertile woman, or else used for investigative purposes, to seek new knowledge, say about gene action? The first option raises questions about lineage and the nature of parenthood to which I will return. But even on first glance, it would seem likely to raise a large objection from the original couple who were seeking a child of their own, and not the dissemination of their biological children for prenatal adoption.

But what about experimentation on such blastocysts and early embryos? Is that compatible with the respect they deserve? This is the hard question. On balance, I would think not. Invasive and manipulative experiments involving such embryos very likely presume that they are things or mere stuff and deny the fact of their possible viability. Certain observational and noninvasive experiments might be different. But on the whole, I would think that the respect for human embryos for which I have argued – I repeat, not their so-called right to life – would lead one to oppose most potentially interesting and useful experimentation. This is a dilemma, but one which cannot be ducked or defined away. Either we accept certain great restrictions on the permissible uses of human embryos or we deliberately decide to override – though I hope not deny – the respect due to the embryos.

I am aware that I have pointed toward a seemingly paradoxical conclusion about the treatment of the unimplanted embryos: Leave them alone, and do not create embryos for experimentation only. To let them die naturally would be the most respectful course, grounded on a reverence, generically, for their potential humanity, and a respect, individually, for their being the seed and offspring of a particular couple, who were themselves seeking only to have a child of their own. An analysis that stressed a right to life, rather than respect, would, of course, lead to different conclusions. Only an analysis of the status of the embryo that denies both its so-called rights and its worthiness of all respect would have no trouble sanctioning its use in investigative research, donation to other couples, commercial transactions, and other activities of these sorts.

I have to this point ignored the fourth and future fate of life in the laboratory, perpetuation in the bottle beyond the blastocyst stage, ultimately, perhaps, to viability. As a practical matter, this repugnant Huxleyan prospect probably need not concern us much for the time being. But as a thought experiment, it permits us to test further our intuitions about the meaning of life in the laboratory and to discover thereby the limitations of the previous analysis. For these unimplanted and cultivated embryos raise even more profound difficulties. Bad as it may now be to discard or experiment upon them in these primordial stages, it will be far worse once we learn how to perpetuate them to later stages in their laboratory existence – especially when the technology arrives that can bring them to viability in vitro. For how long and up to what stage of development will they be considered fit material for experimentation? When ought they to be released from the machinery and admitted into the human fraternity, or, at least, into the premature nursery? The need for a respectable boundary defining protectable human life cannot be overstated. The current boundaries, gerrymandered for the sake of abortion – namely, birth or viability – may now satisfy both women’s liberation and the United States Supreme Court and may someday satisfy even a future pope, but they will not survive the coming of more sophisticated technologies for growing life in the laboratory.6

But what if perpetuation in the laboratory were to be sought not for the sake of experimentation but in order to produce a healthy living child – say, one with all the benefits of a scientifically based gestational nourishment and care? Would such treatment of a laboratory-grown embryo be compatible with the respect it is owed? If we consider only what is owed to its vitality and potential humanity as an individuated human being, then the laboratory growth of an embryo into a viable full-term baby (i.e., ectogenesis) would be perfectly compatible with the requisite respect. (Indeed, for these reasons one would guess that the right-to-life people, who object even to the destruction of blastocysts, would find infinitely preferable any form of their preservation and perpetuation to term, in the bottle if necessary.) But the practice of ectogenesis would be incompatible with the further respect owed to our humanity on account of the bounds of lineage, kinship, and descent. To be human means not only to have human form and powers; it means also to have a human context and to be humanly connected. The navel, no less than speech and the upright posture, is a mark of our being. It is for these sorts of reasons that we find the Brave New World’s Hatcheries dehumanizing.

Assisted by these reflections on the futuristic prospect of ectogenesis, we return for a closer look at the present practices of implantation. For just as the laboratory is not a fitting home for nascent human life, so, too, some human homes are more appropriate than others.

Lineage and parenthood, embodiment and gender

Many people rejoiced at the birth of Louise Brown. Some were pleased by the technical accomplishment, and many were pleased that she was born apparently in good health. But most of us shared the joy of her parents, who, after a long, frustrating, and fruitless period, at last had the pleasure and blessing of a child of their own. (Curiously, the perspective of the child was largely ignored. It will thus be easier to come at the matter of lineage by looking at it first from the side of the progenitors rather than the descendants.) The desire to have a child of one’s own is acknowledged to be a powerful and deep-seated human desire – some have called it instinctive – and the satisfaction of this desire, by the relief of infertility, is said to be one major goal of continuing work with in vitro fertilization and embryo transfer. That this is a worthy goal few, if any, would deny.

Yet let us explore what is meant by “to have a child of one’s own.” First, what is meant by “to have”? Is the crucial meaning that of gestating and bearing? Or is it to have as a possession? Or is it to nourish and to rear, the child being the embodiment of one’s activity as teacher and guide? Or is it rather to provide someone who descends and comes after, someone who will replace oneself in the family line or preserve the family tree by new sproutings and branchings, someone who will renew and perpetuate the vitality and aspiration of human life?

More significantly, what is meant by “one’s own”? What sense of one’s own is important? A scientist might define one’s own in terms of carrying one’s own genes. Though in some sense correct, this cannot be humanly decisive. For Mr. Brown or for most of us, it would not be a matter of indifference if the sperm used to fertilize the egg were provided by an identical twin brother – whose genes would be, of course, the same as his. Rather, the humanly crucial sense of one’s own, the sense that leads most people to choose their own, rather than to adopt, is captured in such phrases as “my seed,” “flesh of my flesh,” “sprung from my loins.” More accurately, since one’s own is not the own of one but of two, the desire to have a child of one’s own is a couple’s desire to embody, out of the conjugal union of their separate bodies, a child who is flesh of their separate flesh made one. This archaic language may sound quaint, but I would argue that this is precisely what is being celebrated by most people who rejoice at the birth of Louise Brown, whether they would articulate it this way or not. Mr. and Mrs. Brown, by the birth of their daughter, embody themselves in another, and thus fulfill this aspect of their separate sexual natures and of their married life together. They also acquire descendants and a new branch of their joined family tree. Correlatively, the child, Louise, is given solid and unambiguous roots from which she has sprung and by which she will be nourished.

If this were to be the only use made of embryo transfer, and if providing in this sense “a child of one’s own” were indeed the sole reason for the clinical use of the techniques, there could be no objection. Here indeed is the natural and proper home for the human embryo. Here indeed is the affirmation of transmission and the importance of lineage and connectedness. Yet there will almost certainly be – in fact, there already are – other uses, involving third parties, to satisfy the desire to have a child of one’s own in different senses of “to have” and “one’s own.” I am not merely speculating about future possibilities. With the technology to effect human in vitro fertilization and embryo transfer comes the immediate possibility of egg donation (egg from donor, sperm from husband), embryo donation (egg and sperm from outside of the marriage), and foster pregnancy (host surrogate for gestation). Clearly, the need for extramarital embryo transfers is real and probably large – eventually, perhaps, even greater than the need for intramarital ones.

Nearly everyone agrees that these circumstances are morally and perhaps psychologically more complicated than the intramarital ones. The reasons touch the central core of gestation and generation. Here the meaning of one’s own is no longer so unambiguous; neither is the meaning of motherhood and the status of pregnancy. Indeed, one of the clearest meanings of having life in the laboratory is the rupture of the normally necessary umbilical connection between mother and child. This technical capacity to disrupt the connection has in fact been welcomed, curiously, for contradictory reasons. On the one hand, it is argued that embryo donation, or prenatal adoption, would be superior to present adoption, precisely because the woman would have the experience of pregnancy and the child would be born of the adopting mother, rendering the maternal tie that much closer. On the other hand, the mother-child bond rooted in pregnancy and delivery is held to be of little consequence by those who would endorse the use of surrogate gestational mothers, say for a woman whose infertility is due to uterine disease rather than ovarian disease or oviduct obstruction. But in both cases, the new techniques will serve not to ensure and preserve lineage, but rather to confound and complicate it. The principle truly at work in bringing life into the laboratory is not to provide married couples with a child of their own – or to provide a home of their own for children – but to provide a child to anyone who wants one, by whatever possible or convenient means.

So what? it will be said. First of all, we already practice and encourage adoption. Second, we have permitted artificial insemination – though we have, after roughly 50 years of this practice, yet to resolve questions of legitimacy. Third, what with the high rate of divorce and remarriage, identification of mother, father, and child is already complicated. Fourth, there is a growing rate of illegitimacy and husbandless parentages. Fifth, the use of surrogate mothers for foster pregnancy is becoming widespread with the aid of artificial insemination. Finally, our age in its enlightenment is no longer so certain about the virtues of family, lineage, and heterosexuality, or even about the taboos against adultery and incest. Against this background, it will be asked, Why all the fuss about some little embryos that stray from their nest?

It is not an easy question to answer. Yet consider. We practice adoption because there are abandoned children who need good homes. We do not, and would not, encourage people deliberately to generate children for others to adopt, partly because we wish to avoid baby markets, partly because we think it unfair to deliberately deprive the child of his natural ties. Recent years have seen a rise in our concern with roots, against the rootless and increasingly homogeneous background of contemporary American life. Adopted children, in particular, are pressing for information regarding their biological parents, and some states now require this information to be made available (on that typically modern rationale of freedom of information, rather than because of the profound importance of lineage for self-identity). Even the importance of children’s ties to grandparents is being reasserted, as courts are granting visitation privileges to grandparents, over the objections of divorced-and-remarried former daughters- or sons-in-law. The practice of artificial insemination has yet to be evaluated, the secrecy in which it is practiced being an apparent concession to the dangers of publicity.7 Indeed, most physicians who practice artificial insemination (donor) routinely mix in some semen from the husband, to preserve some doubt about paternity – again, a concession to the importance of lineage and legitimacy. Finally, what about the changing mores of marriage, divorce, single-parent families, and sexual behavior? Do we applaud these changes? Do we want to contribute further to this confusion of thought, identity, and practice?8

Our society is dangerously close to losing its grip on the meaning of some fundamental aspects of human existence. In reviewing the problem of the disrespect shown to embryonic and fetal life in our efforts to master them, we noted a tendency – we shall meet it again shortly – to reduce certain aspects of human being to mere body, a tendency opposed most decisively in the nearly universal prohibition of cannibalism. Here, in noticing our growing casualness about marriage, legitimacy, kinship, and lineage, we discover how our individualistic and willful projects lead us to ignore the truths defended by the equally widespread prohibition of incest (especially parent-child incest). Properly understood, the largely universal taboo against incest, and also the prohibitions against adultery, defend the integrity of marriage, kinship, and especially the lines of origin and descent. These time-honored restraints implicitly teach that clarity about who your parents are, clarity in the lines of generation, clarity about who is whose, are the indispensable foundations of a sound family life, itself the sound foundation of civilized community. Clarity about your origins is crucial for self-identity, itself important for self-respect. It would be, in my view, deplorable public policy to erode further such fundamental beliefs, values, institutions, and practices. This means, concretely, no encouragement of embryo adoption or especially of surrogate pregnancy. While it would perhaps be foolish to try to proscribe or outlaw such practices, it would not be wise to support or foster them.

The existence of human life in the laboratory, outside the confines of the generating bodies from which it sprang, also challenges the meaning of our embodiment. People like Mr. and Mrs. Brown, who seek a child derived from their flesh, celebrate in so doing their self-identity with their own bodies and acknowledge the meaning of the living human body by following its pointings to its own perpetuation. For them, their bodies contain the seeds of their own self-transcendence and enable them to strike a blow for the enduring goodness of the life in which they participate. Affirming the gift of their embodied life, they show their gratitude by passing on that gift to their children. Only the body’s failure to serve the transmission of embodiment has led them – and only temporarily – to generate beyond its confines. But life in the laboratory also allows other people – including those who would donate or sell sperm, eggs, or embryos; or those who would bear another’s child in surrogate pregnancy; or even those who will prefer to have their children rationally manufactured entirely in the laboratory – to declare themselves independent of their bodies, in this ultimate liberation. For them the body is a mere tool, ideally an instrument of the conscious will, the sole repository of human dignity. Yet this blind assertion of will against our bodily nature – in contradiction of the meaning of the human generation it seeks to control – can only lead to self-degradation and dehumanization.

In this connection, the case of surrogate wombs bears a further comment. While expressing no objection to the practice of foster pregnancy itself, some people object that it will be done for pay, largely because of their fear that poor women will be exploited by such a practice. But if there were nothing wrong with foster pregnancy, what would be wrong with making a living at it? Clearly, this objection harbors a tacit understanding that to bear another’s child for pay is in some sense a degradation of oneself – in the same sense that prostitution is a degradation primarily because it entails the loveless surrender of one’s body to serve another’s lust, and only derivatively because the prostitute is paid. It is to deny the meaning and worth of one’s body to treat it as a mere incubator, divested of its human meaning. It is also to deny the meaning of the bonds among sexuality, love, and procreation. The buying and selling of human flesh and the dehumanized uses of the human body ought not to be encouraged. To be sure, the practice of womb donation could be engaged in for love rather than for money, as it apparently has been in some cases, including the original case in Michigan. A woman could bear her sister’s child out of sisterly love. But to the degree that she escapes in this way from the degradation and difficulties of the sale of human flesh and bodily services and the treating of the body as undignified stuff, once again she approaches instead the difficulties of incest and near incest.

To this point we have been examining the meaning of the presence of human life in the laboratory, but we have neglected the meaning of putting it there in the first place, that is, the meaning of extracorporeal fertilization as such. What is the significance of divorcing human generation from human sexuality, precisely for the meaning of our bodily natures as male and female, as both gendered and engendering? To be male or to be female derives its deepest meaning only in relation to the other, and therewith in the gender-mated prospects for generation through union. Our separated embodiment prevents us as lovers from attaining that complete fusion of souls that we as lovers seek; but the complementarity of gender provides a bodily means for transcending separateness through the children born of sexual union. As the navel is our bodily mark of lineage, pointing back to our ancestors, so our genitalia are the bodily mark of linkage, pointing ultimately forward to our descendants. Can these aspects of our being be fulfilled through the rationalized techniques of laboratory sexuality and fertilization? Does not the scientist-partner produce a triangle that somehow subverts the meaning of “two”? Even in the best of cases, do we not pay in coin of our humanity for electing to generate sexlessly?

Future prospects

Before proceeding to look at some questions of public policy, we need first to consider the likely future developments regarding human life in the laboratory. In my view, we must consider these prospects in reaching our decision about present policy. For, clearly, part of the meaning of what we are now doing consists in the things it will enable us sooner or later to do hereafter.

What can we expect for life in the laboratory, as an outgrowth of present studies? To be sure, prediction is difficult. One can never know with certainty what will happen, much less how soon. Yet uncertainty is not the same as simple ignorance. Some things, indeed, seem likely. They seem likely because (1) they are thought necessary or desirable, at least by some researchers and their sponsors; (2) they are probably biologically possible and technically feasible; and (3) they will be difficult to prevent or control (especially if no one anticipates their development or sees a need to worry about them). Wise policy makers will want to face up to reasonable projections of future accomplishments, consider whether they are cause for social concern, and see whether or not the principles now enunciated and the practices now established are adequate to deal with any such concerns. I project at least the following:

  1. The growth of human embryos in the laboratory will be extended beyond the blastocyst stage. Such growth must be deemed desirable under all the arguments advanced for developmental research up to the blastocyst stage; research on gene action, chromosome segregation, cellular and organic differentiation, fetus-environment interaction, implantation, etc., cannot answer all its questions with the blastocyst. Such in vitro postblastocyst differentiation has apparently been achieved in the mouse, in culture; the use of other mammals as temporary hosts for human embryos is also a possibility. How far such embryos will eventually be perpetuated is anybody’s guess, but full-term ectogenesis cannot be excluded. Neither can the existence of laboratories filled with many living human embryos, growing at various stages of development.
  2. Experiments will be undertaken to alter the cellular and genetic composition of these embryos, at first without subsequent transfer to a woman for gestation, perhaps later as a prelude to reproductive efforts. Again, scientific reasons now justifying research like Dr. Soupart’s already justify further embryonic manipulations, including formations of hybrids or chimeras (within species and between species); gene, chromosome, and plasmid insertion, excision, or alteration; nuclear transplantation or cloning; etc. The techniques of DNA recombination, coupled with the new skills of handling embryos, make prospects for some precise genetic manipulation much nearer than anyone would have guessed 10 years ago. And embryological and cellular research in mammals is making astounding progress. Not long ago the cover of Science featured a picture of a hexaparental mouse, born after reaggregation of an early embryo with cells disaggregated from three separate embryos. (Note: That sober journal called this a “handmade mouse” – literally a manu-factured mouse – and went on to say that it was “manufactured by genetic engineering techniques.”)
  3. Storage and banking of living human embryos (and ova) will be undertaken, perhaps commercially. After all, commercial sperm banks are already well established and prospering.

I can here do no more than identify a few kinds of questions that must be considered in relation to such possible coming control over human heredity and reproduction: questions about the wisdom required to engage in such practices; questions about the goals and standards that will guide our interventions; questions about changes in the concepts of being human, including embodiment, gender, love, lineage, identity, parenthood, and sexuality; questions about the responsibility of power over future generations; questions about awe, respect, humility; questions about the kind of society we will have if we follow along our present course.9

Though I cannot discuss these questions now, I can and must face a serious objection to considering them at all. Most people would agree that the projected possibilities raise far more serious questions than do simple fertilization of a few embryos, their growth in vitro to the blastocyst stage, and their subsequent use in experimentation or possible transfer to women for gestation. Why burden present policy with these possibilities? Future abuses, it is often said, do not disqualify present uses (though these same people also often say that “future benefits justify present practices, even questionable ones”). Moreover, there can be no certainty that A will lead to B. This thin-edge-of-the-wedge argument has been open to criticism.

But such criticism misses the point for two reasons. First, critics often misunderstand the wedge argument, which is not primarily an argument of prediction, that A will lead to B, say on the strength of the empirical analysis of precedent and an appraisal of the likely direction of present research. It is primarily an argument about the logic of justification. Do not the principles of justification now used to justify the current research proposal already justify in advance the further developments? Consider some of these principles:

  1. It is desirable to learn as much as possible about the processes of fertilization, growth, implantation, and differentiation of human embryos and about human gene expression and its control.
  2. It would be desirable to acquire improved techniques for enhancing conception, for preventing conception and implantation, for the treatment of genetic and chromosomal abnormalities, etc.
  3. In the end, only research using human embryos can answer these questions and provide these techniques.
  4. There should be no censorship or limitation of scientific inquiry or research.

This logic knows no boundary at the blastocyst stage, or, for that matter, at any later stage. For these principles not to justify future extensions of current work, some independent additional principles (e.g., a principle limiting such justification to particular stages of development) would have to be found. (Here, the task is to find a biologically defensible distinction that could be respected as reasonable and not arbitrary, a difficult – perhaps impossible – task, given the continuity of development after fertilization.) Perhaps even more important than any present decision to encourage bringing human life into the laboratory will be the reasons given to support that decision. We will want to know precisely what grounds our policy makers will give for endorsing such research, and whether their principles have not already sanctioned future developments. If they do give such wedge-opening justifications, let them do so deliberately, candidly, and intentionally.

A better case to illustrate the wedge logic is the principle offered for the embryo-transfer procedure as treatment for infertility. Will we support the use of in vitro fertilization and embryo transfer because it provides a child of one’s own, in a strict sense of “one’s own,” to a married couple? Or will we support the transfer because it is treatment of involuntary infertility, which deserves treatment in or out of marriage, hence endorsing the use of any available technical means that would produce a healthy and normal child, including surrogate wombs, or even ectogenesis?

Second, logic aside, the opponents of the wedge argument do not counsel well. It would be simply foolish to ignore what might come next and to fail to make the best possible assessment of the implications of present action (or inaction). Let me put the matter very bluntly: The decisions we must now make may very well help to determine whether human beings will eventually be produced in laboratories. I say this not to shock – and I do not mean to beg the question of whether that would be desirable or not. I say this to make sure that we and our policy makers face squarely the full import and magnitude of this decision. Once the genies let the babies into the bottle, it may be impossible to get them out again.

The question of federal funding

So much, then, for the meanings of initiating, housing, and manipulating human embryos in the laboratory. We are now better prepared to consider the original practical question: Should we allow or encourage these activities? The foregoing reflections still make me doubt the wisdom of proceeding with these practices, both in research and in their clinical application, notwithstanding that valuable knowledge might be had by continuing the research and identifiable suffering might be alleviated by using it to circumvent infertility. To doubt the wisdom of going ahead makes one at least a fellow traveler of the opponents of such research, but it does not, either logically or practically, require that one join them in trying to prevent it, say by legal prohibition. Not every folly can or should be legislated against. Attempts at prohibition here would seem to be both ineffective and dangerous, ineffective because impossible to enforce, dangerous because the costs of such precedent-setting interference with scientific research might be greater than the harm it prevents. To be sure, we already have legal restrictions on experimentation with human subjects, restrictions that are manifestly not incompatible with the progress of medical science. Neither is it true that science cannot survive if it must take some direction from the law. Nor is it the case that all research, because it is research, is or should be absolutely protected. But it does not seem to me that in vitro fertilization and growth of human embryos or embryo transfer deserve, at least at present, to be treated as sufficiently dangerous for legislative interference.

But if to doubt the wisdom does not oblige one to seek to outlaw the folly, neither does a decision to permit require a decision to encourage or support. A researcher’s freedom to do in vitro fertilization, or a woman’s right to have a child with laboratory assistance, in no way implies a public (or even a private) obligation to pay for such research or treatment. A right against interference is not an entitlement for assistance. The question before the Department of Health and Human Services is not whether such research should be permitted or outlawed, but only whether the federal government should fund it. This is the policy question that needs to be discussed.

I propose to discuss it here, and at some length, not because it is itself timely or relatively important – it is neither – but because it is exemplary. Policy questions regarding controversial new biomedical technologies and practices – as well as other morally and politically charged matters on the border between private and public life (e.g., abortion, racial discrimination, developing the artificial heart, or affirmative action) – frequently take the form of arguments over federal support. Social control and direction of new developments are often exercised not in terms of yes or no, but rather in terms of how much, how fast, or how soon. Thus, much of the present analysis can be generalized and made applicable to other specific developments in the field and to the field as a whole.

The arguments in favor of federal support are well known. First, the research is seen as continuous with, if not quite an ordinary instance of, the biomedical research that the federal government supports handsomely; roughly two-thirds of the money spent on biomedical research in the United States comes from Uncle Sam. Why is this research different from all other research? Its scientific merit has been attested to by the normal peer-review process of NIH. For some, that is a sufficient reason to support it.

Second, there are specific practical fruits expected from the anticipated successes of this new line of research. Besides relief for many cases of infertility, the research promises new birth-control measures based upon improved understanding of the mechanisms of fertilization and implantation, which in turn could lead to techniques for blocking these processes. Also, studies on early embryonic development hold forth the promise of learning how to prevent some congenital malformations and certain highly malignant tumors (e.g., hydatidiform mole) that derive from aberrant fetal tissue.

Third, as he who pays the piper calls the tune, federal support would make federal regulation and supervision of this research easy. For the government to abstain, so the argument runs, is to leave the control of research and clinical application in the hands of profit-hungry, adventurous, insensitive, reckless, or power-hungry private physicians, scientists, or drug companies, or, on the other hand, at the mercy of the vindictive, mindless, and superstitious civic groups that will interfere with this research through state and local legislation. Only through federal regulation – which, it is said, can come only with federal funding – can we have reasonable, enforceable, and uniform guidelines.

Fourth is the chauvinistic argument that the United States should lead the way in this brave new research, especially as it will apparently be going forward in other nations. Indeed, one witness testifying before the Ethics Advisory Board deplored the fact that the first test-tube baby was British and not American, and complained, in effect, that the existing moratorium on federal support has already created what one might call an “in vitro fertilization gap.” The preeminence of American science and technology, so the argument implies, is the center of our preeminence among the nations, a position that will be jeopardized if we hang back out of fear.

Let me respond to these arguments, in reverse order. Conceding – even embracing – the premise of the importance of American science for American strength and prestige, it is far from clear that failure to support this research would jeopardize American science. Certainly the use of embryo transfer to overcome infertility, though a vital matter for the couples involved, is hardly a matter of vital national interest – at least not unless and until the majority of American women are similarly infertile. The demands of international competition, admittedly often a necessary evil, should be invoked only for things that really matter; a missile gap and an embryo-transfer gap are chasms apart. In areas not crucial to our own survival, there will be many things we should allow other nations to develop, if that is their wish, without feeling obliged to join them. Moreover, one should not rush into potential folly to avoid being the last to commit it.

The argument about governmental regulation has much to recommend it. But it fails to consider that there are other safeguards against recklessness, at least in the clinical applications, known to the high-minded as the canons of medical ethics and to the cynical as liability for malpractice. Also, federal regulations attached to federal funding will not in any case regulate research done with private monies, say by the drug companies. Moreover, there are enough concerned practitioners of these new arts who would have a compelling interest in regulating their own practice, if only to escape the wrath and interference of hostile citizens’ groups in response to unsavory goings-on. The available evidence does not convince me that a sensible practice of in vitro experimentation requires regulation by the federal government.

In turning to the argument about anticipated technological powers, we face difficult calculations of unpredictable and more-or-less likely costs and benefits, and the all-important questions of priorities in the allocation of scarce resources. Here it seems useful to consider separately the techniques for generating children and the anticipated techniques for birth control or for preventing developmental anomalies and malignancies.

First, accepting that providing a child of their own to infertile couples is a worthy goal – and it is both insensitive and illogical to cite the population problem as an argument for ignoring the problem of infertility – one can nevertheless question its rank relative to other goals of medical research. One can even wonder whether it is indeed a medical goal, or a worthy goal for medicine, that is, whether alleviating infertility, especially in this way, is part of the art of healing. Just as abortion for genetic defect is a peculiar innovation in medicine (or in preventive medicine) in which a disease is treated by eliminating the patient (or, if you prefer, a disease is prevented by “preventing” the patient), so laboratory fertilization is a peculiar treatment for oviduct obstruction in that it requires the creation of a new life to “heal” an existing one. All this simply emphasizes the uniqueness of the reproductive organs in that their proper function involves other people, and calls attention to the fact that infertility is not a disease, like heart disease or stroke, even though obstruction of a normally patent tube or vessel is the proximate cause of each.

However this may be, there is a more important objection to this approach to the problem. It represents yet another instance of our thoughtless preference for expensive, high-technology, therapy-oriented approaches to disease and dysfunctions. What about spending this money on discovering the causes of infertility? What about the prevention of tubal obstruction? We complain about rising medical costs, but we insist on the most spectacular and the most technological – and thereby often the most costly – remedies.

The truth is that we do know a little about the causes of tubal obstruction, though much less than we should or could. For instance, it is estimated that at least one-third of such cases are the aftermath of pelvic inflammatory disease, caused by that uninvited venereal guest, gonococcus. Leaving aside any question about whether it makes sense for a federally funded baby to be the wage of aphrodisiac indiscretion, one can only look with wonder at a society that will have “petri-dish babies”10 before it has found a vaccine against gonorrhea.

True, there are other causes of blocked oviducts, and blocked oviducts are not the only cause of female infertility. True, it is not logically necessary to choose between prevention and cure. But practically speaking, with money for research as limited as it is, research funds targeted for the relief of infertility should certainly go first to epidemiological and preventive measures – especially where the costs of success in the high-technology cure are likely to be great.

What about these costs? I have already explored some of the nonfinancial costs, in discussing the meaning of the research for our images of humanness. Let us, for now, consider only the financial costs. How expensive was Louise Brown? We do not know, partly because Drs. Edwards and Steptoe did not tell us how many failures preceded their success, how many procedures for egg removal and for fetal monitoring were performed on Mrs. Brown, and so on. To the costs of laparoscopy, fertilization and growth in vitro, and transfer, one must add the costs of monitoring the baby’s development to check on her “normality” and, should it come, the costs of governmental regulation. A conservative estimate might place the cost of a successful pregnancy of this kind at between $5,000 and $10,000. If we use the conservative figure of 500,000 for the number of infertile women with blocked oviducts in the United States whose only hope of having children lies in in vitro fertilization,11 we reach a conservatively estimated total cost of $2.5 to $5 billion. Is it fiscally wise for the federal government to start down this road?
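The arithmetic behind that range is simply the per-pregnancy estimate multiplied by the assumed pool of candidates; both figures, as noted above, are rough, conservative guesses rather than measured values:

\[
500{,}000 \times \$5{,}000 = \$2.5 \text{ billion}, \qquad 500{,}000 \times \$10{,}000 = \$5 \text{ billion}.
\]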

Clearly not, if it is also understood that the costs of providing the service, rendered possible by a successful technology, will also be borne by the taxpayers. Nearly everyone now agrees that the kidney-machine legislation, obliging the federal government to pay an average of $25,000 to $30,000 per patient per year for kidney dialysis for anyone in need (costs to the taxpayers in 1983 were over $1 billion), is an impossible precedent – notwithstanding that individual lives have been prolonged as a result. But once the technique of in vitro fertilization and embryo transfer is developed and available, how should the baby-making be paid for? Should it be covered under medical insurance? If a national health insurance program is enacted, will and should these services be included? (Those who argue that they are part of medicine will have a hard time saying no.) Failure to do so will make this procedure available only to the well-to-do, on a fee-for-service basis. Would that be a fair alternative? Perhaps, but it is unlikely to be tolerated. Indeed, the principle of equality – equal access to equal levels of medical care – is the leading principle in the press for medical reform. One can be certain that efforts will be forthcoming to make this procedure available equally to all, independent of ability to pay, under Medicaid or national health insurance or in some other way. (Only a few years ago, an egalitarian Boston-based group concerned with infertility managed to obtain private funding to pay for artificial insemination for women on welfare!)

Much as I sympathize with the plight of infertile couples, I do not believe that they are entitled to the provision of a child at the public expense, especially now, especially at this cost, especially by a procedure that also involves so many moral difficulties. Given the many vexing dilemmas that will surely be spawned by laboratory-assisted reproduction, the federal government should not be misled by compassion to embark on this imprudent course.

In considering the federal funding of such research for its other anticipated technological benefits, independent of its clinical use in baby-making, we face a more difficult matter. In brief, as is the case with all basic research, one simply cannot predict what kinds of techniques and uses it will yield. But here, also, I think good sense would at present say that before one undertakes human in vitro fertilization to seek new methods of birth control (e.g., by developing antibodies to the human egg that would physically interfere with its fertilization), one should make adequate attempts to do this in animals. One simply cannot get sufficient numbers of human eggs to do this pioneering research well – at least not without subjecting countless women to additional risks not to their immediate benefit. Why not test this conceit first in the mouse or rabbit? Only if the results were very promising – and judged also to be relatively safe in practice – should one consider trying such things in humans. Likewise, the developmental research can and should be first carried out in animals, especially in primates. Though in vitro fertilization has yet to be achieved in monkeys, embryo transfer of in vivo fertilized eggs has been accomplished, thus permitting the relevant research to proceed. Purely on scientific grounds, the federal government ought not now to be investing its funds in this research for its promised technological benefits – benefits that, in the absence of pilot studies in animals, must be regarded as mere wishful thoughts in the imaginings of scientists.

There does remain, however, the first justification: research for the sake of knowledge itself – knowledge about cell cleavage, cell-cell and cell-environment interactions, and cell differentiation; knowledge of gene action and gene regulation; knowledge of the effects and mechanisms of action of various chemical and physical agents on growth and development; knowledge of the basic processes of fertilization and implantation. This is all knowledge worth having, and though much can be learned using animal sources – and these sources have barely begun to be sufficiently exploited – the investigation of these matters in man would, sooner or later, require the use of human embryonic material. Here, again, there are questions of research priority about which there is room for disagreement, among scientists and laymen alike. But there is also a more fundamental matter.

Is such research consistent with the ethical standards of our community? The question turns in large part on the status of the early human embryo. If, as I have argued, the early embryo is deserving of respect because of what it is, now and potentially, it is difficult to justify submitting it to invasive experiments, and especially difficult to justify creating it solely for the purpose of experimentation. The reader should test this conclusion against his or her reaction to imagining the Fertilizing Room of the Central London Hatchery or, more modestly, to encountering an incubator or refrigerator full of living embryos.

But even if this argument fails to sway our policy makers, another one should. For their decision, I remind you, is not whether in vitro fertilization should be permitted in the United States, but whether our tax dollars should encourage and foster it. One cannot, therefore, ignore the deeply held convictions of a sizable portion of our population – it may even be a majority on this issue – that regards the human embryo as protectable humanity, not to be experimented upon except for its own benefit. Never mind if these beliefs have a religious foundation – as if that should ever be a reason for dismissing them! The presence, sincerity, and depth of these beliefs, and the grave importance of their subject, are what must concern us. The holders of these beliefs have been very much alienated by the numerous court decisions and legislative enactments regarding abortion and research on fetuses. Many who by and large share their opinions about the humanity of prenatal life have with heavy heart gone along with the liberalization of abortion, out of deference to the wishes, desires, interests, or putative rights of pregnant women. But will they go along here with what they can only regard as gratuitous and willful assaults on human life, or at least on potential and salvageable human life, and on human dignity? We can ill afford to alienate them further, and it would be unstatesmanlike, to say the least, to do so, especially in a matter of so little importance to the national health and one so full of potential dangers.

Technological progress can be but one measure of our national health. Far more important is the affection and esteem in which our citizenry holds its laws and institutions. No amount of relieved infertility is worth the further disaffection and civil contention that the lifting of the moratorium on federal funding is likely to produce. People opposed to abortion and people grudgingly willing to permit women to obtain elective abortion but at their own expense will not tolerate having their tax money spent on scientific research requiring what they regard as at best cruelty, at worst murder. A wise secretary of health and human services should take this matter most seriously, and continue to refuse to lift the moratorium – at least until persuaded that the public will give its overwhelming support. Imprudence in this matter may be the worst sin of all.

An afterword

This has been for me a long and difficult exposition. Many of the arguments are hard to make. It is hard to get confident people to face unpleasant future prospects. It is hard to ask people to take seriously such “soft” matters as lineage, identity, respect, and self-respect when they are in tension with such “hard” matters as a cure for infertility or new methods of contraception. It is hard to claim respect for human life in the laboratory in a society that does not respect human life in the womb. It is hard to talk about the meaning of sexuality and embodiment in a culture that treats sex increasingly as sport and has trivialized gender, marriage, and procreation. It is hard to oppose federal funding of baby-making in a society that increasingly expects the federal government to satisfy all demands, and that – contrary to so much evidence of waste, incompetence, and corruption – continues to believe that only Uncle Sam can do it. And, finally, it is hard to speak about restraint in a culture that seems to venerate very little above man’s own attempt to master all. Here, I am afraid, is the biggest question about the reasonableness of the desire to become masters and possessors of nature, human nature included.

Here we approach the deepest meaning of in vitro fertilization. Those who have likened it to artificial insemination are only partly correct. With in vitro fertilization, the human embryo emerges for the first time from the natural darkness and privacy of its mother’s womb, where it is hidden away in mystery, into the bright light and utter publicity of the scientist’s laboratory, where it will be treated with unswerving rationality, before the clever and shameless eye of the mind and beneath the obedient and equally clever touch of the hand. What does it mean to hold the beginning of human life before your eyes, in your hands – even for five days (for the meaning does not depend on duration)? Perhaps the meaning is contained in the following story.

Long ago there was a man of great intellect and great courage. He was a remarkable man, a giant, able to answer questions that no other human being could answer, willing boldly to face any challenge or problem. He was a confident man, a masterful man. He saved his city from disaster and ruled it as a father rules his children, revered by all. But something was wrong in his city. A plague had fallen on generation; infertility afflicted plants, animals, and human beings. The man promised to uncover the cause of the plague and to cure the infertility. Resolutely, confidently, he put his sharp mind to work to solve the problem, to bring the dark things to light. No secrets, no reticences, a full public inquiry. He raged against the representatives of caution, moderation, prudence, and piety, who urged him to curtail his inquiry; he accused them of trying to usurp his rightfully earned power, to replace human and masterful control with submissive reverence. The story ends in tragedy: He solved the problem, but, in making visible and public the dark and intimate details of his origins, he ruined his life and that of his family. In the end, too late, he learns about the price of presumption, of overconfidence, of the overweening desire to master and control one’s fate. In symbolic rejection of his desire to look into everything, he punishes his eyes with self-inflicted blindness.

Sophocles seemed to suggest that such a man is always in principle – albeit unwittingly – a patricide, a regicide, and a practitioner of incest. These are the crimes of the tyrant, that misguided and vain seeker of self-sufficiency and full autonomy, who loathes being reminded of his dependence and neediness and who crushes all opposition to the assertion of his will, and whose incest is symbolic of his desire to be the godlike source of his own being. His character is his destiny.

We men of modern science may have something to learn from our philosophical forebear Oedipus. It appears that Oedipus, being the kind of man an Oedipus is (the chorus calls him a paradigm of man), had no choice but to learn through suffering. Is it really true that we, too, have no other choice?

  1. Though perhaps a justifiable exception would be a universal plague that fatally attacked all fetuses in utero. To find a cure for the end of the species may entail deliberately “producing” (and aborting) live fetuses for research.
  2. The truth of this is not decisively affected by the fact that the early embryo may soon divide and give rise to identical twins or by the fact that scientists may disaggregate and reassemble the cells of the early embryos, even mixing in cells from different embryos in the reaggregation. These unusual and artificial cases do not affect the natural norm, or the truth that a human life begins with fertilization – and does so always, if nothing abnormal occurs.
  3. Some people have suggested that the embryo be regarded in the same manner as a vital organ, salvaged from a newly dead corpse, usable for transplantation or research, and that its donation by egg and sperm donors be governed by the Uniform Anatomical Gift Act, which legitimates premortem consent for organ donation upon death. But though this acknowledges that embryos are not “things,” it is a mistake to treat embryos as mere organs, thereby overlooking that they are early stages of a complete, whole human being. The Uniform Anatomical Gift Act does not apply to, nor should it be stretched to cover, donation of gonads, gametes (male sperm or female eggs) or – especially – zygotes and embryos.
  4. There is a good chance that the problem of surplus embryos may be avoidable, for purely technical reasons. Some researchers believe that the uterine receptivity to the transferred embryo might be reduced during the particular menstrual cycle in which the ova are obtained because of the effects of the hormones given to induce superovulation. They propose that the harvested eggs be frozen and then defrosted one at a time each month for fertilization, culture, and transfer, until pregnancy is achieved. By refusing to fertilize all the eggs at once – not placing all one’s eggs in one uterine cycle – one would produce no surplus embryos, but at most only surplus eggs. This change in the procedure would make the demise of unimplanted embryos exactly analogous to the “natural” embryonic loss in ordinary reproduction.
  5. The literature on intervention in reproduction is both confused and confusing on the crucial matter of the meaning of “nature” or “the natural” and their significance for the ethical issues. It may be as much a mistake to claim that the natural has no moral force as to suggest that the natural way is best, because natural. Though shallow and slippery thought about nature, and its relation to “good,” is a likely source of these confusions, the nature of nature may itself be elusive, making it difficult for even careful thought to capture what is natural.
  6. In Roe v. Wade, the Supreme Court ruled that state action regarding abortion was unconstitutional in the first trimester of pregnancy, permissible after the first trimester in order to promote the health of the mother, and permissible in order to protect “potential life” only at viability (about 24 weeks), prior to which time the state’s interest in fetal life was deemed not “compelling.” This rather careless and arbitrary placement of boundaries is already something of an embarrassment, thanks to growing knowledge about fetal development and, especially, sophisticated procedures for performing surgery on the intrauterine fetus – even in the second trimester. Also, because viability is, in part, a matter of available outside support, technical advances – such as an artificial placenta or even less spectacular improvements in sustaining premature infants – will reveal that viability is a movable boundary and that development is a continuum without clear natural discontinuities.
  7. There are today numerous suits pending, throughout the United States, because of artificial insemination with donor semen (AID). Following divorce, the ex-husbands are refusing child support for AID children, claiming, minimally, no paternity, or maximally, that the child was the fruit of an adulterous “union.” In fact, a few states still treat AID as adultery. The importance of anonymity is revealed in the following bizarre case: A woman wanted to have a child, but abhorred the thought of marriage or of sexual relations with men. She learned a do-it-yourself technique of artificial insemination and persuaded a male acquaintance to donate his semen. Now some 10 years after this virgin birth, the case has gone to court. The semen donor is suing for visitation privileges, to see his son.
  8. To those who point out that the bond between sexuality and procreation has already been permanently cleaved by the pill, and that this is therefore an idle worry in the case of in vitro fertilization, it must be said that the pill – like earlier forms of contraception – provides only sex without babies. Babies without sex is the truly unprecedented and radical departure.
  9. Some of these questions were addressed, albeit only briefly, in an earlier article (“Making Babies,” The Public Interest, Number 26, Winter 1972). It has been pointed out to me by an astute colleague that the tone of the present piece is less passionate and more accommodating than the first, a change he regards as an ironic demonstration of the inexorable way in which we get used to, and accept, our technological nightmares. I myself share his concern. I cannot decide whether the decline of my passion is to be welcomed, that is, whether it is due to greater understanding bred of more thought and experience, or to greater callousness and the contempt of familiarity bred from too much thought and experience. Adaptiveness is our glory and our curse: As Raskolnikov put it, “Man gets used to everything, the beast!”
  10. There has been much objection, largely from the scientific community, to the phrase, “test-tube baby.” More than one commentator has deplored the exploitation of its “flesh-creeping” connotations. They point out that a flat Petri dish is used, not a test tube – as if that mattered – and that the embryo spends only a few days in the dish. But they don’t ask why the term “test-tube baby” remains the popular designation, and whether it does not embody more of the deeper truth than a more accurate, laboratory appellation. If the decisive difference is between “in the womb” or “in the lab,” the popular designation conveys it (see “An afterword” below). And it is right on target, and puts us on notice, if the justification for the present laboratory procedures tacitly also justifies future extensions, including full ectogenesis, say, if that were the only way a wombless woman could have a child of her own, without renting a human womb from a surrogate bearer.
  11. This figure is calculated from estimates that between 10 and 15 percent of all couples are involuntarily infertile, and that in more than half of these cases the cause is in the female. Blocked oviducts account for perhaps 20 percent of the causes of female infertility. Perhaps 50 percent of these women might be helped to have a child by means of reconstructive surgery on the oviducts; the remainder could conceive only with the aid of laboratory fertilization and embryo transfer. These estimates do not include additional candidates with uterine disease (who could “conceive” only by embryo transfer to surrogate-gestators), nor those with ovarian dysfunction who would need egg donation as well, nor that growing population of women who have had tubal ligations and who could later turn to in vitro fertilization. It is also worth noting that not all the infertile couples are childless; indeed, a surprising number are seeking to enlarge an existing family.
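  Purely for illustration, the chain of estimates in note 11 can be combined as a single rough product; the midpoints used below are not the note’s own figures but assumed values taken from the middle of its stated ranges:

  \[
  \underbrace{0.125}_{\text{infertile couples}} \times \underbrace{0.5}_{\text{female cause}} \times \underbrace{0.2}_{\text{blocked oviducts}} \times \underbrace{0.5}_{\text{beyond surgical repair}} \approx 0.003
  \]

  On these assumptions, something on the order of three couples in a thousand would be candidates for laboratory fertilization on the tubal-blockage indication alone, before counting the additional groups mentioned at the end of the note.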

Leon Kass, M.D., is the Addie Clark Harding Professor in the Committee on Social Thought and the College at the University of Chicago and the Roger and Susan Hertog Fellow at the American Enterprise Institute. President Bush has appointed him to chair the President’s Council on Bioethics.