(7) Environmental Law, Policies & Protection

Environment and economics

Beyond expertise: the political (and popular) dimension of environmental protection

We have discussed two broad approaches to assessing environmental decisions: cost-benefit analysis (CBA) and risk assessment. Such formal technical approaches are increasingly prevalent in environmental decision making.46

The imposition of technical approaches is one attempt to enhance the accountability of decision making; we can see whether and how the particular technical approach has been followed by the administration.47 Numerous examples of formalized approaches to risk assessment or CBA in government policy have appeared in recent years. The establishment of an Environment Agency in the United Kingdom put this on a statutory footing.48

Environment Act 1995, s. 39

(1) Each new Agency–

(a) in considering whether or not to exercise any power conferred upon it by or under any enactment, or

(b) in deciding the manner in which to exercise any such power, shall, unless and to the extent that it is unreasonable for it to do so in view of the nature or purpose of the power or in the circumstances of the particular case, take into account the likely costs and benefits of the exercise or non-exercise of the power or its exercise in the manner in question.
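To make concrete the kind of comparison the section 39 duty contemplates, the following is a minimal sketch, in Python, of netting estimated benefits against estimated costs for exercising and for not exercising a power. The figures, the option labels and the net_benefit helper are purely hypothetical illustrations, not drawn from Agency guidance or practice.

```python
# Minimal sketch of the arithmetic behind a comparison of the likely costs and
# benefits of exercising or not exercising a regulatory power. All figures are
# hypothetical; a real appraisal involves valuation, discounting and
# distributional judgments that this toy example omits.

def net_benefit(benefits: dict[str, float], costs: dict[str, float]) -> float:
    """Return total benefits minus total costs (in pounds)."""
    return sum(benefits.values()) - sum(costs.values())

# Option 1: exercise the power (e.g. tighten a discharge consent).
exercise = net_benefit(
    benefits={"avoided_health_damage": 4_000_000, "river_quality": 1_500_000},
    costs={"compliance_costs": 3_200_000, "administration": 400_000},
)

# Option 2: do not exercise the power.
do_nothing = net_benefit(benefits={}, costs={"continued_damage": 2_500_000})

print(f"Exercise the power: net benefit £{exercise:,.0f}")
print(f"Do not exercise:    net benefit £{do_nothing:,.0f}")
```

Even at this toy scale, every number rests on prior valuation choices; the criticism in the next paragraph turns on exactly that point.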

However, these formal technical approaches can risk a dilution and misplacement of accountability, by providing an apparently inevitable and objective technical ‘answer’ to what is actually a rather complex and normative/political question.

Accordingly, we see competing efforts to broaden the input into decision making, sometimes by enhancing public participation. It is only rarely now urged that environmental decisions should be taken on the basis solely of technical information, be that from the natural sciences, from risk assessment or from CBA. We could not explain this evolution by reference to a single event, but it is fair to say that the discovery of a link between bovine spongiform encephalopathy (BSE or ‘mad cow disease’) in cattle and new variant Creutzfeldt-Jakob disease (vCJD) in human beings49 led to a sea change in regulation in the United Kingdom and the EU. The United Kingdom government had focused on the safety of beef in a way that tended to suggest that transmissibility of BSE to humans was not possible. The public reaction to the resulting crisis contributed to greater political awareness of the difficulties of risk regulation, and considerable political attention has since been paid to uncertainty in the scientific process, to public perceptions of risks and to public values that fall outside traditional scientific assessments. A number of official reports brought these developments into the mainstream,50 and whilst the importance of science to the policy process is kept firmly in view, the trend is to emphasize a more open and participative approach to environmental decision making, and the significance of moral, social and ethical concerns alongside technical issues.

Sunstein writes from the United States, and ‘celebrates technocracy’; nevertheless, he ultimately returns to the political context of environmental decision making.

Sunstein, Risk and Reason, p. 294

In many ways, this book has been a celebration of the centrality of science and expertise to the law of risk. Indeed, I have attempted to defend a highly technocratic approach to risk regulation, and given reasons to be sharply skeptical of populism, at least in this domain of the law. But I have also offered two objections to a purely technocratic approach to risk reduction.

First, ours is a deliberative democracy in which reflective public judgment plays a large role. Where judgments of value are to be made, they should be made by the citizenry, not by experts. Some deaths are particularly bad, and these deserve unusual attention. It would indeed [be] obtuse to treat all risks as if they were the same, regardless of context and quality. People are right to insist that it matters whether a risk is voluntarily incurred. When it is especially easy to avoid certain risks, government should not spend a great deal of time and effort in response. People are also right to say that fair distribution of risk matters. Thus I have urged that as part of a cost-benefit analysis, it is important to know who would gain and who would lose – and that government legitimately seeks to minimize the burdens faced by the most disadvantaged members of society and to maximize the benefits that they receive.

...

Second, technocrats tend to ignore the fact that to work well, a regulatory system needs one thing above all – public support and confidence. This is so whether or not a lack of confidence would be fully rational. To the extent that government relies on statistical evidence alone, it is unlikely to promote its own goals. Partly this is because people will assess that evidence in light of their own motivations and their inevitably limited capacities.

Regulators who are alert to the importance of both confidence and trust will do what they can to provide information in a way that is well-tailored to how people think about risk – and that tries to educate people when their usual ways of thinking lead them astray. In some circumstances, an understanding of how people think will lead government toward approaches that technocrats will not have on their view screen. We might say that good technocrats need to know not only economics and science but psychology as well.

In this extract, Sunstein’s second objection to basing regulation on expert judgment alone is connected with the loss of public trust in science emanating from government (and industry); enhancing the openness of and participation in regulation is a very frequent prescription for such a loss of trust. Sunstein’s first objection is essentially about the role of public values in decision making.

A need to go beyond an isolated exercise of expertise rests fundamentally on the fact that environmental decisions can involve the most profound political questions and value judgments. They raise questions about the sort of world in which we want to live, and what we are prepared to pay for that world.

Questions such as the ethical implications of new technology or development, or the appropriate role of government in social life, can be captured by technical analysis only problematically, and with great uncertainty and controversy.

Moreover, environmental regulation distributes burden and benefit, and distributional questions are at the heart of any ordinary approach to politics. Whilst economists are confident of their ability to measure ethical or distributional questions in monetary terms (or to add useful qualitative statements to a CBA), the political process seems better suited to capturing the nuances of environmental decisions. Experts, in whatever field, should have no monopoly on value judgments.

Just as values are necessarily implicated in environmental decision making, science cannot be successfully isolated from those values. The impossibility of undertaking wholly objective technical and scientific assessments is very broadly accepted. When assessments are made by practitioners of the natural sciences, or by risk assessors or economists, even in complete accord with the best professional practice, the values of the individual and the profession are likely to be imperceptibly introduced into the assessment process. Technical assessments are shaped by the values of the practitioners, and the judgments of the relevant profession.51 Assumptions will be made at every stage of the process, the accuracy or appropriateness of which is likely to be debated: estimates of the effect of a low, but perhaps lengthy, human exposure to a particular pollutant may be made on the basis of high exposure of laboratory animals; predictions will rely on numerous techniques requiring the exercise of professional judgment. And epidemiological studies, statistical analyses of human populations that relate degree of exposure to a risk, have their critics: ‘a good working definition of a catastrophe is an effect so large that even an epidemiological study can detect it’.52
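The extrapolation point can be made concrete with a deliberately simplified sketch: the same (hypothetical) high-dose animal data yield very different low-dose human risk estimates depending on whether the assessor assumes a linear no-threshold model or a threshold model. Both models, and all of the figures, are illustrative assumptions rather than real toxicological data.

```python
# Illustrative only: two common modelling assumptions applied to the same
# hypothetical high-dose animal data give very different estimates of risk
# at a low, long-term human exposure.

observed_dose = 100.0   # mg/kg/day given to laboratory animals (hypothetical)
observed_risk = 0.20    # excess lifetime risk observed at that dose (hypothetical)
human_dose = 0.5        # assumed long-term human exposure, mg/kg/day

def linear_no_threshold(dose: float) -> float:
    """Risk scales in proportion to dose, all the way down to zero."""
    return observed_risk * dose / observed_dose

def threshold_model(dose: float, threshold: float = 5.0) -> float:
    """No excess risk below an assumed threshold dose."""
    if dose <= threshold:
        return 0.0
    return observed_risk * (dose - threshold) / (observed_dose - threshold)

print(f"Linear no-threshold estimate: {linear_no_threshold(human_dose):.4f}")
print(f"Threshold-model estimate:     {threshold_model(human_dose):.4f}")
# The choice between the two models is a matter of professional judgment,
# not something the animal data themselves can settle.
```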

Sally Eden, ‘Public Participation in Environmental Policy: Considering Scientific, Counter-Scientific, and Non-Scientific Contributions’ (1996) 5 Public Understanding of Science 183, p. 187

Perhaps the best (and best known) example of a global environmental issue influenced by science and brought to international policy attention is stratospheric ozone depletion, a problem derived from modernization (in this case, from new chemical compounds: chlorofluorocarbons or CFCs) and primarily constructed in terms of atmospheric chemistry: the observations are taken by a small number of specialists and then communicated to the public and other groups in the environmental debate. The globalized change is located in the results from teams like the British Antarctic Survey, not in the everyday experiences of members of the public. So, it is not too simplistic to say that without the science of atmospheric chemistry, we would not see any ozone problem. Moreover, science at first calculated out these ozone depletion measurements by regarding them as errors, only later to regard them as ‘facts’ once the techniques of measurement and its interpretation were (internally) changed, emphasizing science’s hold on the identification of the ‘problem’:

The debacle of the ‘hole’ in the ozone layer, undiscovered for so many years because its observers programmed their computer to ignore measurements that diverged too greatly from expected norms, notoriously proved how highly ‘interpretive’ such climatic experiments can be.53

The practitioners of the developing specialism of ‘atmospheric chemistry’ had to make judgments about the interpretation of their results. These judgments created room for new errors, here delaying discovery of the ‘hole’ in the ozone layer. Equally importantly, these judgments demonstrate how even the ‘best’ science involves the inevitable introduction of choices and values. The exercise of personal and professional judgment is crucial in day-to-day scientific activity, which in turn is crucial to environmental protection.

Judgment is also an essential element of the practice of CBA, at every stage from deciding whose costs and values count (that is, the scope of the ‘community of concern’)54 to interpreting the results. For example, a survey of ‘willingness to pay’ (that is, what individuals would pay to keep an environmental resource) provides consistently lower figures than a survey of ‘willingness to accept’ (that is, the sum that would be accepted as compensation for the destruction of a resource); deciding which question to ask in these circumstances is quite blatantly not a neutral task. And every effort by the economist to adjust a calculation to reflect value issues, such as distributional or ethical concerns, involves an incorporation of non-economic criteria and professional judgment that undermines the apparent objectivity and inevitability of the exercise.
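A stylized sketch of how the choice between the two survey questions feeds through to the bottom line of a CBA follows; the household numbers, survey means and project cost are invented for illustration only.

```python
# Illustrative only: hypothetical mean survey responses valuing the same
# stretch of river, framed either as willingness to pay (WTP) to keep it or
# as willingness to accept (WTA) compensation for its loss.

households = 50_000
mean_wtp = 12.0           # £ per household per year: "what would you pay to keep it?"
mean_wta = 45.0           # £ per household per year: "what would you accept to lose it?"
project_cost = 1_000_000  # £ per year to protect the river (hypothetical)

benefit_wtp = households * mean_wtp
benefit_wta = households * mean_wta

print(f"WTP framing: benefit £{benefit_wtp:,.0f}, net £{benefit_wtp - project_cost:,.0f}")
print(f"WTA framing: benefit £{benefit_wta:,.0f}, net £{benefit_wta - project_cost:,.0f}")
# The same protection measure fails the cost-benefit test on one framing and
# passes on the other: choosing which question to ask is itself a value judgment.
```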

Mark Sagoff provides a classic and powerful criticism of economic understandings of environmental problems and the associated rise of CBA.55 In the process, he explores the political nature of environmental decision making, arguing that political value judgments, rather than economic calculations, are required to justify environmental decisions. It is not necessary to accept every part of Sagoff’s approach to environmental politics to agree that ‘political questions are not all economic’,56 or that environmental regulation should be the product of rational discussion and debate, rather than economic analysis alone.

None of this is to say that technical analyses such as risk assessment or CBA cannot feed into that political process. And Cass Sunstein responds positively to the challenges, suggesting a range of ways in which the accepted limitations of CBA can be acknowledged through its practice.

Sunstein, Risk and Reason, pp. 292–3

• The magnitude of costs and the magnitude of benefits are not all that matters. Distributional considerations are indeed relevant ...

• Cost-benefit analysis can give an illusion of precision, at least if existing knowledge does not permit us to specify benefits or costs. In these circumstances, the best approach is not to reject cost-benefit analysis, but to offer ranges, with a full appreciation of the possibility of uncertainty.

• Some people claim, rightly, that social goods are ‘incommensurable’, in the sense that we do not value all goods in the same way that we value money. A human life is not really equivalent to $6.1 million, or whatever economic amount we choose to spend to prevent a statistical death. Beaches and parks and wolves and seals are not reducible to their economic value. For this reason, cost-benefit analysis, of the sort that I have urged here, should include qualitative as well as quantitative descriptions of the consequences of regulation. We should not think that the monetary ‘bottom line’ is anything magical; it is simply a helpful input into the decision.

• Cost-benefit analysis does not respect ‘intuitive toxicology’, and for this reason it might seem to disregard people’s sense of risk and danger. The point is correct, but it is no objection. Policy should ordinarily be rooted in evidence, not baseless fear or unwarranted optimism.

• Cost-benefit analysis might seem to treat human lives cavalierly, simply because it places a monetary value on statistical risks. But any government is required to assign some noninfinite value to statistical risks. It is best for government to be clear about what it is doing and why it is doing it. If the amounts are too low, then government is indeed treating lives cavalierly, and the amounts should be increased.

• Cost-benefit analysis might seem to give insufficient weight to the future and in particular to the interests of future generations. I have urged that a sensible cost-benefit analysis does indeed give weight to the future, though the selection of the appropriate discount rate raises many conundrums.

• Cost-benefit analysis might seem to be undemocratic, especially insofar as it allows policy to be set in large part by experts. I have argued that, on the contrary, cost-benefit analysis is an important tool for promoting democratic goals because it ensures that some account of the likely consequences of regulation will be placed before officials and the public at large. Experts are crucial to sensible policy simply because of their expertise. If public officials want to proceed even though the costs do not justify the benefits, they are permitted to do that, so long as they can generate a good reason for their decision.

• Cost-benefit analysis might be criticized insofar as it relies on private willingness to pay as the basis for calculating both costs and benefits. Sometimes people are poorly informed, and hence are willing to pay little for significant benefits. Sometimes people are unwilling to pay for certain goods simply because their preferences have adapted to the status quo, in which they face real deprivation. Sometimes private willingness to pay will understate benefits, if people are willing to pay more for a good if other people are going to be paying for them too. Poor people might have little willingness to pay simply because they have little ability to pay. In many contexts, these objections have force. There is no special magic in the idea of willingness to pay. I have suggested that government needs some numbers from which to begin its analysis, that private willingness to pay is a good start, but that government can depart from that number if the context shows a sensible reason to do so. Current practice shows considerable good sense on this count.

• Some people fear that, as a practical matter, cost-benefit analysis will simply paralyze government and prevent it from issuing regulations that would do more good than harm. If this is true, then the pragmatic argument for cost-benefit analysis has been defeated. Any effort to ensure cost-benefit balancing should ensure that it does not produce ‘paralysis by analysis’. I have urged that the record suggests that cost-benefit balancing does not, in fact, produce paralysis.

• Cost-benefit analysis might be challenged as a form of centralized government planning, likely to overload government’s ability to compile the necessary information. The objection too has much force, especially in light of the fact that government’s own incentives are not always to be trusted. The simplest response to this objection relies on the absence of good alternatives.

It is increasingly clear that environmental questions are political questions in the broadest sense, and, as such, they need to be resolved by the political processes in place in any particular society. A dilemma is, however, posed by the simultaneous imperatives of political decision making that responds to public values, and the usefulness or even necessity of technical information in a world of hard choices. A very common governmental response to this dilemma is to institute a division between risk assessment, a technical exercise, and risk management, a political exercise. The extract from Sunstein, in providing space for explaining decisions that do not fit within the results of the CBA, recognizes this distinction. The Royal Commission on Environmental Pollution have moved this debate beyond academia, providing practical routes by which a broader range of values might be fed into the process of standard setting.

Royal Commission on Environmental Pollution, Twenty-first Report, Setting Environmental Standards Cm 4053 (1998), p. 122

8.51 We have noted previously that a failure to make a clear separation between policy and analysis (which, in the environmental field, has predominantly been scientific analysis) ... has had a pernicious effect on trust in the quality and integrity of both expert advice and the decision taken. There are several reasons why a separation of the scientific assessment stage from the policy-making stage is essential. It is important that all the component analyses restrict themselves to setting out the information which will form the raw material of the decision, and do not attempt to displace that decision. Even in cases where the scientific assessment may appear to lead directly to the deliberative procedure from which a standard will emerge, there must always be some consideration of the practicality, cost, legality, and morality of the decision, however intuitive this consideration may be in practice. Rigour and accountability are better served if these considerations are kept explicit and distinct.

...

8.53 The knowledge provided by any single discipline is never sufficient to determine the precise level of a standard. By recommending that a distinction be made between analysis and policy making, we are not saying that scientists and other analysts are not qualified to exercise practical judgment, nor that they should not do so. We are suggesting that they should make it clear when they are speaking as scientists (or whatever) and when they are exercising practical judgment.

This distinction is commonly regarded as good practice at United Kingdom and EU level.57 According to Pfizer, when a Community institution seeks the opinion of a Community scientific advisor, it is not bound to accept the conclusions reached in that opinion, but to the extent that the Community institution disregards the opinion, ‘it must provide specific reasons for its findings by comparison with those made in the opinion and its statement of reasons must explain why it is disregarding the latter. The statement of reasons must be of a scientific level at least commensurate with that of the opinion in question.’58 Scientific advisors are denied the final word because, whilst they are expert bodies, they have neither democratic legitimacy nor political responsibilities. Decisions on ‘acceptable’ risk are political. Along similar lines, when Community institutions are required to assess complex facts of a technical or scientific nature, they can only adopt a preventive measure without consulting the relevant EU-level scientific committee in ‘exceptional situations’, and where there are otherwise adequate guarantees of scientific objectivity.59

The recognition of the political significance of risk management is broadly welcome. However, the distinction between the technical and the political is not as clear cut as the dichotomy between risk assessment and risk management might suggest. In particular, the language assumes that the technical risk assessment is value free and neutral, whilst, as observed above, technical assessments are full of value judgments and are shaped by regulatory context.60 There is a danger that entrenching the divisions between political decision makers and experts will leave the uncertainties and value judgments of the prior technical stage of risk assessment unexamined, although the emphasis generally seems to be on transparency at all stages. In addition, the late involvement of the policy makers means that they may be presented with very limited options; more generally, they may tend to prefer the scientific evidence in any event.

46. See for example Royal Commission on Environmental Pollution, Twenty-first Report, Setting Environmental Standards, Cm 4053 (1998).

47. Elizabeth Fisher, ‘Drowning by Numbers: The Pursuit of Accountable Public Administration’ (2000) 20 Oxford Journal of Legal Studies 109.

48. On the Environment Agency, see further Ch. 8, pp. 334–9.

49. See generally Gavin Little, ‘BSE and the Regulation of Risk’ (2001) 64 Modern Law Review 730.

50. Royal Commission on Environmental Pollution, above, n. 46; House of Lords Select Committee on Science and Technology, 3rd Report Session 2000–01, Science and Society HL 57; Cabinet Office, Risk: Improving Government’s Capacity to Handle Risk and Uncertainty (2002).

51. See, for example, Sheila Jasanoff, The Fifth Branch: Science Advisors as Policymakers (Harvard University Press, 1990); Kristin Shrader-Frechette, Risk and Rationality (University of California Press, 1991).

52. Dryzek, Politics of the Earth, p. 73 (quoting Aaron Wildavsky, But is it True? (Harvard University Press, 1995), p. 254, in turn quoting David Ozonoff).

53. Andrew Ross, ‘Is Global Culture Warming Up?’ (1991) 28 Social Text 18.

54. Graham Smith, Deliberative Democracy and the Environment (Routledge, 2003), pp. 39-45; see also Chris Hilson, ‘Greening Citizenship: Boundaries of Membership and the Environment’ (2001) 13 Journal of Environmental Law 335, discussing different values for non-use value of a stretch of river, depending on whether all of the water company’s customers were included, or just those within the affected catchment area.

55. The Economy of the Earth.

56. Sagoff, The Economy of the Earth, Ch. 2.

57. European Commission, Communication on the Precautionary Principle COM (2000) 1 final; DEFRA, Guidelines for Environmental Risk Assessment.

58. Pfizer, para. 199.

59. Alpharma, para. 213.

60. Les Levidow and Claire Marris, ‘Science and Governance in Europe: Lessons from the Case of Agricultural Biotechnology’ (2001) Science and Public Policy 350.
