Monday, August 30, 2010

History of Computer Ethics

A Very Short History of Computer Ethics
Terrell Ward Bynum
[This article was published in the Summer 2000 issue of the American Philosophical Association’s Newsletter on Philosophy and Computing]
The Foundation of Computer Ethics
Computer ethics as a field of study was founded by MIT professor Norbert Wiener during World War Two (early 1940s), while he was helping to develop an antiaircraft cannon capable of shooting down fast warplanes. One part of the cannon had to “perceive” and track an airplane, then calculate its likely trajectory and “talk” to another part of the cannon to fire the shells. The engineering challenge of this project caused Wiener and some colleagues to create a new branch of science, which Wiener called “cybernetics” – the science of information feedback systems. The concepts of cybernetics, when combined with the digital computers being created at that time, led Wiener to draw some remarkably insightful ethical conclusions. He perceptively foresaw revolutionary social and ethical consequences. In 1948, for example, in his book Cybernetics: or Control and Communication in the Animal and the Machine, he said the following:
It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams but might very well be, respectively, the readings of artificial sense organs, such as photoelectric cells or thermometers, and the performance of motors or solenoids.... we are already in a position to construct artificial machines of almost any degree of elaborateness of performance. Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil. (pp. 27 – 28)
In 1950 Wiener published his monumental computer ethics book, The Human Use of Human Beings, which not only established him as the founder of computer ethics, but far more importantly, laid down a comprehensive computer ethics foundation which remains today – half a century later – a powerful basis for computer ethics research and analysis. (However, he did not use the name “computer ethics” to describe what he was doing.) His book includes (1) an account of the purpose of a human life, (2) four principles of justice, (3) a powerful method for doing applied ethics, (4) discussions of the fundamental questions of computer ethics, and (5) examples of key computer ethics topics. (Wiener 1950/1954, see also Bynum 1999)

Wiener made it clear that, on his view, the integration of computer technology into society will constitute the remaking of society – the “second industrial revolution” – destined to affect every major aspect of life. The computer revolution will be a multifaceted, ongoing process that will take decades of effort and will radically change everything. Such a vast undertaking will necessarily include a wide diversity of tasks and challenges. Workers must adjust to radical changes in the workplace; governments must establish new laws and regulations; industry and business must create new policies and practices; professional organizations must develop new codes of conduct for their members; sociologists and psychologists must study and understand new social and psychological phenomena; and philosophers must rethink and redefine old social and ethical concepts.

Neglect, Then a Reawakening
Unfortunately, this complex and important new area of applied ethics, which Wiener founded in the 1940s, remained nearly undeveloped and unexplored until the mid-1960s. By then, important social and ethical consequences of computer technology had already become manifest, and interest in computer-related ethical issues began to grow. Computer-aided bank robberies and other crimes attracted the attention of Donn Parker, who wrote books and articles on computer crime and proposed to the Association for Computing Machinery that it adopt a code of ethics for its members. The ACM appointed Parker to head a committee to create such a code, which was adopted by that professional organization in 1973. (The ACM Code was revised in the early 1980s and again in the early 1990s.)

Also in the mid-1960s, computer-enabled invasions of privacy by “big-brother” government agencies became a public worry and led to books, articles, government studies, and proposed privacy legislation. By the mid-1970s, new privacy laws and computer crime laws had been enacted in America and in Europe, and organizations of computer professionals were adopting codes of conduct for their members. During this same period, MIT computer scientist Joseph Weizenbaum created a computer program called ELIZA, intended to crudely simulate “a Rogerian psychotherapist engaged in an initial interview with a patient.” Weizenbaum was appalled by the reaction that people had to his simple computer program. Some psychiatrists, for example, viewed his results as evidence that computers would soon provide automated psychotherapy; and certain students and staff at MIT even became emotionally involved with the computer and shared their intimate thoughts with it! Concerned by the ethical implications of such a response, Weizenbaum wrote the book Computer Power and Human Reason (1976), which is now considered a classic in computer ethics.

In 1976, while teaching a medical ethics course, Walter Maner noticed that, when computers are involved in medical ethics cases, new ethically important considerations often arise. Further examination of this phenomenon convinced Maner that there was a need for a separate branch of applied ethics, which he dubbed “computer ethics.” (Wiener had not used this term, nor was it in common use before Maner.) Maner defined computer ethics as that branch of applied ethics which studies ethical problems “aggravated, transformed or created by computer technology.” He developed a university course, traveled around America giving speeches and conducting workshops at conferences, and published his Starter Kit on Teaching Computer Ethics. By the early 1980s, the name “computer ethics” had caught on, and other scholars began to develop this “new” field of applied ethics.

Among those whom Maner inspired in 1978 was a workshop attendee, Terrell Ward Bynum (the present author). In 1979, Bynum developed curriculum materials and a university course, and in the early 1980s he gave speeches and ran workshops at a variety of conferences across America. In 1983, as Editor of the journal Metaphilosophy, he launched an essay competition to generate interest in computer ethics and to create a special issue of the journal. In 1985, that special issue – entitled Computers and Ethics – was published; and it quickly became the widest-selling issue in the journal’s history. The lead article – and winner of the essay competition – was James Moor’s now-classic essay, “What Is Computer Ethics?”, in which he described computer ethics like this:
A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology. (p. 266)
In Moor’s view, computer ethics includes (1) identification of computer-generated policy vacuums, (2) clarification of conceptual muddles, (3) formulation of policies for the use of computer technology, and (4) ethical justification of such policies.
A Standard-setting Textbook
The year 1985 was a watershed for computer ethics, not only because of the special issue of Metaphilosophy and Moor’s classic article, but also because Deborah Johnson published the first major textbook in the field (Computer Ethics), as well as an edited collection of readings with John Snapper (Ethical Issues in the Use of Computers). Johnson’s book Computer Ethics rapidly established itself as the standard-setting textbook in university courses, and it set the research agenda in computer ethics for nearly a decade.

In her book, Johnson defined computer ethics as a field which examines ways that computers “pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms.” (p. 1) Unlike Maner (see Maner 1996), with whom she had discussed computer ethics in the late 1970s, Johnson did not think that computers created wholly new ethical problems, but rather gave a “new twist” to already familiar issues such as ownership, power, privacy and responsibility.

Exponential Growth
Since 1985, the field of computer ethics has grown exponentially. New university courses, research centers, conferences, articles and textbooks have appeared, and the field has attracted a wide diversity of additional scholars and topics. For example, thinkers like Donald Gotterbarn, Keith Miller, Simon Rogerson, and Dianne Martin – as well as organizations like Computer Professionals for Social Responsibility, the Electronic Frontier Foundation and ACM-SIGCAS – have spearheaded developments relevant to computing and professional responsibility. Developments in Europe and Australia have been especially noteworthy, including new research centers in England, Poland, Holland, and Italy; the ETHICOMP series of conferences led by Simon Rogerson and the present writer; the CEPE conferences founded by Jeroen van den Hoven; and the Australian Institute of Computer Ethics headed by John Weckert and Chris Simpson.
The Future of Computer Ethics?
Given the explosive growth of computer ethics during the past two decades, the field appears to have a very robust and significant future. How can it be, then, that two important thinkers – Krystyna Górniak-Kocikowska and Deborah Johnson – have recently argued that computer ethics will disappear as a branch of applied ethics?

The Górniak Hypothesis – In her 1995 ETHICOMP paper, Górniak predicted that computer ethics, which is currently considered just a branch of applied ethics, will eventually evolve into something much more – a system of global ethics applicable in every culture on earth:
Just as the major ethical theories of Bentham and Kant were developed in response to the printing press revolution, so a new ethical theory is likely to emerge from computer ethics in response to the computer revolution. The newly emerging field of information ethics, therefore, is much more important than even its founders and advocates believe. (p. 177)

The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire Globe. It will also be global in the sense that it will address the totality of human actions and relations. (p. 179)

Computers do not know borders. Computer networks… have a truly global character. Hence, when we are talking about computer ethics, we are talking about the emerging global ethic. (p. 186)

…the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of or maybe even all computer users. This means that in the future, the rules of computer ethics should be respected by the majority (or all) of the human inhabitants of the Earth.... In other words, computer ethics will become universal, it will be a global ethic. (p. 187)
According to the Górniak hypothesis, “local” ethical theories like Europe’s Benthamite and Kantian systems and the ethical systems of other cultures in Asia, Africa, the Pacific Islands, etc., will eventually be superseded by a global ethics evolving from today’s computer ethics. “Computer” ethics, then, will become the “ordinary” ethics of the information age.

The Johnson Hypothesis – In her 1999 ETHICOMP paper, Deborah Johnson expressed a view which, at first sight, may seem to be the same as Górniak’s:

I offer you a picture of computer ethics in which computer ethics as such disappears.... We will be able to say both that computer ethics has become ordinary ethics and that ordinary ethics has become computer ethics. (pp. 17 – 18)

But a closer look at the Johnson hypothesis reveals that it is very different from Górniak’s. On Górniak’s view, the computer revolution will eventually lead to a new ethical system, global and cross-cultural in nature. The new “ethics for the information age,” according to Górniak, will supplant parochial theories like Bentham’s and Kant’s – theories based on relatively isolated cultures in Europe, Asia, Africa, and other “local” regions of the globe.

Johnson’s hypothesis, in reality, is essentially the opposite of Górniak’s. It is another way of stating Johnson’s often-defended view that computer ethics concerns “new species of generic moral problems.” It assumes that computer ethics, rather than replacing theories like Bentham’s and Kant’s, will continue to presuppose them. Current ethical theories and principles, according to Johnson, will remain the bedrock foundation of ethical thinking and analysis, and the computer revolution will not lead to a revolution in ethics.

At the dawn of the 21st century, then, computer ethics thinkers have offered the world two very different views of the likely ethical relevance of computer technology. The Wiener-Maner-Górniak point of view sees computer technology as ethically revolutionary, requiring human beings to reexamine the foundations of ethics and the very definition of a human life. The more conservative Johnson perspective is that fundamental ethical theories will remain unaffected – that computer ethics issues are simply the same old ethics questions with a new twist – and consequently computer ethics as a distinct branch of applied philosophy will ultimately disappear.
References
Terrell Ward Bynum, ed. (1985), Computers and Ethics, Basil Blackwell (published as the October 1985 issue of Metaphilosophy).

Terrell Ward Bynum (1999), “The Foundation of Computer Ethics,” a keynote address at the AICEC99 Conference, Melbourne, Australia, July 1999.

Krystyna Górniak-Kocikowska (1996), “The Computer Revolution and the Problem of Global Ethics,” in Terrell Ward Bynum and Simon Rogerson, eds., Global Information Ethics, Opragen Publications, 1996, pp. 177 – 190 (the April 1996 issue of Science and Engineering Ethics).

Deborah G. Johnson (1985), Computer Ethics, Prentice-Hall. (Second Edition 1994).

Deborah G. Johnson (1999), “Computer Ethics in the 21st Century,” a keynote address at ETHICOMP99, Rome, Italy, October 1999.

Deborah G. Johnson and John W. Snapper, eds. (1985), Ethical Issues in the Use of Computers, Wadsworth.

Walter Maner (1978), Starter Kit on Teaching Computer Ethics (self-published in 1978; republished in 1980 by Helvetia Press in cooperation with the National Information and Resource Center for Teaching Philosophy).

Walter Maner (1996), “Unique Ethical Problems in Information Technology,” in Terrell Ward Bynum and Simon Rogerson, eds., Global Information Ethics, Opragen Publications, 1996, pp. 137 – 152 (the April 1996 issue of Science and Engineering Ethics).

James H. Moor (1985), “What Is Computer Ethics?” in Terrell Ward Bynum, ed. (1985), Computers and Ethics, Basil Blackwell, pp. 266 – 275.

Joseph Weizenbaum (1976), Computer Power and Human Reason: From Judgment to Calculation, Freeman.

Norbert Wiener (1948), Cybernetics: or Control and Communication in the Animal and the Machine, Technology Press.

Norbert Wiener (1950/1954), The Human Use of Human Beings: Cybernetics and Society, Houghton Mifflin, 1950. (Second Edition Revised, Doubleday Anchor, 1954. This later edition is better and more complete from a computer ethics point of view.)

