Cyber Ethics Question & Answer 1
Cyber Ethics

Part 1:
Questions:
1. (8 points) What, exactly, is cyberethics? How is it different from and similar to computer ethics, information ethics, and Internet ethics?
2. (4 points) What is meant by the term cyber technology? How is it similar to and different from computer technology?
3. (8 points) Identify and briefly describe some key aspects of each of the “four phases” in the evolution of cyber ethics as a field of applied ethics.
4. (6 points) Explain what Moor means by the expressions “logical malleability,” “policy vacuum,” and “conceptual muddle.”
5. (4 points) What is applied ethics and how is it different from theoretical ethics?
Make sure to:

1) Copy and paste the questions into a Word document.
2) Keep the questions in bold font.
3) Put answers in regular (not bold) font; each answer should follow its question.
Part 2:
List and critically analyze some ethical concerns that arise in the Amy Boyer cyberstalking incident. What actions, if any, can/should ISPs be expected to take to avoid future online stalking incidents that might result in a stalkee’s death? Explain.
What Is Cyberethics?

Cyberethics is the study of moral, legal, and social issues involving cybertechnology.
As a field of applied ethics, it:
examines the impact that cybertechnology has on our social, legal, and moral systems.
evaluates the social policies and laws that we frame in response to issues generated by the development and use of cybertechnology.
Cybertechnology refers to a wide range of computing and communications devices
– from standalone computers, to “connected” or networked computing and communications technologies, to the Internet itself.
Cybertechnologies include (but are not limited to): digital hand-held devices (including PDAs);
networked computers (desktops and laptops);
stand-alone computers.
Cybertechnology (Continued)
Networked devices can be connected directly to the Internet.
They also can be connected to other devices through one or more privately owned computer networks.
Privately owned networks include both: Local Area Networks (LANs),
Wide Area Networks (WANs).
Why the term cyberethics?
Cyberethics is a more accurate label than computer ethics, which can suggest the study of ethical issues limited either to:
computing machines, or
computing professionals.
Cyberethics is also more accurate than Internet ethics, which is limited only to ethical issues affecting computer networks.
The Evolution of Cybertechnology and Cyberethics: Four Phases
Computer technology emerged in the late 1940s, when some analysts confidently predicted that no more than six computers would ever need to be built.
The first phase of computing technology (1950s and 1960s) consisted mainly of huge mainframe computers that were unconnected (i.e., stand-alone machines).
One ethical/social question that arose during Phase 1 dealt with the impact of computing machines as “giant brains” and what that meant for being human.
Another question raised during this phase concerned privacy threats and the fear of Big Brother.
The Evolution of Cybertechnology and Cyberethics (Continued)
In Phase 2 (1970s and 1980s), computing machines and communications devices began to converge.
Mainframe computers and personal computers could be linked together via privately owned networks such as LANs and WANs.
Privacy concerns arose because confidential information could easily be exchanged between networked databases.
Intellectual property issues emerged because personal computers could easily duplicate proprietary software programs.
Computer crime was possible because people could break into the computers of large organizations.
The Evolution of Cybertechnology and Cyberethics (Continued)
During Phase 3 (1990-present), the availability of Internet access to the general public has increased significantly.
This has been facilitated by the phenomenal growth of the World Wide Web.
The proliferation of Internet- and Web-based technologies in this phase has raised ethical and social concerns affecting:
free speech,
anonymity,
jurisdiction.
The Evolution of Cybertechnology and Cyberethics (Continued)
As cybertechnology evolves in Phase 4, computers will likely become more and more a part of who or what we are as human beings.
James Moor (2005) notes that computing devices will soon be a part of our clothing, and even our bodies.
Computers are already becoming ubiquitous, and are beginning to “pervade” both our work and recreational environments.
Objects in these environments already exhibit what Philip Brey (2005) calls “ambient intelligence,” which enables “smart objects” to be connected to one another via wireless technology.
The Evolution of Cybertechnology and Cyberethics (Continued)
In Phase 4, computers are becoming less visible as distinct entities, as they:
(a) continue to be miniaturized and integrated into ordinary objects, and
(b) blend unobtrusively into our surroundings.
Cybertechnology is also becoming less distinguishable from other technologies as boundaries that have previously separated them begin to blur because of convergence.
Table 1-1: Summary of Four Phases of Cyberethics

Phase 1 (1950s-1960s)
Technological features: stand-alone machines (large mainframe computers).
Associated issues: artificial intelligence (AI), database privacy (“Big Brother”).

Phase 2 (1970s-1980s)
Technological features: minicomputers and PCs interconnected via privately owned networks.
Associated issues: issues from Phase 1 plus concerns involving intellectual property and software piracy, computer crime, and privacy and the exchange of records.

Phase 3 (1990s-present)
Technological features: Internet and World Wide Web.
Associated issues: issues from Phases 1 and 2 plus concerns about free speech, anonymity, legal jurisdiction, virtual communities, etc.

Phase 4 (present to near future)
Technological features: convergence of information and communication technologies with nanotechnology research, bioinformatics research, etc.
Associated issues: issues from Phases 1-3 plus concerns about artificial electronic agents (“bots”) with decision-making capabilities, bionic chip implants, nanocomputing research, etc.
Are Any Cyberethics Issues Unique Ethical Issues?
Consider the “Washingtonienne” scenario (in the textbook) involving Jessica Cutler.
The scenario raises several interesting ethical issues – from anonymity expectations to privacy concerns to free speech, etc.
But are any ethical issues raised in this scenario, or in blogging cases in general, unique ethical issues?
Are Any Cyberethics Issues Unique (Continued)?
Review the Verizon v. RIAA scenario (described in the textbook) in light of the ethical issues that arise.
The ethical issues in this scenario include concerns about privacy, anonymity, surveillance, and intellectual property rights.
Are any of these issues new or unique ethical issues?
Are Any Cyberethics Issues Unique (Continued)?
Review the Amy Boyer cyberstalking scenario (described in the textbook).
Is there anything new or unique, from an ethical point of view, about the ethical issues that emerge in this scenario?
On the one hand, Boyer was stalked in ways that were not possible in the pre-Internet era.
But are any new or unique ethical issues generated in this scenario?
Debate about the Uniqueness of Cyberethics Issues
There are two points of view on whether cybertechnology has generated any new or unique ethical issues:
(1) Traditionalists argue that nothing is new: crime is crime, and murder is murder.
(2) Uniqueness proponents argue that cybertechnology has introduced (at least some) new and unique ethical issues that could not have existed before computers.
The Uniqueness Debate (Continued)
Both sides seem correct on some claims, and both seem to be wrong on others.
Traditionalists underestimate the role that issues of scale and scope play in the impact of computer technology. E.g., cyberstalkers can stalk multiple victims simultaneously (scale) and globally (because of the scope or reach of the Internet).
Cyberstalkers can also operate without ever having to leave the comfort of their homes.
The Uniqueness Debate (Continued)
Those who defend the Uniqueness thesis tend to overstate the effect that cybertechnology has on ethics per se.
Walter Maner (2004) correctly points out that computers are uniquely fast, uniquely malleable, etc.
So, there may indeed be some unique aspects of computer technology.
The Uniqueness Debate (Continued)
Proponents of the uniqueness thesis tend to confuse unique features of computer technology with unique ethical issues.
Their argument is based on a logical fallacy:
Premise 1. Cybertechnology has some unique technological features.
Premise 2. Cybertechnology generates some ethical issues.
Conclusion. (At least some of the) ethical issues generated by cybertechnology must be unique.
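To make the invalidity explicit, the argument can be sketched in predicate-logic notation. This is an illustrative formalization only; the predicate names (TechFeature, EthicalIssue, Unique) are our own shorthand, not the textbook’s, and the variables are understood to range over the features and issues associated with cybertechnology.

\begin{align*}
\text{P1: } & \exists x\,[\mathrm{TechFeature}(x) \wedge \mathrm{Unique}(x)] && \text{(some technological features are unique)}\\
\text{P2: } & \exists y\,\mathrm{EthicalIssue}(y) && \text{(some ethical issues are generated)}\\
\text{C: }  & \exists y\,[\mathrm{EthicalIssue}(y) \wedge \mathrm{Unique}(y)] && \text{(does not follow from P1 and P2)}
\end{align*}

Read this way, the uniqueness predicate in the premises attaches to technological features, while in the conclusion it attaches to ethical issues; nothing in the premises licenses that transfer, which is the sense in which the inference is fallacious.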
The Uniqueness Debate (Continued)
Traditionalists and uniqueness advocates are each partly correct.
Traditionalists correctly point out that no new ethical issues have been introduced by computers.
Uniqueness proponents are correct in that cybertechnology has complicated our analysis of traditional ethical issues.
The Uniqueness Debate (Continued)
So we must distinguish between any:
(a) unique technological features;
(b) (alleged) unique ethical issues.
Consider two scenarios described in the textbook:
(1) computer professionals designing the software code for a controversial computer system;
(2) users making unauthorized copies of software.
Alternative Strategy for Analyzing the Uniqueness Issue
James Moor (2000) argues that computer technology generates “new possibilities for human action” because computers are logically malleable.
Logical malleability in computers makes possible new kinds of behavior for humans that introduce policy vacuums.
Policy vacuums cannot easily be filled because of conceptual muddles.
Conceptual muddles need to be clarified before clear policies can be formulated and justified.
A Policy Vacuum in Duplicating Software
Review the scenario (in the textbook) involving a policy vacuum for laws affecting the duplication of software.
In the early 1980s, there were no clear laws regarding the duplication of software programs, which was made easy because of personal computers. Because there were no clear rules for copying programs, a policy vacuum arose.
Before the policy vacuum could be filled, a conceptual muddle had to be elucidated: What exactly is software?
Cyberethics as a Branch of Applied Ethics
Applied ethics, unlike theoretical ethics, examines “practical” ethical issues.
It analyzes moral issues from the vantage-point of one or more ethical theories.
Ethicists working in fields of applied ethics are more interested in applying ethical theories to the analysis of specific moral problems than in debating the ethical theories themselves.
Cyberethics as a Branch of Applied Ethics (continued)
Three distinct perspectives of applied ethics (as applied to cyberethics):
Professional Ethics;
Philosophical Ethics;
Sociological/Descriptive Ethics.
Perspective # 1: Cyberethics as a Branch of Professional Ethics
According to this view, the purpose of cyberethics is to identify and analyze issues of ethical responsibility for computer/information technology (IT) professionals.
Consider a computer professional’s role in designing, developing, and maintaining computer hardware and software systems.
Suppose a programmer discovers that a software product she has been working on is about to be released for sale to the public, even though it is unreliable because it contains “buggy” software.
Should she “blow the whistle”?
Professional Ethics
Don Gotterbarn (1995) has suggested that computer ethics issues are professional ethics issues.
Computer ethics, for Gotterbarn, is similar to medical ethics and legal ethics, which are tied to issues involving specific professions.
He notes that computer ethics issues aren’t about technology per se. E.g., we don’t have automobile ethics, airplane ethics, etc.
Some Criticisms of the Professional Ethics Perspective
Is Gotterbarn’s model for computer ethics too narrow for cyberethics?
Consider that cyberethics issues affect not only computer professionals; they affect virtually everyone.
Before the widespread use of the Internet, Gotterbarn’s professional-ethics model may have been adequate.
Perspective # 2: Philosophical Ethics
From this perspective, cyberethics is a field of philosophical analysis and inquiry that goes beyond professional ethics.
Moor (2000) defines computer ethics as:
…the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. [Italics Added.]
Philosophical Ethics Perspective (continued)
Moor argues that automobile and airplane technologies did not affect our social policies and norms in the same kinds of fundamental ways that computer technology has.
Automobile and airplane technologies have revolutionized transportation, resulting in our ability to travel faster and farther than was possible in previous eras.
But they did not have the same impact on our legal and moral systems as cybertechnology.
Philosophical Ethics: Standard Model of Applied Ethics
Philip Brey (2004) describes the “standard methodology” used by philosophers in applied ethics research as having three stages:
1) Identify a particular controversial practice as a moral problem.
2) Describe and analyze the problem by clarifying concepts and examining the factual data associated with that problem.
3) Apply moral theories and principles to reach a position about the particular moral issue.
Perspective #3: Cyberethics as a Field of Sociological/Descriptive Ethics
The professional and philosophical perspectives both illustrate normative inquiries into applied ethics issues.
Normative inquiries or studies are contrasted with descriptive studies.
Descriptive investigations report about “what is the case.”
Normative inquiries evaluate situations from the vantage-point of the question “What ought to be the case?”
Sociological/Descriptive Ethics Perspective (continued)
Review the scenario in the textbook regarding the impact of the introduction of a new technology on a community’s workforce. Suppose that a new technology, Technology X, displaces 8,000 workers in Community Y.
If we analyze the issues solely in terms of the number of jobs that were gained or lost in that community, our investigation is essentially descriptive in nature.
Figure 1-1: Descriptive vs. Normative Approaches

Descriptive: report or describe what is the case.
Normative: prescribe what ought to be the case. Normative claims can be:
Non-moral: prescribe or evaluate in matters involving standards such as art and sports (e.g., criteria for a good painting or an outstanding athlete).
Moral: prescribe or evaluate in matters having to do with fairness and obligation (e.g., criteria for just and unjust actions and policies).
Some Benefits of Using the Sociological/Descriptive Approach
Huff and Finholt (1994) claim that when we understand the descriptive aspect of social effects of technology, the normative ethical issues become clearer.
The descriptive perspective prepares us for our subsequent analysis of ethical issues that affect our system of policies and laws.
Table 1-2: Summary of Cyberethics Perspectives

Professional perspective
Associated disciplines: Computer Science, Engineering, Library/Information Science.
Issues examined: professional responsibility, system reliability/safety, codes of conduct.

Philosophical perspective
Associated disciplines: Philosophy, Law.
Issues examined: privacy and anonymity, intellectual property, free speech.

Sociological/Descriptive perspective
Associated disciplines: Sociology, Behavioral Sciences.
Issues examined: impact of cybertechnology on governmental, financial, and educational institutions and on socio-demographic groups.
Is Cybertechnology Neutral?
Technology seems neutral, at least initially. Consider, for example, the cliché: “Guns don’t kill people; people kill people.”
Corlann Gee Bush (2006) argues that gun technology, like all technologies, is biased in certain directions.
She points out that certain features inherent in gun technology itself cause guns to be biased in a direction towards violence in ways that other technologies are not.
Is Technology Neutral (continued)?
Bush uses an analogy from physics to illustrate the bias inherent in technology. E.g., she notes that when an atom either loses or gains electrons through the ionization process, it becomes charged or valenced in a certain direction.
Bush notes that all technologies, including guns, are valenced in that they tend to “favor” certain directions rather than others.
Thus technology is biased and is not neutral.
A “Disclosive” Method for Cyberethics
Brey (2004) believes that because of embedded biases in cybertechnology, the standard applied-ethics methodology is not adequate for identifying cyberethics issues. E.g., he notes that we might fail to notice certain features embedded in the design of cybertechnology.
Using the standard model, we might also fail to recognize that certain practices involving cybertechnology can have moral implications.
Disclosive Method (Continued)
Brey points out that one weakness of the “standard method of applied ethics” is that it tends to focus on known moral controversies.
So, that model fails to identify practices involving cybertechnology that have moral implications but are not yet known.
Brey refers to these practices as having morally opaque (or morally non-transparent) features, which he contrasts with “morally transparent” features.
Figure 1-2: Embedded Technological Features Having Moral Implications

Transparent features (known features): users are aware of these features but do not realize they have moral implications. Examples can include Web forms and search-engine tools.
Morally opaque features (unknown features): users are not even aware of the technological features that have moral implications. Examples might include data-mining technology and Internet cookies.
A Multi-Disciplinary and Multi-Level Method for Cyberethics
Brey’s disclosive method is multidisciplinary because it requires the collaboration of:
computer scientists,
philosophers,
social scientists.
A Multi-Disciplinary & Multi-Level Method
for Cyberethics (Continued)
Brey’s scheme is also multi-level because the method for conducting computer ethics research requires three levels of analysis, i.e., a: disclosure level,
theoretical level,
application level.
Table 1-3: Three Levels in Brey’s Model of Computer Ethics

Disclosive level
Disciplines involved: Computer Science, Social Science (optional).
Task/function: disclose embedded features in computer technology that have moral import.

Theoretical level
Disciplines involved: Philosophy.
Task/function: test newly disclosed features against standard ethical theories.

Application level
Disciplines involved: Computer Science, Philosophy, Social Science.
Task/function: apply standard or newly revised/formulated ethical theories to the issues.
A Three-Step Strategy for Approaching Cyberethics Issues
Step 1. Identify a practice involving cybertechnology, or a feature in that technology, that is controversial from a moral perspective.
1a. Disclose any hidden (or opaque) features or issues that have moral implications.
1b. If the ethical issue is descriptive, assess the sociological implications for relevant social institutions and socio-demographic populations.
1c. If the ethical issue is also normative, determine whether there are any specific guidelines, that is, professional codes, that can help you resolve the issue (see Appendixes A-E).
1d. If normative ethical issues remain, go to Step 2.
Step 2. Analyze the ethical issue by clarifying concepts and situating it in a context.
2a. If a policy vacuum exists, go to Step 2b; otherwise go to Step 3.
2b. Clear up any conceptual muddles involving the policy vacuum and go to Step 3.
Step 3. Deliberate on the ethical issue. The deliberation process requires two stages:
3a. Apply one or more ethical theories (see Chapter 2) to the analysis of the moral issue, and then go to Step 3b.
3b. Justify the position you reached by evaluating it against the rules for logic/critical thinking (see Chapter 3).
Strategies for Answering Discussion and Essay Questions
Chapter 1
I: Strategies for Selected Discussion and Essay Questions in Chapter 1 (see pages 30-31 in the textbook)
A. Strategies for Selected Discussion Questions in Chapter 1
4. Gotterbarn’s arguments are well constructed, and he makes a plausible case for
why computer ethics should be conceived of as a field of professional ethics. In defense of
Gotterbarn’s position, we must concede that an understanding of professional-responsibility
issues is critical to an adequate understanding and analysis of computer ethics issues. In the
period of computing preceding the World Wide Web, Gotterbarn’s argument for why the field
of computer ethics should be conceived of as an area of professional ethics (whose primary
concern was for computer professionals) was plausible. At that time, many (if not most) of the
ethical issues involving computers directly affected the computing profession. Since then,
however, two important factors have influenced the field. First, a new (and broader) wave of
computer-related ethical issues emerged when the Internet became accessible to ordinary
people. Second, most people who currently use computers, either for work or for recreational
purposes, are not computer professionals. So, it would seem that we need a much broader
conception of the field than the one proposed by Gotterbarn. However, we also saw that
Gotterbarn’s view may ultimately turn out to be correct as more and more traditional
computer-ethics issues, such as those involving privacy, property, speech, and so forth,
become folded into “ordinary ethics.”
6. If we apply the first two steps of the methodology described in Sec. 1.6 (entitled “A
Comprehensive Strategy for Approaching Cyberethics Issues”), we must first identify what
the ethical issues are in this case. Here, issues involving the personal privacy of Internet users
would seem to be apparent; so we have identified at least one ethical issue. Next, we need to
see whether there are any “policy vacuums” associated with this particular issue. There would
indeed seem to be a policy vacuum in the sense that neither ISPs nor ordinary users have a
clear understanding of a law or a policy for determining whether personal information about a
user’s online activities can be given to a non-law-enforcement organization merely because
some organization suspects one or more subscribers to an ISP of having violated copyright
laws. Consider that the RIAA is not a law enforcement agency. So, a policy vacuum
regarding cybertechnology once again emerges, and this vacuum or void needs to be filled
with a clear and explicit policy.
B. Strategies for Selected Essay Questions in Chapter 1
1. As we saw in this chapter of the textbook, a relatively recent practice that has generated
considerable controversy involved the way that the Recording Industry Association of
America (RIAA) pursued individuals it suspected of exchanging copyrighted music files on
the Internet. For example, we saw that the RIAA monitored the amount of “traffic” in
unauthorized music files that was routed through users’ computer systems. In
order for the RIAA to get the information it needed about the users who operated these
computer systems, the Recording Industry requested that ISPs provide the names of
individuals that corresponded to certain IP addresses.
Were the ISPs obligated, either legally or morally, to provide the RIAA with the actual names
of individuals, which are normally protected under the anonymity of IP addresses? On the one
hand, ISPs are required to comply with law enforcement authorities in cases where criminal
activities are suspected of taking place in their online forums. On the other hand, having P2P
software installed on one’s computer system is not in itself illegal. And even if having
unauthorized copyrighted music on one’s computer system is illegal, one could still question
whether the means used by the RIAA to track down suspected violators fall within the
parameters of what is morally (and even legally) acceptable behavior.
If we apply the first two steps of the methodology described in Sec. 1.6 (entitled “A
Comprehensive Strategy for Approaching Cyberethics Issues”), we must first identify what
the ethical issues are in this case. Here, issues involving the personal privacy of Internet users
would seem to be apparent; so we have identified at least one ethical issue. Next, we need to
see whether there are any “policy vacuums” associated with this particular issue. There would
indeed seem to be a policy vacuum in the sense that neither ISPs nor ordinary users have a
clear understanding of a law or a policy for determining whether personal information about a
user’s online activities can be given to a non-law-enforcement organization merely because
some organization suspects one or more subscribers to an ISP of having violated copyright
laws. Consider that the RIAA is not a law enforcement agency. So, a policy vacuum
regarding cybertechnology once again emerges, and this vacuum or void needs to be filled
with a clear and explicit policy.
2. Theoretically, one could envision cyberethics issues being examined from an indefinite
number of perspectives. For example, we could conceivably analyze these issues from the
vantage point of political science, economics, history, and so forth. However, the three
perspectives that we have laid out in Chapter 1 are fairly exhaustive in their scope. Consider
that the professional-ethics perspective addresses many of the concerns faced by software and
hardware engineers, computer science professionals, and information-technology and library-
science professionals in their roles as professionals. The descriptive-ethics approach, which is
empirically-based, addresses concerns that are of interest to sociologists, anthropologists, and
psychologists (and thus indirectly addresses the interests of related groups such as political
scientists, economists, historians, and so forth). And the philosophical-ethics approach
examines cyberethics issues from a perspective that is both normative (as opposed to
descriptive) and conceptual/analytical (as opposed to empirical).
We saw that the Amy Boyer case was interesting because it could be analyzed from all three
perspectives. While not all cyberethics issues are analyzable from each of the three
perspectives, many are capable of being analyzed from two vantage points. For example, the
Napster case can be examined from the perspectives of descriptive ethics (i.e., its sociological
impact) and philosophical ethics (i.e., normative questions having to do with fairness and
moral obligation).
4. Some computer scientists have suggested that because: (a) Cybertechnology is relatively
new; (b) Cybertechnology raises ethical issues; it follows that (c) Cybertechnology has raised
new ethical issues. They point out, for example, that recent technologies such as data
encryption raise certain kinds of social and ethical concerns that did not exist prior to the
introduction of that technology.
Some computer scientists and philosophers have also argued that because computer
technology has had a global impact with respect to ethical issues, a new global (or universal)
theory of ethics is needed. In other words, Western ethical theories such as utilitarianism are
no longer adequate to deal with ethical issues that are global in scope.
Other philosophers have taken a slightly different tack to show why a new ethical framework
is needed. Philosophers Hans Jonas (2006) and Luciano Floridi (1999) have argued,
independently of one another, that modern technology has introduced new “objects of moral
consideration.” For example, recent work in the field of artificial intelligence has contributed
to the creation of software robots (“softbots”); and developments at the intersection of
cybertechnology and biotechnology have made possible the creation of certain kinds of objects,
entities, or “creatures” that appear to be human-like. As a result, some philosophers argue that
we need to expand our sphere of moral consideration to include such objects and entities; and
in the process, we will also likely need a new ethical framework.
In our analysis of the question of whether any computer ethics issues are unique (in Chapter 1),
however, we saw that our existing ethical framework is fairly robust. We also saw that if we make
certain adjustments to the standard, three-step method used in applied ethics, then we can
avoid having to construct a new ethical framework. For example, if we include the
recommendations suggested by James Moor and Philip Brey, we can proceed with our
existing set of ethical theories. Following Brey, we would first need to identify and disclose
any embedded values in technology (i.e., either transparent or non-transparent moral values);
and following Moor, we need to identify and resolve any policy vacuums and conceptual
muddles that arise, before we can apply the traditional ethical theories.