Computer Ethics

From WebScience


Fact Box
Module: Foundations and Principles II
Lecturer: Christian Kohls
Credits: 3
Term: Term 1
Required: No
Current course page: Summer 2017
Active: Yes

The Big Idea

In the context of designing web services, there can be many decisions that will impact the experience of the user and the societal consequences of the service. For example, social networking sites have not only raised concerns in terms of potentially sensitive information that people put online, but they have also significantly changed the way in which people interact. Typically, one wants a web service to achieve desirable effects, and one wants to prevent unwanted effects. However, such effects are often uncertain. To make the right design decisions, such effects should therefore be carefully analysed in the design process itself. Ethical theories, being concerned with questions on how we should live and act, provide a framework for thinking about such decisions, and as such they are an essential ingredient for the design of web services. This course will introduce such theories, as well as their application to web service design.

In the context of the module Foundations and Principles II, this can be seen as a complementary perspective to economic concerns, but one may also interpret ethics and economics as being inherently integrated (cf. CO2 emissions trading, where economic incentives are provided for “ethical” behaviour). In this course, you will get acquainted with different approaches to ethical design, the specific features of (digital) information and services, and your responsibility as a web professional.

Intended Learning Outcomes

After this course, you will be able to:

  1. Identify how a web service can impact users and society;
  2. Evaluate such effects in terms of desirability;
  3. Judge whether your contributions to specific developments are in line with professional ethical standards, as well as your own norms.

Structure of the Course

The course will address the following topics:

  1. ethics of technology;
  2. specific features of information and information ethics;
  3. professional responsibility.

Ethics of technology

The study of ethics is concerned with how we should act, or how we should live our lives. Determining what we should do in a particular case can be done from various perspectives: one can investigate the applicable rights and duties, one can assess the consequences of different options, or one may assess which virtues one needs to develop to do the right thing. Thus, one ethicist may say that you shall not lie because a society in which everyone would agree to lie would be impossible, another may say that you shall not lie because it has bad consequences (and only if it has bad consequences), and yet another may say that one needs to develop virtues that allow one to make suitable decisions on when to tell the truth, taking into account the intricacies of the situation. The first thing to do when studying ethics is familiarising yourself with these basics of the field. For example, an introductory overview of ethical theories is a good starting point.

As we are speaking about information technology in this course, we are particularly interested in the ethics of technology. In technology, many of the consequences of actions are indirect, in the sense that a designer makes a decision that may impact users or society much later. Besides, responsibility for such actions is often distributed: there is often not one person who clearly made a mistake, but rather a chain of events that eventually led to an undesirable outcome due to failure or misuse of the technology (see, for example, the inquiry into the sinking of the Herald of Free Enterprise). In such cases, design decisions may lead to unforeseen consequences, especially when multiple design decisions and procedures interact. Alternatively, certain effects may be designed into the technology intentionally, such as when a car won’t start unless the seatbelts are fastened.

Undesirable effects of technology may occur in various areas. There may be effects on nature and health (oil pollution in Nigeria, asbestos-related lung cancer), but also on society and human behaviour. The first question to ask is what is valuable, what is worthy of protection, and what can indeed be damaged or improved by particular design decisions. Such values may be either judged to be intrinsic, of value in itself, or instrumental, of benefit to other entities, particularly humans. For example, we may wish to protect rainforests because they are worthy of protection as ecosystems, or because they have beneficial effects on climate or may provide future medicines.

In case of information technology, environmental and health effects do indeed occur (power consumption, repetitive strain injuries), but its applications are particularly noteworthy for changing society and human behaviour. Therefore, this course will particularly focus on such effects.

Many theories have been proposed in the ethics of technology to discuss the impact of technological design on our experience, our behaviour and our lives. Such effects may be more complicated than they appear at first sight. For example, while the vacuum cleaner could be thought to save time, its introduction also strengthened hygiene norms, at least partially reversing this effect. To prove such effects one needs to study them empirically, but here we are concerned with early identification of potential effects in the design phase. Only when potential effects have been identified can their actual occurrence be studied by the empirical sciences.

To familiarise yourself with the theory on ethics of technology, please read the following articles:

In the lecture, we will discuss how such theories can be used to identify potential effects in the design phase.

Specific features of information

Knowledge is power. Whereas many technological effects are discussed in terms of climate change, biodiversity, and health, information technologies change the distribution of information, and thereby influence trust and power relations in society. For example, when the processing of election results is outsourced to a private company, the citizens now have to trust the company to calculate the result correctly, and, if implemented badly, the company will have full power over the result of the election. Or, with a badly designed website certification service, fake sites may be set up to spy on citizens (search the news for DigiNotar if you want to learn more).

In this part of the course, we will ask the question what information is, and how it is related to society. Specific norms and values for information will be discussed, such as privacy and non-discrimination.

When we use technology, we have to rely upon certain properties of the devices. Properties that are often discussed include reliability and safety. But these are not merely properties of devices; humans play an important role in the effects of technology as well, for example by following or not following safety procedures, by using devices in different ways than intended, and by using devices for criminal, illegal, or otherwise morally problematic purposes. When intentional human action contributes to undesirable effects, we speak of security rather than safety. Examples include using airplanes for terrorist attacks, using computer networks to sabotage power or water supply, or using a database of phone numbers for sending unsolicited SMS messages. As information does not mean anything without someone to interpret it and use it for a particular purpose, security issues are especially prominent in information technologies.

Three properties of information and information systems are particularly relevant from a security point of view: confidentiality, integrity and availability, often abbreviated CIA. To prevent undesirable effects, one may want certain information (such as physical address) to be kept confidential, one may want certain information (such as salary databases) not to be manipulated, and one may want to prevent certain information (such as criminal records) from being deleted, or to prevent certain information systems (such as those that control power supply) from being disrupted by targeted attacks. The technical means for achieving these goals will be dealt with in the Web Trust & Security module. Here, we are interested in the reasons for protecting information.
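The three CIA properties can be made concrete with a small, hypothetical sketch (all class and field names here are illustrative, not part of the course material): a record that restricts who may read it (confidentiality), detects tampering via a checksum (integrity), and reports whether any replica is reachable (availability).

```python
import hashlib

class Record:
    """Hypothetical sketch of a record with CIA-style checks."""

    def __init__(self, content, allowed_readers, replicas=1):
        self.content = content
        self.allowed_readers = set(allowed_readers)  # confidentiality
        self.checksum = self._digest(content)        # integrity baseline
        self.replicas = replicas                     # availability

    @staticmethod
    def _digest(content):
        return hashlib.sha256(content.encode()).hexdigest()

    def read(self, user):
        """Confidentiality: only authorised users may read the content."""
        if user not in self.allowed_readers:
            raise PermissionError(f"{user} may not read this record")
        return self.content

    def verify(self):
        """Integrity: compare current content against the stored checksum."""
        return self._digest(self.content) == self.checksum

    def is_available(self):
        """Availability: at least one replica must be reachable."""
        return self.replicas > 0

salary = Record("Alice: 50000", allowed_readers={"hr"}, replicas=2)
assert salary.verify() and salary.is_available()
salary.content = "Alice: 99999"   # simulated manipulation
assert not salary.verify()        # the integrity check now fails
```

Real systems enforce these properties with cryptography, access control, and redundancy rather than in-object checks, but the sketch shows why the three goals are distinct: each check can fail independently of the others.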

Inevitably, certain actors will have the power to publish, alter, or destroy information. Other actors that rely on this information then have to trust the actor to act in the expected way. In this sense, information systems, by redistributing information, also redistribute power and trust in society. Such consequences can be (1) systematically identified, and (2) evaluated in terms of desirability.

Reading material:

In the lecture, we will discuss how to evaluate effects of web services based on these approaches.

Professional responsibility

Given that you can identify potential social effects of web technology, and evaluate these effects in terms of desirability, what is your role as a web scientist in using this knowledge? In other words, what is your professional responsibility for “good” system design?

Inevitably, your role as a professional will mean that you are expected to create designs that are not “flawed”, in the sense that they malfunction or cause hazards. You are expected to have the system properly tested. But to what extent are you also responsible for the way in which your system can be used? You can expect to be held responsible if your design includes a security bug that allows malicious hackers to get access to the system, but what if you create a file-sharing system that is used by others to transfer copyrighted content? In an offline example: are gun users or gun producers responsible for the possibility of murder? Are tobacco manufacturers or tobacco smokers responsible for the high incidence of lung cancer? As is often the case in ethics, both points of view can be defended, and many will say that the truth lies somewhere in the middle. The central question in this topic is how responsibility for undesirable effects is distributed over users and producers, in particular in the context of web services.

Reading material:

In the lecture, we will discuss the different possible opinions on the distribution of responsibility, as well as the desirability of embedding values in web service design.

Suggested further reading

(not compulsory)

  • Floridi, L. (2010). Information: A very short introduction. Oxford: Oxford University Press.
  • Verbeek P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. University Park, PA: Pennsylvania State University Press.

Didactic Concept, Schedule and Assignments

In order to be able to reason about ethical consequences of web service design, knowledge is needed about ethical theories, as well as the skills to reason based on this knowledge in a sound way. The acquisition of this knowledge and these skills will be achieved via literature, online interactive lectures and assignments. Before each online lecture, students are expected to read the required texts, and have their answers to the assignments ready for the online lectures. The assignments will then be discussed in the lecture in relation to the theory. Lectures are not intended to summarise the reading material, so without having read the material before the lecture, you will probably not be able to understand the discussions.

You are expected to participate actively in the lectures. If you choose not to participate in the assignments and discussion, you can still take the exam, but you will then have to answer an essay question instead. Experience shows that without prior experience or practice with ethical reasoning, such a task can be expected to be difficult.

Please be aware that, apart from sound application of the theory and sound use of the concepts provided, there is often no right or wrong answer to an ethical question. The essential requirement is that you back up your ideas with proper arguments, and that you put sufficient effort into identifying potential desirable and undesirable effects of technological designs. If you think, for example, that censorship on the Internet is a good thing, do not hesitate to write it down, but make sure you use the tools offered in this course to justify your viewpoint. Your grade will never be lowered merely because you disagree with the lecturer!


Assignment #2 has to be submitted by e-mail 1 week before the lecture (see schedule). It should be 1 to 2 A4 pages (500-1000 words, excluding references). If you exceed this length, the remaining part of the text may not be considered in the grading.

Grading criteria for assignments:

  • Adequacy of selection and use of course material / theories
  • Adequacy of selection of examples and link to theory
  • Quality of the identification of ethical problems
  • Quality of argumentation and analysis
  • Quality of the text as a whole

Each text will be discussed in an online Writers' Workshop. You can further improve the text after the online discussion.

Assignments topic 1:

  1. Read the literature associated with topic 1 (see Structure of the Course - Ethics of Technology).
  2. Identify the most important differences between the two approaches (technological mediation and choice architecture). Illustrate this by applying the two approaches to a product of your choice (e.g. cell phone, car, vacuum cleaner, …) (Note: simply summarizing and illustrating the approaches is not a sufficient answer to this question!)
  3. In these approaches, what would be different for web services (e.g. Facebook, YouTube, Twitter, weather radar, …) compared to products? You may want to use the differences between products and services as discussed on pages 2-3 of the assigned text (section 2; you don't need to read the whole text). (Note: you need to link your answer to the two approaches!)

Assignments topic 2:

  1. Read the literature associated with topic 2 (see Structure of the Course - Specific features of information).
  2. How can the privacy problem, as described by Floridi, be understood as a power issue? Which types of power as distinguished by Brey are relevant here, and how?
  3. How, in your opinion, does trust play a role here? Who would you need to trust in order to be convinced that a web service protects your privacy? How is this related to power?
  4. How could you use the technological mediation and/or choice architecture approach to improve the ethical impact on power relations in an example (e.g. a social networking service)?

Assignments topic 3:

  1. Read the literature associated with topic 3 (see Structure of the Course - Professional responsibility).
  2. Choose a case from below, or propose one yourself (discuss with the lecturer):
    • Internet voting system
    • reputation management on eBay
    • deep packet inspection of Internet traffic
    • artificial intelligence, autonomous web agents
    • a peer-to-peer file sharing system (such as Napster)
    • a social networking service (such as Facebook)
  3. Assume you are participating in developing the chosen web service. What would be (I) the responsibilities you have according to professional responsibility, and (II) additional responsibilities you can identify based on what you have learnt in this course, as well as your own moral standards?
  4. Apply the steps of value-sensitive design (section 6) to your case. Which additional issues can you identify? How useful is value-sensitive design as a step-wise approach to deal with your responsibilities?


The examination will consist of a written exam in the final weekend, which is part of the module exam. The computer ethics part will consist of short questions and an essay question. You will have the option to have the results of your written text of assignment #2 included in the grading procedure. In that case, you do not need to answer the essay question at the exam.

The short questions of the exam will determine 50% of the grade. The other 50% will be determined either by assignment #2 or by the essay question of the exam (the better text counts).
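The weighting above can be written out as a small computation (a sketch; the function name and the 0-100 grade scale are assumptions, not part of the course description):

```python
def final_grade(short_questions, assignment2=None, essay=None):
    """50% short questions, 50% the better of assignment #2
    and the exam essay question (0-100 scale assumed)."""
    candidates = [g for g in (assignment2, essay) if g is not None]
    if not candidates:
        raise ValueError("need assignment #2 and/or an essay grade")
    return 0.5 * short_questions + 0.5 * max(candidates)

# "The better text counts": the essay grade of 60 is ignored here.
print(final_grade(80, assignment2=70, essay=60))
```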

Past Course Pages