Morality, technology and human nature

“(…) scientific and technical work routinely implicates politics. (…) Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies.”

Isn’t it fascinating that even when we think we’ve escaped things like “politics” and “power struggles”, we haven’t really? The reason I liked science for so long, the reason I wanted to bury my head in it, was so I didn’t have to deal with the very imperfect human world that is shaped and pushed back and forth by human vice: pride, greed, envy, even pure destructive desires. Imagine my surprise when I discovered that, heck, these bad things are everywhere. Even in the idealist and vice-fighter myself!

Not only are these found in all humans, they can also permeate everything we do, be it science, technology or philosophy. That was a sad realization for me, really.

From my earliest days I had a passion for science. But science, the exercise of the supreme power of the human intellect, was always linked in my mind with benefit to people. I saw science as being in harmony with humanity. I did not imagine that the second half of my life would be spent on efforts to avert a mortal danger to humanity created by science. (Rotblat, Nobel Peace Prize speech)

As I conclude this argument, I want to get back to the first quote about “strong, built-in tendencies”. It is these tendencies of ours that we transmit to our inventions, our ideologies, our thoughts, our actions. Even our science and technology. It convinces me more and more: as broken people, we have a great effect on the things we do.

It also convinces me, though this might be somewhat of a leap, about the nature of science and technological advances: a nature that is not objective, but highly subjective, sometimes with dubious intentions behind it.

Anyways, the main reason I started talking about this at all is a paper I had to read. Funny story about my encounter with this paper: I saved it in my to-read list during IAP/winter holiday (it was sent out to my school’s CS lab mailing list). As life got busy I did not manage to read it. Then it turned out that both of the classes I’m taking this semester require me to read it. Of course, it was a win-win moment for me 😀

The paper I’m quoting is this fascinating one from Phillip Rogaway: The Moral Character of Cryptographic Work. You can find the link for it here.

More about the paper: it has some great advice on how a cryptographer should view his or her work. Less of being only interested in the technical work, and more awareness of the ethics and the effects your work has. Which is a great lesson for all of us.

 

 

Security in my Computer Systems Engineering Class

Part I


In my Systems class we’re currently discussing security in relation to system design. When we build reliable systems, we build them in the face of “more-or-less random”, “more-or-less independent” failures; security adds sometimes-unpredictable, targeted attacks from adversaries. Adversaries can do many things:

  • phishing attacks
  • botnets
  • worms, viruses
  • stealing personal information


Computer security is different from general security mostly because of the Internet. The rise of the Internet has brought new challenges for security with it. The Internet is cheap, fast and widely available (relatively speaking), which makes for fast, cheap, scalable attacks on our systems. The number of adversaries on the Internet is also huge: almost anyone can be an adversary. The fact that on the Internet you can’t tell a dog apart from a person doesn’t help either: the anonymity of adversaries gives them more leeway to challenge and attack computer systems. Attacks on computers can also be automated. Another difference in computer security is the scale of resources an adversary can command (botnets). Finally, users generally have poor intuition about protecting themselves, which makes them easy targets for phishing and other attacks that in the end put an entire system in danger.

Aside from the difficulties mentioned above (as if they weren’t enough), it’s just plain difficult to think of every possible attack scenario or threat facing a computer. Security is what’s called a “negative goal”. A negative goal says, for example, “X cannot do Y”, in contrast to a positive goal, which says “X can do Y”. In the positive case you can easily check whether the goal is met; not so in the negative case, since you would have to rule out every possible way X might manage to do Y.

Another problem when it comes to securing your system is, well, the fact that even one small failure due to an attack can be enough to compromise the system. And at times, even knowing about a failure does not say much about the nature of the attack. As a result, a complete security solution does not exist. What we do instead is model systems in the context of security and assess common risks and attacks.

To create a security model we basically need two things: the goals (or policy) and the assumptions (or threat model). The goals may include privacy (limits on who can read data), integrity (limits on who can write data) and availability (ensuring that the service keeps operating). The assumptions, or threat model, spell out plausibly what we’re protecting against: for example, whether the adversary has limited or unlimited computing power. Systems get compromised when their threat model is incomplete or unrealistic (like assuming the attack comes from an outsider only; in reality it can come from an insider too).
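
To make that concrete, here’s a minimal sketch of what writing down a policy and a threat model might look like for a small, hypothetical wiki service. The service and every entry below are made up purely for illustration:

```python
# Hypothetical security model for a small wiki service, written out
# explicitly so the goals and assumptions can be reviewed and challenged.
SECURITY_MODEL = {
    "policy": {
        "privacy":      "only registered users may read pages",
        "integrity":    "only a page's owner may edit it",
        "availability": "the wiki stays reachable under ordinary load",
    },
    "threat_model": {
        "computing_power": "adversary cannot brute-force our cryptography",
        "network":         "adversary can observe and inject packets",
        "insiders":        "a registered user may also be an adversary",
    },
}
```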

 

Part II

We now consider an example of a security model called the guard model. Think back to the client/server model: the client makes a request to access some resource stored on the server. There is reason to worry about the security of the server, and we would like to secure the resource it stores. To attempt this, the server needs to check every access to the resource (this is called complete mediation). The server therefore puts a guard in place to mediate every request for access; a small sketch of such a guard follows the list below.

The guard provides:

  • authentication: verifies the identity of the principal, for example checks the client’s username and password
  • authorization: verifies whether the principal is allowed to perform its request on the resource, for example by consulting an access control list for that resource.
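
Here is a minimal sketch of such a guard in code, under invented assumptions: the USERS table and the ACL below are made-up stand-ins for whatever a real server would consult, and passwords are kept in plaintext only to keep the example short.

```python
# Minimal sketch of a guard for a client/server setup. USERS and ACL are
# made-up stand-ins for whatever the real server would consult; passwords
# are plaintext only to keep the example short.
USERS = {"alice": "correct horse battery staple"}             # username -> password
ACL = {"wiki/home": {"read": {"alice"}, "write": {"alice"}}}  # resource -> op -> principals

def guard(username, password, operation, resource):
    # Authentication: verify the identity of the principal.
    if USERS.get(username) != password:
        raise PermissionError("authentication failed")
    # Authorization: may this principal perform this operation on this resource?
    allowed = ACL.get(resource, {}).get(operation, set())
    if username not in allowed:
        raise PermissionError("authorization failed")
    # Complete mediation: every request for the resource must pass through here.
    return True
```

A call like `guard("alice", "correct horse battery staple", "read", "wiki/home")` succeeds, while any other combination raises an error; the important design point is that every request is forced through this one chokepoint.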

The guard model applies to lots of places, not just client/server.


Examples (credit to the lecture notes from 6.033):

  1. UNIX file system:
    1. client: a process
    2. server: OS kernel
    3. resource: file, directories
    4. client’s requests: read(), write() system calls
    5. mediation: U/K bit and the system call implementation
    6. principal: user ID
    7. authentication: kernel keeps track of a user ID for each process
    8. authorization: permission bits & owner UID in each file’s inode (see the sketch after these examples)
  2. Web server running on UNIX:
    1. client: HTTP-speaking computer
    2. server: web application
    3. resource: wiki pages (?)
    4. requests: read/write wiki pages
    5. mediation: server stores data on local disk, accepts only HTTP requests
    6. principal: username
    7. authorization: list of usernames that can read/write each wiki
  3. Firewall = a system that acts as a barrier between a presumably secure, internal network and the outside world. It keeps untrusted computers from accessing the network.
    1. client: any computer sending packets
    2. server: the entire internal network
    3. resource: internal servers
    4. requests: packets
    5. mediation:
      1. internal network must not be connected to Internet in other ways
      2. no open wifi access point on internal network for adversary to use
      3. no internal computers that might be under control of adversary
    6. principal: none
    7. authentication: none
    8. authorization: check for IP address & port in table of allowed connections
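
To tie the UNIX example back to code, here is a very simplified sketch (mine, not from the lecture notes) of the authorization decision the kernel makes from the permission bits and owner UID stored in a file’s inode; the real check also handles root, supplementary groups and more:

```python
import os
import stat

def may_read(path, uid, gid):
    """Simplified sketch of UNIX read authorization, using the owner UID
    and permission bits in the file's inode. The real kernel check also
    handles root, supplementary groups, ACLs, etc."""
    st = os.stat(path)
    if uid == st.st_uid:                       # owner class
        return bool(st.st_mode & stat.S_IRUSR)
    if gid == st.st_gid:                       # group class
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)     # everyone else
```

For instance, on a typical Linux box `may_read("/etc/shadow", uid=1000, gid=1000)` comes out False, since that file is owned by root and its “other” read bit is off.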

What can possibly go wrong?

  1. Complete mediation can be bypassed, due to software bugs or an adversary who finds a way around the guard
    1. how to prevent this? we can reduce complexity (the area the guard has to cover)
    2. the principle of least privilege also helps, by limiting the privileged or trusted components (see the sketch after this list)
  2. Policy vs. mechanism: the high-level policy is ideally clear and concise. Security mechanisms (like, for example, guards) provide lower-level guarantees, which may not add up to exactly that policy. :/
  3. Users make mistakes!!!
  4. Users may be unwilling to pay the cost of a security mechanism.
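
On the least-privilege point above, here is a small sketch of the classic pattern on UNIX: do the one step that actually needs root, then drop to an unprivileged user so that a later bug runs with as little authority as possible. The `nobody` account and the port are just placeholders, and the function has to be started as root to work at all.

```python
import os
import pwd
import socket

UNPRIVILEGED_USER = "nobody"   # placeholder low-privilege account

def listen_then_drop_privileges(port=80):
    # Binding a port below 1024 is the only step that needs root.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", port))
    sock.listen(16)

    # Least privilege: give up root before touching any untrusted input,
    # so a bug in request handling exposes as little authority as possible.
    user = pwd.getpwnam(UNPRIVILEGED_USER)
    os.setgroups([])           # drop supplementary groups
    os.setgid(user.pw_gid)     # drop group first...
    os.setuid(user.pw_uid)     # ...then user (order matters)

    return sock
```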

 
