In my Systems Class we’re currently discussing security in relation to system design. When we build reliable systems, we build them in the face of “more-or-less random”, “more-or-less independent” failures and sometimes-unpredictable targeted attacks from adversaries. Adversaries can do many things:
- phishing attacks
- worms and viruses
- stealing personal information
Computer security is different from general security mostly because of the Internet. The rise of the Internet has brought new challenges for security. The Internet is cheap, fast, and widely available (relatively speaking), which makes for fast, cheap, scalable attacks on our systems. The number of adversaries on the Internet is also huge: almost anyone can be an adversary. The fact that on the Internet you can't tell a dog apart from a person doesn't help either: the anonymity of adversaries gives them more leeway to challenge and attack computer systems. Attacks on computers can also be automated. Another difference in computer security is the potential scale of an adversary's resources (e.g., botnets). Finally, users generally have poor intuition about protecting themselves, which makes them easy targets of phishing and other attacks that, in the end, put an entire system in danger.
Aside from the difficulties mentioned above (as if they weren't enough), it's simply hard to think of every possible attack scenario or threat facing a computer system. Achieving that is considered a "negative goal." A negative goal says "x cannot do y," in contrast to a positive goal, which says "x can do y." In the positive case you can easily check whether the goal is met; not so in the negative case.
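A toy sketch of why the two kinds of goals differ in testability (the access function and names here are made up for illustration, not from the lecture):

```python
# Illustrative access-check table; not a real system.
ALLOWED = {("alice", "grades.txt")}

def can_read(user, resource):
    return (user, resource) in ALLOWED

# Positive goal "alice can read grades.txt": one concrete, checkable test.
assert can_read("alice", "grades.txt")

# Negative goal "nobody else can read grades.txt": we can only sample a
# few candidate users; we cannot enumerate every possible adversary.
for user in ["bob", "mallory"]:
    assert not can_read(user, "grades.txt")
```

The positive goal is verified by a single concrete check; the negative goal would require quantifying over every possible principal and access path, which is exactly what makes it hard.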
Another difficulty in securing a system is that even one small failure due to an attack can be enough to compromise it. Worse, observing a failure often tells you little about the nature of the attack. As a result, a complete security solution does not exist. What we do instead is model systems in the context of security and assess common risks and attacks.
To create a security model we basically need two things: the goals (or policy) and the assumptions (or threat model). The goals may include privacy (limits on who can read data), integrity (limits on who can write data), and availability (ensuring that the service keeps operating). The assumptions, or threat model, are plausible statements about what we're protecting against: for example, an adversary with limited computing power versus one with unlimited computing power. Systems get compromised when the threat model is incomplete or unrealistic (like assuming attacks come only from outsiders; in practice they can come from insiders too).
We now consider an example of a security model called the guard model. Think back to client/server models. In the client/server model, the client makes a request to access some resource on the server. We would like to secure the resource being stored on the server. To do this, the server needs to check all accesses to the resource (this is called complete mediation). The server therefore puts a guard in place to mediate every request for access.
The guard provides:
- authentication: verifies the identity of the principal, for example by checking the client's username and password
- authorization: verifies whether the principal is allowed to perform its request on the resource, for example by consulting an access control list for that resource.
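The two checks can be sketched as a single mediation function (the password table, ACL, and names below are invented for illustration, not from 6.033):

```python
# Hypothetical guard: every request passes through one function that first
# authenticates the principal, then authorizes the request against an ACL.

PASSWORDS = {"alice": "correct-horse"}  # toy authentication database
ACL = {"wiki/home": {"read": {"alice"}, "write": {"alice"}}}  # per-resource ACL

def guard(username, password, op, resource):
    # authentication: verify the principal's identity
    if PASSWORDS.get(username) != password:
        return "denied: bad credentials"
    # authorization: consult the resource's access control list
    if username not in ACL.get(resource, {}).get(op, set()):
        return "denied: not authorized"
    return f"ok: {username} may {op} {resource}"

print(guard("alice", "correct-horse", "read", "wiki/home"))  # access granted
print(guard("alice", "wrong-password", "read", "wiki/home"))  # rejected at auth
```

The key design point is that there is exactly one code path to the resource, so complete mediation holds as long as nothing bypasses `guard`.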
The guard model applies to lots of places, not just client/server.
Examples (copyright to lecture notes from 6.033):
- UNIX file system:
- client: a process
- server: OS kernel
- resource: file, directories
- client’s requests: read(), write() system calls
- mediation: U/K bit and the system call implementation
- principal: user ID
- authentication: kernel keeps track of a user ID for each process
- authorization: permission bits & owner UID in each file’s inode
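The UNIX authorization step can be sketched as a comparison of the process's user ID against the owner UID and permission bits stored in the inode. This is a simplified model (owner/other bits only; real inodes also have group bits and more):

```python
# Simplified UNIX-style permission check: owner and "other" bits only.
OWNER_READ, OWNER_WRITE = 0o400, 0o200
OTHER_READ, OTHER_WRITE = 0o004, 0o002

def may_access(proc_uid, inode_owner_uid, mode, want_write):
    # pick the relevant bit depending on whether the principal owns the file
    if proc_uid == inode_owner_uid:
        bit = OWNER_WRITE if want_write else OWNER_READ
    else:
        bit = OTHER_WRITE if want_write else OTHER_READ
    return bool(mode & bit)

# file owned by uid 1000 with mode 0o644 (rw-r--r--)
assert may_access(1000, 1000, 0o644, want_write=True)       # owner may write
assert not may_access(1001, 1000, 0o644, want_write=True)   # others may not
assert may_access(1001, 1000, 0o644, want_write=False)      # others may read
```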
- Web server running on UNIX:
- client: HTTP-speaking computer
- server: web application
- resource: wiki pages
- requests: read/write wiki pages
- mediation: server stores data on local disk, accepts only HTTP requests
- principal: username
- authorization: list of usernames that can read/write each wiki
- Firewall = a system that acts as a barrier between a presumably secure, internal network and the outside world. It keeps untrusted computers from accessing the network.
- client: any computer sending packets
- server: the entire internal network
- resource: internal servers
- requests: packets
- mediation: internal network must not be connected to the Internet in other ways
- no open wifi access point on internal network for adversary to use
- no internal computers that might be under control of adversary
- principal: none
- authentication: none
- authorization: check for IP address & port in table of allowed connections
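A firewall's authorization step can be sketched as a plain table lookup, with no principal and no authentication involved (the addresses and ports below are illustrative):

```python
# Hypothetical firewall sketch: authorization is just membership of
# (destination IP, destination port) in a table of allowed connections.
ALLOWED_CONNECTIONS = {("10.0.0.5", 80), ("10.0.0.5", 443)}  # toy rule table

def filter_packet(dst_ip, dst_port):
    # complete mediation: every packet entering the network passes this check
    return (dst_ip, dst_port) in ALLOWED_CONNECTIONS

assert filter_packet("10.0.0.5", 443)       # allowed: web server
assert not filter_packet("10.0.0.9", 22)    # dropped: no matching rule
```

Note how this example degenerates the guard model: with no way to authenticate who sent a packet, the "policy" can only be about packet attributes, which is why the assumptions (no other paths into the network) matter so much.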
What can possibly go wrong?
- Complete mediation can be bypassed, whether through software bugs or an adversary who finds a path around the guard
- how to prevent this? reduce complexity: shrink the area the guard has to cover
- The principle of least privilege also helps: limit the number of privileged or trusted components
- Policy vs. mechanism: a high-level policy should be clear and concise; security mechanisms (for example, guards) provide the lower-level guarantees that implement it
- Users make mistakes!!!
- Users may be unwilling to pay cost of security mechanism.