• Due date: 28th October 2022, 23:59
  • Mark weighting: 25%
  • Submission: Submit your assignment through GitLab
  • Policies: For late policies, plagiarism policies, etc., see the policies page


  • This is an individual effort assignment.
  • We provide an automatic extension of four days (without request). No further extensions are possible.


A Web proxy is a program that acts as a middleman between a Web browser and a server. Instead of contacting the end server directly to get a Web page, the browser contacts the proxy, which forwards the request to the end server. When the end server replies to the proxy, the proxy sends the reply on to the browser.

Proxies are useful for many purposes. Sometimes proxies are used in firewalls, so that browsers behind a firewall can only contact a server beyond the firewall via the proxy. Proxies can also act as anonymizers: by stripping requests of all identifying information, a proxy can make the browser anonymous to Web servers. Proxies can even be used to cache web objects by storing local copies of objects from servers then responding to future requests by reading them out of its cache rather than by communicating again with remote servers.

In this assignment, you will write a simple HTTP proxy that caches web objects. For the first part of the assignment, you will set up the proxy to accept incoming connections, read and parse requests, forward requests to web servers, read the servers’ responses, and forward those responses to the corresponding clients. This first part will involve learning about basic HTTP operation and how to use sockets to write programs that communicate over network connections. In the second part, you will upgrade your proxy to deal with multiple concurrent connections. This will introduce you to dealing with concurrency, a crucial systems concept. In the third and last part, you will add caching to your proxy using a simple main memory cache of recently accessed web content.


You will implement a proxy for handling HTTP/1.0 GET requests. Your proxy should be able to handle multiple concurrent clients. Finally, your proxy should store recently-used Web objects in memory.

We leave it to you to plan your solution. For clarity, below is a proposed workflow:

  • Implement a sequential proxy reusing code from prior labs (sequential proxy)
  • Enable the proxy to handle multiple clients simultaneously (concurrent proxy)
  • Implement and add a cache to the concurrency proxy (concurrent proxy + caching)
  • Fix and optimize all the concurrency/locking issues, e.g., concurrent cache accesses

Strict Requirements#

We strictly enforce the following requirements:

  • Implement a prethreaded concurrent proxy
  • Use Pthread read/write locks for synchronization

Proxy Interface#

Our driver programs will start your proxy as follows.

    ./proxy port# replacement_policy

The first argument is the port number on which this proxy listens for incoming connections. The second argument is a replacement policy: either LRU (least-recently-used) or LFU (least-frequently-used).


We will grade your assignment based on the proxy and cache code in the following files:

  • cache.h and cache.c
  • proxy.c

We provide a Makefile that should “just work”. If required, you can modify the Makefile; however, we will not fix your Makefile for compilation issues.

We expect the delivered code to be well-documented.

Starting the Assignment#

This is an individual assignment. We advise you to finish the networking and concurrency labs before attempting this assignment.

The first step is implementing a basic sequential proxy that handles HTTP/1.0 GET requests. Other request types, such as POST, are strictly optional.

When started, your proxy should listen for incoming connections on a port whose number will be specified on the command line. Once a connection is established, your proxy should read the entirety of the request from the client and parse the request. It should determine whether the client has sent a valid HTTP request; if so, it can then establish its own connection to the appropriate web server then request the object the client specified. Finally, your proxy should read the server’s response and forward it to the client.

HTTP/1.0 GET requests#

When an end user enters a URL such as http://www.example.com/index.html into the address bar of a web browser, the browser will send an HTTP request to the proxy that begins with a line that might resemble the following:

    GET http://www.example.com/index.html HTTP/1.1

In that case, the proxy should parse the request into at least the following fields: the hostname, www.example.com, and the path together with everything that follows it, /index.html. That way, the proxy can determine that it should open a connection to www.example.com and send an HTTP request of its own starting with a line of the following form:

    GET /index.html HTTP/1.0

Note that all lines in an HTTP request end with a carriage return, followed by a newline. Also important is that every HTTP request is terminated by an empty line.

You should notice in the above example that the web browser’s request line ends with HTTP/1.1, while the proxy’s request line ends with HTTP/1.0. Modern web browsers will generate HTTP/1.1 requests, but your proxy should handle them and forward them as HTTP/1.0 requests.

It is important to consider that HTTP requests, even just the subset of HTTP/1.0 GET requests, can be incredibly complicated. The lectures describe certain details of HTTP GET transactions. You should limit your implementation to those details only.

If interested, you can refer to RFC 1945 for the complete HTTP/1.0 specification.

Request headers#

The important request headers for this assignment are the Host, User-Agent, Connection, and Proxy-Connection headers:

  • Always send a Host header. While this behavior is technically not sanctioned by the HTTP/1.0 specification, it is necessary to coax sensible responses out of certain Web servers, especially those that use virtual hosting.

The Host header describes the hostname of the end server. For example, to access http://www.example.com/index.html, your proxy would send the following header:

    Host: www.example.com

It is possible that web browsers will attach their own Host headers to their HTTP requests. If that is the case, your proxy should use the same Host header as the browser.

  • You may choose to always send the following User-Agent header:
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36

Your proxy should send the header as a single line.

The User-Agent header identifies the client (in terms of parameters such as the operating system and browser), and web servers often use the identifying information to manipulate the content they serve. Sending this particular User-Agent: string may improve, in content and diversity, the material that you get back during simple telnet-style testing.

  • Always send the following Connection header:
    Connection: close
  • Always send the following Proxy-Connection header:
    Proxy-Connection: close

The Connection and Proxy-Connection headers are used to specify whether a connection will be kept alive after the first request/response exchange is completed. It is perfectly acceptable (and suggested) to have your proxy open a new connection for each request. Specifying close as the value of these headers alerts web servers that your proxy intends to close connections after the first request/response exchange.

For your convenience, the value of the described User-Agent header is provided to you as a string constant in proxy.c.

Finally, if a browser sends any additional request headers as part of an HTTP request, your proxy should forward them unchanged.

Port Numbers#

There are two significant classes of port numbers for this assignment: HTTP request ports and your proxy’s listening port.

The HTTP request port is an optional field in the URL of an HTTP request. That is, the URL may be of the form, http://www.example.com:8080/index.html, in which case your proxy should connect to the host www.example.com on port 8080 instead of the default HTTP port, which is port 80. Your proxy must properly function whether or not the port number is included in the URL.

The listening port is the port on which your proxy should listen for incoming connections. Your proxy should accept a command line argument specifying the listening port number for your proxy. For example, with the following command, your proxy should listen for connections on port 15213:

    linux> ./proxy 15213 LRU

You may select any non-privileged listening port (greater than 1,024 and less than 65,536) as long as it is not used by other processes. Since each proxy must use a unique listening port and many people will simultaneously be working on each machine, the script port-for-user.pl is provided to help you pick your own personal port number. Use it to generate a port number based on your user ID:

    linux> ./port-for-user.pl jack
    jack: 45806

The port, p, returned by port-for-user.pl is always an even number. So if you need an additional port number, say for the Tiny server, you can safely use ports p and p+1.

Please don’t pick your own random port. If you do, you run the risk of interfering with another user.

Dealing with multiple concurrent requests#

Once you have a working sequential proxy, you should alter it to simultaneously handle multiple requests. The simplest way to implement a concurrent server is to spawn a new thread to handle each new connection request. Other designs are also possible, such as the prethreaded server described in the lectures.

  • Note that your threads should run in detached mode to avoid memory leaks.

  • The open_clientfd and open_listenfd functions described in the lectures (and included in csapp.h) are based on the modern and protocol-independent getaddrinfo function, and thus are thread safe.

Caching web objects#

For the final part of the assignment, you will add a cache to your proxy that stores recently-used Web objects in memory. HTTP actually defines a fairly complex model by which web servers can give instructions as to how the objects they serve should be cached and clients can specify how caches should be used on their behalf. However, your proxy will adopt a simplified approach.

When your proxy receives a web object from a server, it should cache it in memory as it transmits the object to the client. If another client requests the same object from the same server, your proxy need not reconnect to the server; it can simply resend the cached object.

Obviously, if your proxy were to cache every object that is ever requested, it would require an unlimited amount of memory. Moreover, because some web objects are larger than others, it might be the case that one giant object will consume the entire cache, preventing other objects from being cached at all. To avoid those problems, your proxy should have both a maximum cache size and a maximum cache object size.

Maximum cache size#

The entirety of your proxy’s cache should have the following maximum size:


When calculating the size of its cache, your proxy must only count bytes used to store the actual web objects; any extraneous bytes, including metadata, should be ignored.

Maximum object size#

Your proxy should only cache web objects that do not exceed the following maximum size:


For your convenience, both size limits are provided as macros in proxy.c.

The easiest way to implement a correct cache is to allocate a buffer for each active connection and accumulate data as it is received from the server. If the size of the buffer ever exceeds the maximum object size, the buffer can be discarded. If the entirety of the web server’s response is read before the maximum object size is exceeded, then the object can be cached. Using this scheme, the maximum amount of data your proxy will ever use for web objects is the following, where T is the maximum number of active connections:


Eviction policies#

Your proxy’s cache should employ two eviction policies.

  • The first one approximates a least-recently-used (LRU) eviction policy. It doesn’t have to be strictly LRU, but it should be something reasonably close. Note that both reading an object and writing it count as using the object.
  • The second one is a least-frequently-used (LFU) policy.

We will test your cache implementation using both policies.


Accesses to the cache must be thread-safe, and ensuring that cache access is free of race conditions will likely be the more interesting aspect of this part of the assignment. As a matter of fact, there is a special requirement that multiple threads must be able to simultaneously read from the cache. Of course, only one thread should be permitted to write to the cache at a time, but that restriction must not exist for readers. As such, protecting accesses to the cache with one large exclusive lock is not an acceptable solution. You may want to explore options such as partitioning the cache, and of course, using Pthreads readers-writers locks. Note that in the case of LRU, the fact that you don’t have to implement a strictly LRU eviction policy will give you some flexibility in supporting multiple readers.


This assignment will be graded out of a total of 10 points:

  • Basic Correctness: 4 points for basic proxy operation
  • Concurrency: 3 points for handling concurrent requests
  • Cache: 3 points for a working cache

Basic Correctness#

  • Correct handling of GET requests, including proper identification of host, parsing of request and headers, and forwarding of requests to the cache or the server
  • Proper error handling (e.g., when a connection with the server cannot be established)
  • Proxy must not crash due to malformed requests
  • No client connections are dropped due to bugs in the code
  • No memory leaks


Concurrency#

  • Proper implementation of thread pooling
  • Optimized use of reader/writer locks for synchronization whenever necessary
  • No races or deadlocks


Cache#

  • If the requested object is in the cache, the proxy must serve it from the cache rather than contacting the server
  • Proper implementation of replacement policies (LRU and LFU)
  • Respect caching constraints (e.g., maximum cacheable object size)


Robustness#

As always, you must deliver a program that is robust to errors and even malformed or malicious input. Servers are typically long-running processes, and web proxies are no exception. Think carefully about how long-running processes should react to different types of errors. For many kinds of errors, it is certainly inappropriate for your proxy to immediately exit.

Robustness implies other requirements as well, including invulnerability to error cases like segmentation faults and a lack of memory leaks and file descriptor leaks.

Testing and debugging#

You will not have any sample inputs or a test program to test your implementation. You will have to come up with your own tests and perhaps even your own testing harness to help you debug your code and decide when you have a correct implementation. This is a valuable skill in the real world, where exact operating conditions are rarely known and reference solutions are often unavailable.

Fortunately there are many tools you can use to debug and test your proxy. Be sure to exercise all code paths and test a representative set of inputs, including base cases, typical cases, and edge cases.

Tiny web server#

Your handout directory contains the source code for the Tiny web server. The Tiny web server will be easy for you to modify as you see fit. It’s also a reasonable starting point for your proxy code.


telnet#

As described in your textbook (11.5.3), you can use telnet to open a connection to your proxy and send it HTTP requests.


curl#

You can use curl to generate HTTP requests to any server, including your own proxy. It is an extremely useful debugging tool. For example, if your proxy and Tiny are both running on the local machine, Tiny is listening on port 15213, and proxy is listening on port 15214, then you can request a page from Tiny via your proxy using the following curl command:

linux> curl -v --proxy http://localhost:15214 http://localhost:15213/home.html
* About to connect() to proxy localhost port 15214 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 15214 (#0)
> GET http://localhost:15213/home.html HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu)...
> Host: localhost:15213
> Accept: */*
> Proxy-Connection: Keep-Alive
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: Tiny Web Server
< Content-length: 120
< Content-type: text/html
<img align="middle" src="godzilla.gif">
Dave O'Hallaron
* Closing connection #0

Coding and Implementation Hints#

  • Fork the Assignment 2 repo and then clone it locally. We provide some template files for you to start the assignment.

  • Use the functions in csapp.h as advised throughout the networking and concurrency lectures.

  • As discussed in the lectures, using standard I/O functions for socket input and output is a problem. Instead, we recommend that you use the Robust I/O (RIO) package, which is provided in the csapp.c file in the handout directory.

  • The error-handling functions provided in csapp.c are not appropriate for your proxy because once a server begins accepting connections, it is not supposed to terminate. You’ll need to modify them or write your own.

  • Add all your code to cache.c and proxy.c and the corresponding header files. You are free to modify these files in the handout directory any way you like.

  • Sometimes, calling read to receive bytes from a socket that has been prematurely closed will cause read to return -1 with errno set to ECONNRESET. Your proxy should not terminate due to this error either.

  • Remember that not all content on the web is ASCII text. Much of the content on the web is binary data, such as images and video. Ensure that you account for binary data when selecting and using functions for network I/O.

  • Forward all requests as HTTP/1.0 even if the original request was HTTP/1.1.

Submitting your work#

You should submit your solution through Gitlab by pushing changes to your fork of the assignment repository. A marker account should automatically have been added to your fork of the assignment.

We recommend maintaining good git hygiene by having descriptive commit messages and committing and pushing your work regularly. We will not accept late submissions.
