COMP9242 Advanced Operating Systems
2017/S2, UNSW

M5: File System

Milestone overview

In this milestone, you will:

  • Implement support for open, close, read, write, getdirent and stat in SOS.
    • File protection (read/write permissions) should be respected.
    • Non-existing files should be created with read+write permissions by default.
  • Benchmark your file system to show read and write performance.

The SOS file system features a flat directory structure, containing files and the terminal device console. Files have attributes (such as size, modification time and protection) that can be accessed through appropriate system calls. All I/O primitives provided by SOS are synchronous (blocking).

You may want to maintain file descriptors containing the current read position in the file. This requires some minimal Process Control Block (PCB) data structure. Again, you will need to modify this structure as you go, so do not waste too much time on the full details right now. You may wish to initially deal with only a single client process, whose data is kept in a single PCB variable. This avoids dealing with process IDs for the time being.
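As a rough illustration only, here is one possible shape for a per-process open-file table. The names (open_file_t, pcb_t, MAX_OPEN_FILES) and the fixed table size are assumptions made for this sketch, not part of the provided SOS code.

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_OPEN_FILES 16     /* per-process limit; chosen arbitrarily */

    /* One entry per open file descriptor. */
    typedef struct open_file {
        bool   in_use;
        int    mode;       /* access mode the file was opened with */
        size_t offset;     /* current read/write position */
        void  *vnode;      /* handle to the underlying file (NFS file or console) */
    } open_file_t;

    /* Minimal PCB: a single global instance suffices while SOS supports
     * only one client process. */
    typedef struct pcb {
        open_file_t fd_table[MAX_OPEN_FILES];
        /* ...address space, TCB cap, etc. added in later milestones... */
    } pcb_t;

    static pcb_t the_process;     /* the single client process, for now */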

The low-level file system is implemented by the NFS file system on your host machine. You are provided with an NFS client library for accessing files on your host Linux machine. Once again, it is your responsibility to understand the workings of the provided code.

IP stack and NFS

You need to update the 100ms tick you implemented in milestone 1 to call nfs_timeout, defined in libs/libnfs/src/nfs.c. This allows the NFS library to retransmit any requests whose packets were dropped.
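As a sketch only: this assumes your milestone 1 timer driver exposes a one-shot registration function (register_timer below is a made-up name and signature) and that nfs_timeout takes no arguments; check the prototypes in your own timer interface and in the provided nfs.h.

    #include <stdint.h>
    #include <nfs/nfs.h>       /* nfs_timeout(); include path may differ in your tree */

    /* Stand-in for your M1 timer interface: call `callback(id, data)` once,
     * `delay` microseconds from now. */
    extern uint32_t register_timer(uint64_t delay,
                                   void (*callback)(uint32_t id, void *data),
                                   void *data);

    #define NFS_TICK_US (100 * 1000)    /* 100 ms */

    static void nfs_tick(uint32_t id, void *data)
    {
        (void)id; (void)data;
        nfs_timeout();                               /* let libnfs retransmit lost packets */
        register_timer(NFS_TICK_US, nfs_tick, NULL); /* re-arm the one-shot timer */
    }

    void start_nfs_tick(void)
    {
        register_timer(NFS_TICK_US, nfs_tick, NULL);
    }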

Apart from initialisation and mounting of file systems, the NFS library provides an asynchronous interface: when you call a function, you must also provide a callback that will be invoked when the transaction completes. This makes it straightforward to provide a blocking interface to clients without blocking the whole system.
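To illustrate the general shape of the interface: each call takes a completion callback and an opaque token, and returns before the operation finishes. The prototypes below (argument order, the fattr_t and enum nfs_stat types) are a best-effort sketch; take the real ones from the provided nfs.h, not from here.

    #include <string.h>
    #include <stdint.h>
    #include <nfs/nfs.h>       /* provided libnfs; include path may differ */

    /* Per-request bookkeeping, so the callback can find the waiting client. */
    struct io_request {
        char *client_buf;      /* where the data should end up */
        int   offset;          /* file position of this read */
        /* ...plus whatever you need to resume the blocked client... */
    };

    /* Completion callback: invoked by the NFS library once the reply arrives. */
    static void read_done(uintptr_t token, enum nfs_stat status,
                          fattr_t *fattr, int count, void *data)
    {
        struct io_request *req = (struct io_request *)token;
        (void)fattr;
        if (status == NFS_OK) {
            memcpy(req->client_buf, data, count);   /* deliver the data */
        }
        /* ...translate the status, record the byte count, wake the client... */
    }

    /* Issuing the request: returns immediately; read_done() runs later. */
    void start_read(fhandle_t *fh, struct io_request *req, int nbytes)
    {
        nfs_read(fh, req->offset, nbytes, read_done, (uintptr_t)req);
    }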

The NFS server you will be talking to is your host machine. The file system you mount is /var/tftpboot/$USER, where $USER should be replaced by your CSE username.

Design issues

One of the major issues you will need to deal with in this milestone is converting an asynchronous interface (as provided by libnfs) into a synchronous interface (as required by the system call interface).
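One possible approach (a sketch only, assuming client system calls arrive over seL4 IPC on a pre-MCS kernel, where a reply capability can be saved with seL4_CNode_SaveCaller): park the caller by saving its reply cap, issue the asynchronous NFS operation, and send the reply from the completion callback. The pending_syscall structure and the choice of CNode and depth are assumptions for illustration.

    #include <sel4/sel4.h>

    /* Hypothetical per-request state: a pre-allocated empty CSpace slot
     * in which to park the client's reply capability. */
    struct pending_syscall {
        seL4_CPtr reply_cap;    /* slot in the SOS CSpace holding the saved reply */
        int       result;       /* value to return to the client */
    };

    /* In the system call handler: suspend the caller instead of replying now. */
    void begin_blocking_syscall(struct pending_syscall *req)
    {
        /* Save the reply cap so we can reply long after this handler returns.
         * The root CNode and depth here assume a simple, single-level CSpace. */
        seL4_CNode_SaveCaller(seL4_CapInitThreadCNode, req->reply_cap, seL4_WordBits);

        /* ...issue the asynchronous nfs_read()/nfs_write() here, passing req as the token... */
        /* Return without replying: the client stays blocked in its seL4_Call(). */
    }

    /* In the NFS completion callback: resume the client. */
    void complete_blocking_syscall(struct pending_syscall *req)
    {
        seL4_MessageInfo_t reply = seL4_MessageInfo_new(0, 0, 0, 1);
        seL4_SetMR(0, req->result);
        seL4_Send(req->reply_cap, reply);   /* wakes the blocked client */
        /* ...then free or recycle the reply cap slot... */
    }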

If you are planning on doing the file system caching or dynamic file system advanced components, you should consider them in your design now.

For file descriptor allocation, we recommend implementing the lowest-available policy, i.e., returning the lowest numbered currently unused file descriptor. This will simplify later milestones.

Note: Supporting UNIX/POSIX fork semantics for file descriptors is not required.
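A minimal sketch of the lowest-available policy recommended above, using the fd_table from the earlier PCB sketch (the names remain assumptions):

    /* Return the lowest-numbered free file descriptor, or -1 if the table is full. */
    static int fd_alloc(pcb_t *proc)
    {
        for (int fd = 0; fd < MAX_OPEN_FILES; fd++) {
            if (!proc->fd_table[fd].in_use) {
                proc->fd_table[fd].in_use = true;
                proc->fd_table[fd].offset = 0;
                return fd;
            }
        }
        return -1;    /* too many open files */
    }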

Benchmarking

Once you have your file system up and running, you can use our supplied code to benchmark your file system's I/O performance. Don't leave this until the last minute, as it tends to be more time consuming than you expect (e.g. it finds bugs).

We've extended sosh to measure the achieved bandwidth for reads and writes. This is simply the maximum number of bytes per second you can transfer from the remote host to your user address space, and vice versa.

sosh has a new command, benchmark, that runs 10 iterations of a write and read bandwidth test for different buffer sizes and outputs the samples as a JSON file.

We have provided a script, parse_results.py, that processes the JSON file and creates two graphs of the performance characteristics of your SOS implementation (feel free to post your graphs on Piazza for comparison). The script also outputs the harmonic mean described in the bonus marking section.

Note: In general, any reported benchmark results should be the average of multiple runs (e.g. >= 5, ideally > 9), and standard deviations should accompany any reported results. Our sosh benchmark and script both adhere to this norm.


Assessment

Demonstration

You must demonstrate that you have implemented the open, close, read, write, getdirent and stat system calls. The supplied ls and cp commands in sosh can be used, together with our supplied benchmarking code.

You will need to present the two graphs from our benchmarks to your tutor. The two required graphs show bandwidth for file I/O while varying I/O request size and underlying NFS packet size (for appropriately large I/O requests). You must be able to explain why the graphs look the way they do. You can also show more (pertinent) graphs if you like. The results need to be an average of multiple samples. Standard deviations should be shown if significant.

As always, you should be able to explain any design decisions you made, specifically, the design of your open file table and how you dealt with the asynchronous/synchronous problem.

Bonus marks are available for the group with the highest bandwidth achieved in the benchmark. See the section below.

Show Stoppers

  • Only supporting buffer sizes that don't span a page.
  • Solutions that don't continue to accept new SOS system calls while I/O is outstanding.

Better Solutions

  • Perform no virtual memory management (mapping) in the system call path (for better performance).
  • Solutions that support multiple concurrent outstanding requests to the NFS server, i.e. attempts to overlap I/O to hide latency and increase throughput.
  • Better solutions pin in main memory only the pages associated with the currently active I/O, not necessarily every page of a large buffer.
  • Avoid double buffering.

Performance Bonus Marks

Deadline: The performance bonus is due with milestone 9. You are required to pass milestone 5 as with previous milestones; however, you are free to modify your system after passing the milestone to improve performance as the semester progresses.

Classes: There are three bonuses available, one for each of the following classes of systems.

  • Thread-based OS personalities. These are characterised by an SOS design that offloads SOS system call processing onto worker threads for completion. Typically the system call handling thread is not the thread that calls into the NFS library.
  • Event-like OS personalities (including co-routine and continuation based systems). These systems are characterised by a single thread (or a small fixed number of threads), where the thread does not block waiting for I/O, and instead preserves the current state in a continuation that is used later by the main thread or another I/O processing thread.
  • OS-personalities that implement buffer-cache-like strategies to minimise NFS traffic.

Bonus marks: 2 bonus marks are available for the group that achieves the highest I/O throughput in each class. Only one performance bonus is available to each group.

Throughput: for the purpose of this competition, throughput is defined as the harmonic mean of all the sample results from our provided benchmarking code. While you can modify the script for testing, the throughput value for competition purposes comes from an unmodified script.
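For reference, this is the standard harmonic mean: for n bandwidth samples x_1, ..., x_n,

    H = n / (1/x_1 + 1/x_2 + ... + 1/x_n)

Because the harmonic mean is dominated by the smallest samples, consistently good bandwidth across all buffer sizes matters more than a single high peak.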

Caveats: We reserve the right to reject submissions in the following situations, and in any situations we view as not in the spirit of the competition. We reserve the right to not award the bonus where we have no suitable submissions.

  • Solutions that do not read or write representative data to or from the NFS server.
  • Solutions that employ caching strategies that have not written back the data to the NFS server on close.

Reproducibility: The final performance result is based on what we can reproduce in testing on our hardware and environment.


Last modified: 04 Aug 2017.