The SOS file system features a flat directory structure containing files and the terminal device console. Files have attributes (such as size, creation time, and permissions) that can be accessed through appropriate system calls. All I/O primitives provided by SOS are synchronous (blocking).
You will need to maintain file descriptors containing the current read position in the file. This requires some minimal process control block (PCB) data structure. Again, don't worry at this stage about the exact final contents of the PCB; you can extend it later on. You may wish to initially deal with only a single client process, whose data is kept in a single PCB variable. This avoids dealing with process IDs for the time being.
The low-level file system is implemented by the NFS file system on your host machine. You are provided with an NFS client library for accessing files on your host Linux machine.
Before you can use the NFS library, you need to set up a network card driver. This requires starting libgt (the PCI driver) and libtulip (the network card driver). You are provided with a hardware.c that takes care of the details for you. If you are interested in the exact details involved in initialising the network driver, you should read the provided code.
NOTE: The original released hardware.c did not perform PCI scans. Use the patches here to correct this!
You are provided with a sos/network.c file, which provides network_init(). This should be called after the hardware_init() function has executed. Ensure this is called in a thread different from your interrupt handler, or else you will encounter deadlock.
Apart from initialisation and mounting file systems, the NFS library provides an asynchronous interface: when you call a function, you must also provide a callback function, which will be invoked when the transaction is complete. This should make it easy to provide a blocking interface to each client without blocking the whole system.
The NFS server you will be talking to is your host machine. The filesystem you mount is /tftpboot.
One of the major issues you will need to deal with in this milestone is converting an asynchronous interface (as provided by libnfs) into a synchronous interface (as required by the system call interface).
If you are planning on doing the file system caching or dynamic file system advanced components, you should probably consider them in your design now.
You must have some test code that demonstrates that you have implemented the open, close, read, write, getdirent and stat system calls. I would suggest adding a cp command to sosh, as this will test both reading and writing a file. A cp command, together with the supplied ls command, should be sufficient to show that your implementation works.
As always, you should be able to explain any design decisions you made, in particular the design of your open file table and how you dealt with the asynchronous/synchronous problem.