- This is a 24h take-home exam.
- The exam runs from 12 noon on Tuesday, 6 November to
12 noon on Wednesday, 7 November
- The basic exam question is available now, but the papers you
are asked to analyse will be made available only at 12 noon on Tuesday, 6 November.
- The papers will be available electronically via this WWW page.
Email Gernot if you need hardcopies so we can try to accommodate you.
- The total exam is worth 35 marks.
- You will lose 3.5 marks for each hour, or part thereof, your
submission is late.
- You are not to get any help from anyone on the exam. You should
not talk to anyone else about the exam from the time you receive
the full details until you submit your solution.
- You have the choice of three different ways to submit your solution:
- Submit a hardcopy by the deadline (12 noon on Wednesday, 7 November). It must be accompanied by the signed
certification of sole
authorship. It must be submitted to Gernot Heiser,
Kevin Elphinstone, Aaron Carroll,
or Anna Lyons in person. In
addition, you must submit an electronic version (PDF) within three
days of the end of the exam.
- Submit an electronic copy (PDF), via the give system, by the
deadline, and submit a hardcopy, including the signed certification of sole authorship,
within three days of the end of the exam.
- Submit, via the give system, by the deadline, a digitally
signed electronic copy (PDF). This must contain your full name,
student number, date, and the following declaration:
I hereby declare that this submission is my own work, and I have
not received any help whatsoever.
The file must be signed with PGP/GPG, be ASCII-armoured, and have an
extension .pdf.asc. This is
achieved (when using PGP 6.0 or later) with the command:
pgp -sa <file>
or, using GPG, with the command
gpg -sa <file>
See the GPG Example for how to use
GPG for signing and check that you've got it right.
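As a sanity check before submitting, the sign-and-verify round trip might look like the following sketch (the file name exam.pdf is a placeholder; it assumes you already have a key pair set up):

```shell
# Sign the report, producing the ASCII-armoured file exam.pdf.asc.
gpg -sa exam.pdf

# Verify the signature on the armoured file.
gpg --verify exam.pdf.asc

# Recover the original PDF from the signed file and confirm it is intact.
gpg --output check.pdf --decrypt exam.pdf.asc
cmp exam.pdf check.pdf
```

If the verify step complains, fix your setup well before the deadline.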
Notes on electronic submissions
- In the cases where a paper submission is also required (cases 1 and
2 above), the electronic submission, when printed on a CSE printer, must
appear exactly as the submitted hardcopy.
- The submission must be in PDF format (and use the extension
.pdf).
- The submission must be made via the give system.
- If using give, check you can submit your exam (or any
exam report) well before the deadline. We will have little sympathy for
submission issues if you raise them five minutes before the deadline.
- Make sure that you only use Type-1 fonts, as others are
unprintable on some printers. LaTeX users can ensure the use of Type-1
fonts by producing a PostScript file with the command
dvips -Pwww -o file.ps file
and then converting this to PDF.
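For LaTeX users, the complete pipeline could look like the sketch below (report.tex is a hypothetical file name; ps2pdf ships with Ghostscript, and pdffonts, from poppler-utils, lets you check which font types ended up embedded):

```shell
latex report.tex                  # typeset: produces report.dvi
dvips -Pwww -o report.ps report   # -Pwww selects Type-1 versions of the fonts
ps2pdf report.ps report.pdf       # convert the PostScript to PDF
pdffonts report.pdf               # every listed font should say "Type 1"
```

If pdffonts reports Type 3 fonts, your PDF will print badly on some printers.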
Notes on digitally signed submissions
Digitally signing your submission only makes sense if we can verify your
signature. I therefore require you to have your public key signed by
Gernot or Kevin beforehand, or within three days of the end of the
exam. Therefore, if you want to use the digital signature option, do the following:
- Familiarise yourself with PGP (or GPG). We will not
provide tutorials on this; it's up to you. Get yourself a public key
if you don't have one. Follow the recommended safeguards to keep
it secure. It will be like your normal signature!
- See Gernot or Kevin with your key and proof of identity. He
will then sign your key.
- This signed key can then be used to sign your exam. (Feel
free to get others to sign your key as well.) If you get your key
signed after the exam, make sure that it is the same key as used for
signing the exam.
- Make sure that PGP or GPG is installed on the system you are
going to use to write your exam, and that you can use it reliably. If
you stuff up, it's your own problem.
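For the key-signing step, one way to bring your key along (you@example.com is a placeholder for your own key's address) is to note its fingerprint and export the public key:

```shell
# Print the fingerprint of your key; this is what gets checked
# against your proof of identity before your key is signed.
gpg --fingerprint you@example.com

# Export the public key in ASCII-armoured form, e.g. to hand over on a USB stick.
gpg --armor --export you@example.com > mykey.asc
```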
You are given two research papers (the links will be active from
12 noon on Tuesday, 6 November):
- Paper 1 J-H Ding, C-J Lin, P-H Chang, C-H Tsang, W-C Hsu and Y-C Chung: “ARMvisor: System Virtualization for ARM.” Linux Symposium. 2012.
- Paper 2 K Okamura and Y Oyama: “Controlling the Speed of Virtual Time for Malware Deactivation.” 3rd Asia-Pacific Workshop on Systems (APSys). 2012.
You are to read, understand, and critically assess the papers. Questions
you may want to ask yourself for each of the papers:
These are only hints; I am not asking you to explicitly answer all these
for each paper. However, you may find those questions helpful in
critically analysing the papers. Imagine you are a reviewer for a
conference to which the papers have been submitted, and you are to judge
their contribution to the field.
- What problem is it trying to address?
- How well does it address the issue?
- How well do they motivate the value of solving the problem? Is
the problem a real problem? Is the problem/work significant?
- How does it relate to other work? Does
it reference relevant other work (as far as you can tell), does it
do the other work justice? Has it all been done before?
- How technically sound is it? Do their argumentation and the
presented data convince you? Should they have been looking at other data?
- How good are the results?
- How good/deep is their analysis?
- How easy would it be to reproduce their results?
- How general are their results? Can they be applied to other
systems? Did we learn some general truth?
Note that all papers are in fact published (and should therefore
meet certain quality standards, one hopes :-). In order to get an idea of what program committees at
top systems conferences are looking for, have a look at this classic!
What to submit
You are to submit a report which summarises for each paper the basic
ideas behind their work. You are to give a critique of the technical
merits, achievements and shortcomings (if any). The papers are not
directly related, so you don't have to compare them.
I am intentionally not specifying a length limit. However, I
strongly encourage you to be concise. Lengthy submissions will almost
certainly be unfocussed and waffly. I cannot imagine a decent job in
excess of 3000 words, and a very good submission
should fit into 2–3 pages. If your report gets longer than
this you should step back and try to focus.
A good way to structure your review is the standard approach taken
by conference program committees, which tend to use some variant of a
basic structure which has the following sections:
- Summary of the paper. This is about three paragraphs summarising
what the paper is trying to achieve and how it goes about it, and
how relevant the work is. Given that you're trying to convince me
that you got it, you may go into a bit more detail than the
typical reviewer would (who is selected as an expert in the field
and doesn't have to prove themselves). So you may want to write up
to a page here (but be concise!)
Also, this section should not focus on
criticising the paper (although with a bad paper I frequently
find it difficult to state what they are doing without noting
that it is wrong...)
- Pros: What you like about the paper (a list of bullet points or short prose).
- Cons: What you don't like about the paper.
- Criticism of the work. Things which are wrong, insufficient,
could be improved. But also detailed discussion of the
strengths. If you were a reviewer whose job is to recommend
whether the paper should be accepted or rejected, this is where
you make your case. But even if you generally like the paper,
discuss its shortcomings, and even if you would reject it, discuss
its redeeming features. This is the most important part of the review.
- Minor issues that should be fixed (typos, grammar, etc). These are
part of a formal paper review, but not really relevant to this
exercise, so omit (unless something really irks you ;-).
- Questions to the authors. Many conferences have a rebuttal period,
where authors get the chance to comment on the reviews before the
decision is made. If you think there is something the authors
could clarify to help you make a decision, this is the
place. Again, given that the paper is already published, this
section is obviously optional.
- Points that must be addressed if accepted. Many conferences use a
shepherding process, where accepted papers are assigned a shepherd
who supervises the revisions and ensures that the authors follow
the requests of the reviewers. Again, in your case it's too late,
but if you think that there are improvements that should
have been made prior to publication, then this is the place to
make your point.
Note: In order to help us to perform an unbiased assessment of
your report, we would appreciate it if you do not put your name on
the report itself, only your student ID. Of course, your name
must appear on the certificate that is attached to the report. However,
as long as this certificate is on a separate page, we can assess the
reports without looking at names.
What I will be looking for
You will be marked on the level of understanding and critical analysis
portrayed in your submission, all relative to what can reasonably be
expected from you (I know that none of you have a PhD in OS yet :-).
Here are (very good) sample solutions, which were done in “real-time” by the students taking the exam (the two reports are from different students):
- Sample report on Ding et al. (1,900 words).
The student thoroughly (and rightly) demolishes the paper, but still there are a number of other issues which could be mentioned:
- The bulk of the paper is a description of well-known virtualization techniques
- Their use of the ARM MMU is incompetent, they should have read Wiggins and Heiser 2000 (but that wasn't covered in class)
- Their arguments about high trap costs are bogus, and show that they don't understand the ARM architecture. While on x86 a trap costs hundreds of cycles, on their ARM it's no more than a dozen.
- The argument about closed-source status of OKL4 is also bogus: older versions (from OK Labs and NICTA) are open-source, still available and performing much better than ARMvisor, and there is the maintained ARM version of Fiasco, which is GPL
- Not only do they not benchmark with multiple guests, I'm not even convinced that their system works with multiple guests!
- They commit a classic benchmarking crime by only benchmarking against themselves!
- Sample report on Okamura and Oyama. (780 words).
The student identified pretty much all the relevant shortcomings of the paper, with the sole exception of not mentioning that there was no examination of overheads. While this is a workshop paper, where no thorough analysis is expected, I'd at least expect to see an idea of cost.
Note: this is an exam, not betting on horses (even if it's held on
Melbourne Cup day ;-). It is dangerous to guess what I might
think of the paper, or to guess that there'll be a good and a bad
one. Papers are selected on other criteria.
You may find it useful to look at the
earlier exams, such as the 1999 exam,
and the sample reports provided there.
03 Jan 2013.