Host: Bristol Mathematics. Lecturer: Dr Daniel Lawson.

Data Science Toolbox


Assessment Overview

There are two types of assessment: the Portfolio and the Group Assessments.

The assessments are released in the appropriate Block of the timetable, but are also linked from this page (links become active only when the Block is released).

The Portfolio is assessed on each Block from 2 to 11. Block 1 is marked similarly but is formative, i.e. it does not contribute to your mark. The deadline is the start of TB2. In each Block you will complete two activities:

  1. Worksheets: Multiple-choice questions submitted via Noteable (log in via Blackboard). These should be straightforward, either answered directly from your notes or via very simple experiments that you can conduct as extensions of the Workshop. They are worth 20% of the Portfolio mark.
  2. Reflection: Long-form reflective questions that should require a deeper understanding of the course material and may require you to undertake further reading or experimentation. These are worth 80% of the Portfolio mark.

You may take the multiple-choice component at any time; it is recommended that you do this as you work through the Workshop content. The long-form content is submitted at the end of the course. It is recommended that you make a first draft or note-form attempt when you first see the content, and then revisit and polish it during the examination preparation time (in lieu of an exam).

Length and format of long-form portfolio

Your Portfolio should give a one-page answer to the question for each Block, so the whole Portfolio is only 10 pages long. However:

Guidance on Group Assessments

There is a complete Example Assessment.

Undertaking a group project online is a difficult process that requires care and planning. Help for planning your project is given in Block 1, and includes:

The individual assessment instructions contain significant guidance. These extra thoughts are less directly relevant but give context.

Comment on Markdown reflections:

The PDF versions of the example reflections are created using Pandoc, and the conversion is trivial:

pandoc -o RachelR_Reflection.pdf RachelR_Reflection.md 

Markdown is an acceptable format, though PDF looks nicer. Referencing is important, but don't overdo it; you might use footnotes [^ref1], or just place simple labels without worrying about Markdown formatting at all (label2).

[^ref1]: Lawson D, An Example Reference, 2020.

(label2): Lawson D, A Second Example Reference without Markup, 2020.

Comment on Report formats:

It is completely fine to present a well-commented Rmd or ipynb file. You are welcome to try to generate a beautiful PDF in which all of the results are knitted together, but it can be awkward if content is fundamentally separated. Yes, you can create a PDF from each file and merge the PDFs (a sketch follows below), and doing so once is educational, but it isn't the point of DST.
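
If you do try this, a minimal sketch of the workflow is shown below (the file names are hypothetical, and it assumes rmarkdown, nbconvert, a LaTeX installation and poppler-utils are available):

# Render each source file to PDF
Rscript -e 'rmarkdown::render("01-exploration.Rmd", output_format = "pdf_document")'
jupyter nbconvert --to pdf 02-modelling.ipynb
# Merge the resulting PDFs into a single report
pdfunite 01-exploration.pdf 02-modelling.pdf report.pdf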

Please commit your final output. It is generally considered bad practice to commit transient content to your repository; this includes the Jupyter Notebook with all of its content completed, and the HTML output of an Rmd file. However, for the purposes of generating a one-off assessed report, it is safest to do this, though ideally only in your final commit.

This is because it is possible that I cannot run your code, for a good reason or a bad one, and therefore I want to see what the output should be.
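
As a minimal sketch, that final commit might look something like this (the file names are hypothetical):

# Final commit only: include the rendered output alongside the source
git add report.Rmd report.html
git commit -m "Final submission with rendered output"
git push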

Why is transient content bad? Your repository will get bigger and take longer to process, as the whole history of everything that you've generated is stored. Text files compress very well, but binary objects such as images and data, hidden inside HTML or ipynb files, compress badly.
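
One way to keep transient content out of the history while you work is a .gitignore along these lines (the patterns are only a suggestion; if you ignore rendered output, you will need git add -f for the one-off final commit):

# .gitignore: keep transient rendered output out of the history
*.html
*_files/
.ipynb_checkpoints/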

Comment on data:

Don’t commit very large datasets to GitHub, and don’t commit modestly large ones unless necessary (and try not to duplicate them). Not only are there file size limits, it is also inefficient. Try to use a different data-sharing solution, such as OneDrive, for such data.
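
For example, a minimal sketch (the data path and README wording are illustrative):

# Keep the large data directory out of version control
echo "data/" >> .gitignore
# Point readers to the data instead, e.g. in the README
echo "Data: available from the project's shared OneDrive folder" >> README.md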