
Cardiff School of Computer Science and Informatics
Coursework Assessment Pro-forma
Module Code: CM2101
Module Title: Human Computer Interaction
Lecturer: Dr Daniel J. Finnegan
Assessment Title: Human Centred Experiment Design
Assessment Number: 2
Date Set: 17/Mar/2023
Submission Date and Time: by 28/Apr/2023 at 9:30am
Feedback return date: 23/Jun/2023
If you have been granted an extension for Extenuating Circumstances, then the
submission deadline and return date will be 2 weeks later than that stated above.

If you have been granted a deferral for Extenuating Circumstances, then you will be
assessed in the summer resit period (assuming all other constraints are met).
This assignment is worth 20% of the total marks available for this module. If coursework
is submitted late (and where there are no extenuating circumstances):
1 If the assessment is submitted no later than 24 hours after the
deadline, the mark for the assessment will be capped at the minimum
pass mark;
2 If the assessment is submitted more than 24 hours after the deadline, a
mark of 0 will be given for the assessment.
Extensions to the coursework submission date can only be requested using the
Extenuating Circumstances procedure. Only students with approved extenuating
circumstances may use the extenuating circumstances submission deadline. Any
coursework submitted after the initial submission deadline without approved
extenuating circumstances will be treated as late.
More information on the extenuating circumstances procedure can be found on the
Intranet: https://intranet.cardiff.ac.uk/students/study/exams-and-assessment/extenuating-circumstances
By submitting this assignment you are accepting the terms of the following declaration:

I hereby declare that my submission (or my contribution to it in the case of group
submissions) is all my own work, that it has not previously been submitted for
assessment and that I have not knowingly allowed it to be copied by another student. I
understand that deceiving or attempting to deceive examiners by passing off the work of
another writer as one’s own is plagiarism. I also understand that plagiarising another’s
work or knowingly allowing another student to plagiarise from my work is against the
University regulations and that doing so will result in loss of marks and possible
disciplinary proceedings.¹
¹ https://intranet.cardiff.ac.uk/students/study/exams-and-assessment/academic-integrity/cheating-and-academic-misconduct
Assignment
This coursework is divided into three parts. Part I may be completed as a team or
individually. Parts II and III must be completed individually. Part I requires you to design an
experiment recording three measures relating to a task of your choice, hereafter known as
‘The Task’, for two systems. Part II involves the analysis of dummy experiment results. Part III
involves reporting and reflecting on your experience.
Part 0: Prerequisite
You MUST complete the Cardiff University Research Integrity Online Training Programme
and submit your certificate with your report. You can do this by following this link to the
ethics webpage: https://www.cs.cf.ac.uk/ethics/. You may have completed this programme
already: if so, you should provide the certificate you received upon completion. NB: Do
NOT complete the application form (the last stage in the flow chart on the ethics webpage)
and do NOT submit to the SREC.
Part I: Experiment Design
In Part I your task is to specify how you would evaluate the usability of two working
computer systems of your choice in a human centred experiment. You should choose two
systems: the first is the ‘Candidate System’ and the second is the ‘Comparison System’. You
should try to pick these systems randomly where possible, i.e., do not try to determine
which system is the ‘best’ before designing your experiment. This is important: the systems
you choose must be real systems (e.g., DuckDuckGo search engine, Learning Central). If you
can’t decide, then you should compare MS Word with LibreOffice Writer. Your hypothesis
for this experiment is given in the following quote. You must fill in the blanks as appropriate.
“It is quicker to [do/complete ‘The Task’] and people rank
[Candidate System] higher than [Comparison System] using the
SUS because [reason based on observation].”
As part of the coursework, dummy data has been generated simulating several participants
completing your experiment, split into two groups: participants in group A performed ‘The
Task’ using the ‘Candidate System’ while participants in group B performed ‘The Task’ using
the ‘Comparison System’. The following measures were simulated in the experiment:

  • Time to complete (TTC): the total time taken to complete ‘The Task’.
  • Number of errors made (ERR): the number of errors made by participants during ‘The
    Task’, for example clicking on the wrong UI element or repeating steps in ‘The Task’.
  • SUS: a 10-item questionnaire (details below) rating the usability of the system taken
    after completing ‘The Task’.
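
For reference, the conventional SUS scoring procedure converts the ten 1-5 Likert responses
into a single 0-100 score: each odd-numbered item contributes (response - 1), each
even-numbered item contributes (5 - response), and the summed contributions are multiplied
by 2.5. The minimal Python sketch below illustrates this; the function name and the example
responses are illustrative only and are not part of the coursework data.

    def sus_score(responses):
        """Score one SUS questionnaire from ten 1-5 Likert responses (item 1 first)."""
        assert len(responses) == 10, "SUS has exactly 10 items"
        total = 0
        for item, r in enumerate(responses, start=1):
            if item % 2 == 1:
                total += r - 1      # odd items: agreement counts positively
            else:
                total += 5 - r      # even items: agreement counts negatively
        return total * 2.5          # rescale the 0-40 raw range to 0-100

    # Example with made-up responses:
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0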
It is your responsibility to design the experiment, individually or as a team, considering
these measures in your design. You are encouraged to meet in groups in the first instance to
share ideas for an experiment design; however, the report you submit in Part III MUST be
your own work. NB: You must NOT conduct your experiment with any participants and/or
collect data in any way. Doing so without prior ethical approval will result in a breach of the
university’s ethics policy for human research:
https://intranet.cardiff.ac.uk/staff/supporting-your-work/research-support/research-integrity-and-governance/research-ethics.
Part II: Analysing Dummy Results
The dummy data provided contains the raw data for the three measures specified above. SUS is
the usability score for the experiment, based on the System Usability Scale (SUS). SUS scores
do not tell us much by themselves. When interpreting SUS, we may rank the usability of a
system compared to other systems developed in the past. To do so, Figure 1 shows historical
data for 6 previous systems, with each datum calculated as the median of a set of individual
SUS scores from several participants in a simulated experiment. Compute the median score for
your dummy SUS data and compute the percentile rank (PR) for your systems using the data in
Figure 1. What does this mean for each system? NB: if you use any spreadsheet software (e.g.,
MS Excel, Google Sheets), you may NOT use built-in functionality/macros/APIs to compute the
PR. You MUST compute it yourself and include it as an equation in your report. The same
applies to any other software or method (e.g., a Python script) you choose to compute the
percentile rank with. Questions about how to compute the PR will not be answered: computing
it correctly is part of what is being assessed in this coursework.
Figure 1: Historical SUS data

System    SUS Score
1         60
2         80
3         70
4         30
5         90
6         85
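
As an illustration only: one common definition of the percentile rank of a score x within a
reference set of N scores is PR(x) = 100 * (L + 0.5 * E) / N, where L is the number of
reference scores below x and E is the number of reference scores equal to x. Other
definitions exist; choosing and justifying a definition is part of the coursework. The Python
sketch below computes a median and this particular PR definition by hand, without built-in
percentile functions; the SUS scores in it are made up for illustration and are not your
dummy data.

    # Reference scores from Figure 1.
    historical = [60, 80, 70, 30, 90, 85]

    def median(values):
        """Middle value of a sorted list (mean of the two middle values if even)."""
        s = sorted(values)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

    def percentile_rank(x, reference):
        """PR(x) = 100 * (below + 0.5 * equal) / N, one common convention."""
        below = sum(1 for v in reference if v < x)
        equal = sum(1 for v in reference if v == x)
        return 100 * (below + 0.5 * equal) / len(reference)

    # Made-up SUS scores standing in for one group's dummy data:
    candidate_median = median([72.5, 80.0, 67.5, 85.0])   # -> 76.25
    print(percentile_rank(candidate_median, historical))  # -> 50.0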
Identify the simulated participants in each group whose SUS scores are at or above the 75th
percentile with respect to your dataset. Ask yourself: “how do these simulated participants’
SUS scores relate to their raw TTC and ERR scores?”
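
A sketch of this cross-check is below, again with invented placeholder records and one common
convention for the 75th percentile (the smallest score with at least 75% of the scores at or
below it); other conventions exist, and whichever you use should be stated in your report.

    # Invented placeholder records, not the real dummy data.
    group_a = [
        {"id": "A1", "sus": 72.5, "ttc": 95,  "err": 2},
        {"id": "A2", "sus": 85.0, "ttc": 80,  "err": 1},
        {"id": "A3", "sus": 67.5, "ttc": 120, "err": 4},
        {"id": "A4", "sus": 90.0, "ttc": 75,  "err": 0},
    ]

    def percentile_75(scores):
        """Smallest score with at least 75% of the scores at or below it."""
        s = sorted(scores)
        for v in s:
            if sum(1 for w in s if w <= v) >= 0.75 * len(s):
                return v

    cutoff = percentile_75([p["sus"] for p in group_a])   # -> 85.0
    for p in group_a:
        if p["sus"] >= cutoff:
            print(p["id"], p["sus"], p["ttc"], p["err"])  # A2 and A4 here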
Part III: The Report
Write an individual report containing the details of your experiment design. When writing your
report, you MUST address the following questions as stated in the template document. The
marking scheme is included in parentheses. There are a total of 10 marks for this coursework.

1. Write the hypothesis with the blanks filled (2 marks).
2. Compute the SUS score percentile rank for your ‘Candidate System’ and ‘Comparison System’
   (1 mark) and include the equation with terms explained (1 mark).
3. Give a clear (e.g., step-by-step) description of your experiment procedure (3 marks) and
   comment on your design (3 marks). This should include ‘The Task’ you’ve chosen in Part I.
   You must justify all decisions in your procedure. There is a maximum of 400 words for this
   question. This is important: if your submission exceeds this word count you may receive a
   mark of 0 for this coursework.
You should use the criteria for assessment (Figure 2) as a set of guidelines for your writing.

Learning Outcomes Assessed
  • Recognize the importance of identifying and involving users in the design and evaluation
    of interactive systems.
  • Practical skills for evaluating interactive software systems.

Criteria for assessment
Credit will be awarded against the following criteria.

Figure 2: Assessment Criteria
Fail:
  • No attempt made at computing the SUS percentile rank.
  • The report is incomplete, incoherent, and lacking minimum requirements.

3rd (40%-49%):
  • The SUS percentile rank score is computed incorrectly.
  • The report contains some detail, but not enough to reproduce the experiment, or it is
    unclear.

2.2 (50%-59%):
  • The SUS percentile rank score is computed correctly.
  • The report contains enough detail to reproduce the experiment procedure yet is lacking in
    depth and/or decisions are not justified satisfactorily.

2.1 (60%-69%):
  • The SUS percentile rank score is computed correctly.
  • The report is of a medium quality, containing enough detail to reproduce the experiment
    procedure approximately, with decisions justified satisfactorily.

1st (70%+):
  • The SUS percentile rank score is computed correctly.
  • The report is of a high quality, containing impressive detail to reproduce the experiment
    procedure exactly, with decisions masterfully justified, e.g., including discussion of the
    limitations of the experiment procedure.

Feedback and suggestions for future learning
Feedback on your coursework will address the above criteria. Feedback and marks will
be returned by the date stated on the front page of this document via email. Feedback
from this assignment will be useful for CM3203: Individual Project.
Submission Instructions
You must submit an individual report via Learning Central. The report is subject to a strict
word limit, as specified in Part III. You MUST use the template (.md) document provided
to write your report. Do NOT change the filename or file extension (.md) of this file. This is
important: any deviation from this may result in a mark of 0 for your coursework.
Appendices are not allowed. Any compressed file format other than .zip (e.g., .rar, .7z) is
not allowed.
You must also submit a copy of the school cover sheet as stated above, and a completed
ethics certificate as specified in Part 0. When submitting, bundle your report text document
with your completed ethics certificate. Please follow the naming conventions shown in
Figure 3 when submitting your files. Delete the brackets [] around your student number.
This is important: any deviation from this may result in a mark of 0 for your coursework.
Figure 3: Key deliverables for coursework

Description: Individual report + Cardiff University Research Integrity Online Training
             Programme Certificate
Type:        Compulsory; one zip (.zip) file
Name:        [student-number]-report.zip
Staff reserve the right to invite students to a meeting to discuss coursework submissions.
Support for assessment
Questions about the assessment can be asked on https://stackoverflow.com/c/comsc/
and tagged with ‘cm2101’, or at the beginning of the lectures in Weeks 6 and 8. Support
for the assessment will be available in the lab classes in Weeks 6 and 8.