PYSC Final Review
Outline / Contents
1. Human Abilities
Function
Sensation and Perception
seeing
hearing
Motor
Communication
Mental
Factors that impact functions
Impairment
Physical Environment Demands
Social Environment Demands
Task Demands
Impairment and Disabilities
Impairment = disability?
No impairment = no disability?
What is disability?
Consider both Personal + Environmental = Contextual Factors
Disability is a social construct created by the interaction of one's abilities with context
Design Strategies to Support Abilities and Function
Specialized Design
compensates for special abilities and functional limitations
Assistive Technologies
increase functional capabilities of individuals with disabilities
Accessible Designs
compensate for functional limitations by minimizing environmental demands on individuals with disabilities
Universal Design
promotes use by people with various abilities and a large range of function levels through better design overall
Principles of Universal Design
Equitable use
Flexibility in use
Simple and Intuitive use
Perceptible information
Tolerance for error
Low physical effort
Size and space for approach and use
Accessible Design VS. Universal Design
What determines usability?
people's ability
design characteristics
task
2. Ethics
IRB issues
consent forms
statement that it is research
purpose and procedures
health and financial risk
compensation/costs
confidentiality
in case of injury
subject rights
appropriate reading level
What is research?
Typically defined as a systematic investigation on a subject that generally leads to the production of explicit knowledge, adding to the existing body of knowledge about the subject
What is a human subject?
3. HCI Research
Founded on empirical principles: relies on, or is derived from, observation or experiment
Characteristics of HCI research
Inherent conflicts in HCI
complex
there is not one optimal solution
tradeoffs and multiple stakeholders with conflicting goals
Interdisciplinary nature
Each discipline has its own methods
Modified methods and created new ones
What does it look like
Emphasizes systematic observations of a sample of
individual behavior
interactions - among people, between people and objects
Uses various degree of "control"
descriptive or comparative research
maintain reliability and validity
Uses common methods
Questions in general
what's useful
what's usable
Forms of research
Formative
predictive
through the lifecycle, early and iterative, evaluate the design
Summative
make judgments after it's finished, evaluate the implementation
Types of user research
descriptive
Describe a situation
relational
Identify relations between A and B
experimental
Identify causes of a situation
Research Process
Identify a problem
Type of variables
Independent Variables: independent of a user's behavior
Dependent Variables: dependent on a user's behavior or the changes of IVs
Third(confounding) Variable: anything else that you are not manipulating
Research Questions
Hypothesis and predictions
needs to be testable, operationally defines your variables, describes provisional relationships between IVs and DVs
Research Plan
Samples
Other concerns
Experimenter Bias: seeing only what you want to see
Participant Bias: participants want to please the researcher
Placebo Effect: people tend to think something is better when told it is
Gather info/data/evidence
what to measure, and will it help evaluate your hypothesis
validity / reliability
external validity: will it work outside like it did in a controlled lab
Analyze
Answer the research questions
4. Existing Knowledge
literature review
patent searching
competitive analysis
documentation mining
How the current tasks should be done, standard policies, manuals, histories, best practices
data logging/ analytics
As part of usability testing
5. Observation
Various Forms
Standalone / Part of a method
Controlled environments / Field visits
Casual / Structured
Aware / Unaware
More/ Less intrusive
In-person/ Remote
Avoid seeing only what you want to see
Techniques
Try not to interfere
Notes and sketches
Position yourself to really see what happens
Create a plan
Task Analysis (to optimize procedures)
Begin with observation, to see what tasks you are going to support
Where and when - prior to design
Data you need to describe current tasks, methods to gather them, how to present the data, use the data to improve the design
Think about
Requirements to perform a task
info, equipment need to have when they start
Input and output interfacing
sensory, motor, cognitive, communication
Conditions under which task is done
Results and Outcomes - how will they know when the task is complete
Representing Task Analyses
Task Outlines
Hierarchical Task Analysis / Entity-Relationship Diagrams
Flow Charts
Narratives
6. Interviews
Various forms
Stand alone/ combined with other methods
Controlled environments / field visits
In-person / Remote
Individual / Group
Users/ Other stakeholders
Traditional Interview (decreasing control, increasing potential richness)
Structured Interview
exactly the same questions in the same order
more like a survey
Semi-structured Interview
follows a guide
go with the flow
Unstructured Interview
inefficient
good for exploration
Contextual Inquiry
Talk to customers about their work while they work
Users and researcher collaborate to understand the user's work
Stay concrete, don't abstract
we usually get reports by mail - do you have one, can I see it?
Collect ongoing work, not summary experience
Partnership
the user is the expert
Help the user articulate and see their work practice
Interpretation
Assign meanings to observations
Create a shared understanding
Offer interpretations, not just open-ended questions
Inquire into the meaning of customer action and words
Be honest
Focus
Know your focus
Challenge your assumptions
Dos and don'ts
Where and when in research
Good for exploratory work
Gain feedback about design concepts
Broadly understand information architecture and content strategy needs
As part of usability testing
To find out - attitudes, behaviors, how, opinions, comparison to others
Not to find out - predict the future, how they design, hypothetical situations
Interview Conditions
Planning
Research Questions
Avoid
Jargon, Slang
Closed-ended questions: if used, follow up with "why"
Biased or leading questions
suggest an answer to the question
providing reasons why something doesn't work
Predictions about future
Emotion-focused question
ask about behaviors, not feelings
Imagining themselves in hypothetical scenarios
Guidance on specific design needs
Example
1. Would you use X feature?
• Typically answered with a yes or no
• People are bad at predicting their future behavior
2. What would you need in X app?
• People don’t typically ask for the “right” thing, especially when their need is abstract
• Focus on “why” questions instead. “What feature would solve this problem?” should be phrased as “tell me more about this problem.”
• It is the designer’s role to extract insights from the interviewee
3. What are your goals, motivations, and pain points?
What makes this question bad?
• The purpose of a user interview is to derive a user’s goals, motivations, and pain points, but you should never ask this directly; these things lie beneath the surface of a person’s consciousness
Ask open-ended questions: can't be answered with yes or no
Three awesome questions
What are you trying to get done?
How do you currently do this?
What could be better about how you do this?
Recruiting - screening
Turn off your Assumptions
Rapport
Listening
Notetaking tips
annotate your notes
Respond to their responses
Support interviews with objects and artifacts
Photo Elicitation
use photos to stimulate vivid, concrete, meaningful information
What's the first thing that comes to your mind?
What would you use to describe how you would feel if you were part of this scene?
7. Focus Group
purpose is to collect data
Uses group interaction to elicit information from the members
Homogeneity: participants share similar relevant characteristics; the more similar their backgrounds and experiences, the more successful the group
Pros and Cons
Pro
Understand perceptions, beliefs, opinions of wide variety of participants
Depth of information
Influence of the group context
Flexible and dynamic with a relatively low cost
Useful for exploratory initiatives
Con
Requires skills
Make sense of the data
Time and effort of researchers
challenging group dynamics
focus group can influence individual responses
format can be challenging
Not so good for
Closed-ended questions
assessments with statistical data
participants are not comfortable with each other
group may sway individual opinion
unsafe or confidential environments
situations that are emotionally charged
Location
Structure
Ground rules
Moderator role
Wants/Needs Analysis
"moderated" focus group with brainstorming and prioritization
"I wish" / "How to" statements represent needs and ideas
8. Qualitative Data Analysis
What is qualitative data?
transcript of interviews, focus groups, observations
related to concepts, attitudes, opinions, values and behaviours
not numbers
What is qualitative data analysis?
to make researchers familiar with the data
Analysis Approaches
deductive
use your research questions, theory to group the data or develop codes
inductive
use an emergent framework to group data and then look for relationships
Most common approaches
grounded theory
aims to derive theory from systematic analysis of data
Coding: categorization approach
Three levels of coding
Open
Identify categories
Axial
flesh out and link to subcategories
Selective
from theoretical scheme
contextual inquiry
Contextual design process
Contextual interviews → interpretation sessions → affinity mapping → visioning
Affinity Process
Affinity notes are about what participants do, how they feel and what they want
Affinity mapping hierarchically groups affinity notes from interpretation sessions
The process
1. Raw data : data cleaning
2. Data reduction: Chunking and coding
3. Interpretation: coding and clustering
4. Representation: making sense and storytelling
code and count
etc
9. Quantitative data analysis
have your analysis plan beforehand
data types
discrete data
nominal (categorical data - number of occurrences of each qualitative data code)
ordinal (rank order data)
continuous data
interval
without a meaningful zero point - rating scale responses
ratio
zero has a meaning - heart rate
population and samples
parameters describe a population; statistics describe a sample
descriptive statistics characterize a sample but don't draw conclusions about the wider population; inferential statistics predict the values of features of an entire population
histograms
a diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval.
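As a sketch of that definition, a minimal histogram is just per-bin counts; the rating data and bin width below are hypothetical:

```python
from collections import Counter

def histogram(data, bin_width):
    """Count how many values fall into each fixed-width bin,
    keyed by the bin's lower edge."""
    counts = Counter((x // bin_width) * bin_width for x in data)
    return dict(sorted(counts.items()))

# Hypothetical rating data
ratings = [1, 2, 2, 3, 3, 3, 4, 4, 7, 8]
print(histogram(ratings, bin_width=2))  # {0: 1, 2: 5, 4: 2, 6: 1, 8: 1}
```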
some descriptive statistics
central tendency
mean
median
center value
mode
most commonly occurring value
spread/dispersion
variance
standard deviation
shape
bimodality/unimodality
two or one peak
skewness: how asymmetric the distribution is
positive
long tail in the positive direction
negative
long tail in the negative direction
kurtosis
the flatness of the distribution
visually represent multiple distribution - Box and Whisker Plots
outliers - extreme data points
error
algorithmic classification
how to treat them
Bar graph with standard deviation
standard deviation (of the sample), plotting what we expect a population to look like
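The descriptive statistics above (central tendency and spread) can be sketched with Python's standard library; the sample values are hypothetical:

```python
import statistics as st

scores = [2, 3, 3, 4, 5, 5, 5, 6, 9]  # hypothetical sample of task ratings

print(st.mean(scores))             # arithmetic average (~4.67)
print(st.median(scores))           # center value -> 5
print(st.mode(scores))             # most commonly occurring value -> 5
print(st.variance(scores))         # sample variance -> 4.25
print(round(st.stdev(scores), 2))  # sample standard deviation -> 2.06
```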
correlational statistics
direction
Pearson correlation coefficient r
-1 perfect negative correlation
1 perfect positive correlation
0 no relationship
strength
R^2 value
1 - A fully explains B. 0 - no relationship
influential outliers
remove
form of relationship
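Direction (r) and strength (R^2) can be computed by hand; this is a sketch with hypothetical, perfectly linear data:

```python
import statistics as st

def pearson_r(xs, ys):
    """Pearson correlation coefficient r: direction (-1..1) of a
    linear relationship; r**2 gives its strength (R^2)."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]  # hypothetical, perfectly linear data
r = pearson_r(x, y)
print(r, r ** 2)      # r ~ 1.0, R^2 ~ 1.0: perfect positive correlation
```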
10. Surveys
assumptions when using a survey
respondent will give truthful answers
they can give truthful answers
they understand your questions
pros and cons
Advantage
cost
large sample size
ease of completion
possible less bias than face to face interview
timely
uniform question presentation
anonymity and confidentiality possible
anonymity
can't associate any response with a specific participant
confidentiality
can - used to track data, but only group results are reported, individual data not reported
if anonymity not guaranteed
results in misrepresentation of income, job, age, education level, and special issues like drug use, hours worked
disadvantage
low response rate
reliability and validity issues
question limitations
prejudice against questionnaires
impersonal
possible sample limitations
knowing who completes the questionnaire
question item interdependence
questions
wording
avoid leading and loaded questions
avoid agreement bias
full sentences are more likely to elicit agreement because they are posed as statements of fact
Georgia Tech is a satisfactory program - agree or disagree
avoid use of negation (double negatives)
avoid asking double questions
avoid "giveaway" words, undefined terms and ambiguous questions
be more specific about frequency and duration
common scales
open ended
always ask them to explain why
Pros and cons
Pro
raise issues not addressed by survey designer
useful to determine range of responses
help to determine respondents' frame of reference
Con
Time consuming to answer
Difficult to scope
Response can be misinterpreted
may generate repetitive material
fill in the blanks
Binary/multiple
(by adding "additional options")
pro
easy to complete
everyone has a same reference point
easily scored
con
can't say how much better one choice is than another
response bias set
response alternatives may overlap
designers need to know full range of responses
Forced choice items
pro
more resistant to biases and response sets
can force thoughtful response
con
frustrating
hard to develop properly
time consuming to complete
Rating scales
pro
reflects both direction and magnitude of opinion
some relative measure of distance between ratings
relatively easily answered
applicable to many issues
con
vulnerable to bias built into stem and other biases
recommendations
response options must be clearly different
use standard terms
use a natural rating order
whichever order you choose, be consistent throughout the questionnaire
ranking
pro
familiar to respondent
easy to score
con
produce high error rate
doesn't allow respondent to tell the difference between items
less precise than ratings
must read the entire list
generally better to use rating scales unless rank data are adequate
use the minimum number of response categories per question
checklist
semantic differential
put an X in the rows
Better accessibility
Content of the survey questions themselves, exclude
use a visual referent only if necessary
might be difficult for AT to interpret
Avoid JavaScript based questions
grid questions
avoid hidden questions
inclusive language
Instructions govern the survey's behavior
visual design
high contrast
larger font size
obvious buttons for actions
are respondents who complete the survey representative?
better approach
distribution methods
internet-based
do they have internet access?
bias towards internet users ?
perceptions of internet security and privacy concerns
designed for larger displays
Pre-testing / pilot
Test your survey
check for accessibility
check for readability and interpretation
depends on its content and typography
check for timing
survey to inform survey
use an open-ended survey to discover most common responses
add "other", if more than 10% of respondents use "other"
plot the question strategy and design
11. Behavioral Persona
12. Statistics II
four steps in hypothesis testing
Type I Error and Type II Error
Type I: false positive; a significant difference is found and H0 is rejected, but H0 is actually true
Type II: miss; no significant difference is found and H0 is accepted, but H0 is actually false
Null and alternative hypotheses
Null (H0): There is in fact no difference between the different levels of an independent variable in terms of their effect on a dependent variable
Alternative (H1): there is in fact one
sampling distribution
central limit theorem
the means of samples follow a normal distribution
with enough samples, the sampling distribution approximates the population distribution
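A quick simulation (a sketch; the exponential "population" of task times is hypothetical) shows the central limit theorem in action:

```python
import random
import statistics as st

random.seed(1)

# Hypothetical skewed "population" of task times: most short, a few long
population = [random.expovariate(1.0) for _ in range(10_000)]

# Central limit theorem: the means of many samples are approximately
# normally distributed around the population mean, even though the
# population itself is skewed
sample_means = [st.mean(random.sample(population, 30)) for _ in range(1_000)]

print(round(st.mean(population), 2))     # population mean (~1.0)
print(round(st.mean(sample_means), 2))   # mean of sample means is close to it
print(round(st.stdev(sample_means), 2))  # much narrower than the population spread
```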
P value
the probability that we would make a Type I Error if we claim the alternative hypothesis is true
the probability that the new sample is in fact from the original population, but we mistakenly conclude that it is from the alternative distribution
0.05 - decision threshold
We mean that it is at least 95% likely that two groups, perhaps Design A and Design B, are different in the population in terms of the dependent variables. We accept that there is a < 1 in 20 chance that we are mistaken, and they are not in fact different in the population.
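The p-value logic can be made concrete with a permutation test, one simple approach (an illustrative sketch, not the course's prescribed method; the task times are hypothetical): shuffle the group labels many times and count how often a mean difference at least as large as the observed one arises by chance under H0.

```python
import random
import statistics as st

def permutation_p(a, b, n_perm=10_000, seed=0):
    """Two-sided p-value: the chance of a mean difference at least this
    large if group labels didn't matter (i.e., if H0 were true)."""
    rng = random.Random(seed)
    observed = abs(st.mean(a) - st.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(st.mean(pooled[:len(a)]) - st.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

design_a = [12, 14, 11, 13, 12, 15]  # hypothetical task times (s)
design_b = [17, 16, 18, 15, 19, 17]
p = permutation_p(design_a, design_b)
print(p < 0.05)  # True: reject H0 at the 0.05 decision threshold
```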
Increasing statistical power
lower the risk of Type II Error
increase the sample size
increase the size of an effect (make design A more different from design B)
decrease the random error component of the sample standard deviation (design a better rating-scale survey)
difference between a significant effect and large effect size
an effect size is a quantitative measure of the strength of a phenomenon.
if the result is meaningful
significance testing is all-or-nothing: a difference is either statistically significant or not. But the fact that two groups are statistically significantly different doesn't mean there is necessarily that large of a difference
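One common effect-size measure (an illustrative choice, not named in these notes) is Cohen's d, the standardized difference between two group means; the sample values are hypothetical:

```python
import statistics as st

def cohens_d(a, b):
    """Effect size: difference between group means in units of the
    pooled standard deviation. Unlike a p-value, it does not grow
    'more significant' just because the sample gets larger."""
    pooled_sd = (((len(a) - 1) * st.variance(a) + (len(b) - 1) * st.variance(b))
                 / (len(a) + len(b) - 2)) ** 0.5
    return (st.mean(a) - st.mean(b)) / pooled_sd

a = [10, 11, 12, 13, 14]  # hypothetical scores, group A
b = [11, 12, 13, 14, 15]  # hypothetical scores, group B
print(round(cohens_d(a, b), 2))  # -0.63: a medium-sized effect
```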
13. Persona
Rather than traditional personas, we built behavioral ones [14]. The reason was that this project focused on improving and getting a better understanding of an existing system's performance for a given audience [15]. Our users are relatively well-defined. Behavioral personas tell us what our users do and how well they work with the current information gathering process when selecting a course.
a typical user derived from user profiles
14. Design Iteration - From evidence to design
Five planes of UX
Strategy
user needs, product objectives
strategy document
market research, competitive analysis, focus group, interviews, call logs, surveys,
Scope
features and functions
functional specifications
content requirements: text images audio video, management system, etc
requirement document - be positive, be specific, avoid subjective language
contextual inquiry, content analysis, interviews, focus groups, surveys, competitive analysis, analytics
Structure
flow
interaction design
perform and complete task
describe user behavior and how system responds to that behavior
conceptual model: goal: consistency
information architecture
convey info to user
paper prototype, storyboard
paper prototype interviews, card sorting, Treejack
efficient and inexpensive
Skeleton
placement of buttons and controls
information design
interface design
navigation design
wireframe prototype. specification
user testing
Surface
visual, images, aesthetics
sensory design
functional prototype, design documents
user testing
Functionality or Information
15. Research Design Research
Research
Design
Research through Design - use design to ask questions.
Iteration design research
Design practice
does so with research in mind
Design studies
what if - design becomes a statement of what's possible
Design exploration
discussions about design theory, methodology and history, philosophy
Design to create knowledge
lab: rich interaction design
Showroom: critical design
Field: participatory design
bring users into the creative process. to see the world through their perspective
PD as research
to understand how people truly think about a given problem or technology
when you think what users say and actually do are not the same
when you feel that there is a disconnect between you and the end user
PD tools, methods, slides
16. Design a Study
Bias
Primacy Effect
Recency Effect
Memory
the order in which you present or ask things can affect the results
Order Effect
people get tired, knowledge of the earlier prototype may affect later performance,
randomize the order of tasks and prototypes
Effect of the Researcher
Placebo & Bias
Unwanted Bias
Hawthorne Effect (Observer Bias)
participants try to improve their performance because they know they are being observed
Training and Practice Effect
Between subjects or within subjects
Between
Each participants receives a different condition, a comparison is made from the data between subjects
fewer recency effects, simpler design and analysis, easier to recruit participants, less efficient
Within
Each participant receives both conditions, so they act as their own control, and comparisons can be made between different levels of the IV without worrying about individual differences
more efficient, more statistical power, requires fewer participants; more complicated, must be designed to avoid recency effects
A/B testing
Include
research question
participants
procedure
measures
17. Accessibility
WCAG 2.0
P perceivable
small screen size
zoom/magnification
contrast
O operable
keyboard control for touchscreen devices
touch target size and spacing
touch screen gestures
placing buttons where they are easy to access
U understandable
changing screen orientation
consistent layout
position important page
group operable elements that perform the same action
provide clear indication that elements are actionable
provide instructions for custom touchscreen and device manipulation gestures
R robust
set virtual keyboard to type of data entry required
provide easy methods for data entry
support the characteristic properties of platform
basic principles of accessible web content
1. accessibility statement
2. Alt Tags
3. Color and contrast
high contrast color scheme
background does not overpower text
color schemes used consistently
avoid color coding
4. hyperlinks
should make sense out of context, describe the destination, unique for each unique destination
5. accessible multimedia
text transcript
video description
closed captions
accessible media player
6. readability
7. tables
ARIA
javascript cannot communicate with accessibility tree - add ARIA to the code to identify properties, relationships and states
mobile more important than ever
screen reader
magnifier
color settings
text settings
captioning and video
description
18. Evaluation
what is evaluation
gather data about the usability of a design for a particular activity by a specific group of users within a specified environment
goals of evaluation
assess extent of system's functionality
assess effect of interface on user
identify specific problems with system
styles of evaluation
formative (predictive)
all through the lifecycle
summative evaluation
make judgments about the final item
experimental/empirical approach
lab studies, quantitative results
manipulate IV to see effect on DV
replicable but expensive
naturalistic approach
field studies, qualitative results
observation occurs in real life setting
ecologically valid
cheap and quick
not reproducible, yield user-specific results
not quantitative
predictive- without users
interpretive evaluation (naturalistic)
predictive evaluation
user used
heuristic evaluation
several experts assess system based on simple and general heuristics
visibility of system status
aesthetic and minimalist design
user control and freedom
consistency and standards
error prevention
recognition rather than recall
flexibility and efficiency of use
recognition, diagnosis and recovery from errors
help and documentation
match between system and real world
perform 2+ passes through the system to find problems; inspect each screen and transitions between screens
then decide what is and isn't a problem; group and structure
severity ranking 0-4 based on frequency, impact, persistence and market impact
pro and con
pro
cheap and good for small companies who can't afford more
having someone practiced in the method is valuable
con
a little subjective
why are these controversial
some identified problems really aren't
discount usability testing
hybrid empirical usability testing and heuristic evaluation
have two or three think-aloud user sessions with paper or prototype mock-ups
cognitive walkthrough
assess learnability and usability through simulation of the way users explore and become familiar with interactive system
task-specified
review actions needed for task and predict how users would behave and what problems they will encounter
need to have
user descriptions and profiles
tasks
complete list of the actions needed to complete tasks
prototype or description of the system
ask four questions to construct a believability story
will users be trying to produce whatever effect the given action has
will users be able to notice that the correct action is available
once found, will they know it's the right action for the desired effect
will users understand feedback after the action
literature-based evaluation
many existing systems have been evaluated; experiments have shown performance abilities and limits; apply what is already known
model-based evaluation
computer simulate what users would do and how they would respond
not so effective at modeling perception
Fitts Law
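Fitts' Law predicts movement time from target distance and width; a sketch using the Shannon formulation, where the constants a and b are hypothetical values that would normally come from regression on a given device:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) to acquire a target:
    MT = a + b * log2(D / W + 1)   (Shannon formulation).
    a and b are hypothetical device/regression constants."""
    return a + b * math.log2(distance / width + 1)

# A small, far target takes longer to acquire than a big, near one
print(round(fitts_mt(distance=400, width=20), 2))  # 0.76
print(round(fitts_mt(distance=100, width=80), 2))  # 0.28
```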
formal evaluation
questionnaires
think-aloud / cooperative evaluation
experimental/usability test
empirical evaluation
determine the task
goal
task scenarios
benchmark tasks
gather quantitative data
specific, clearly stated tasks for users to carry out
representative tasks
add breadth, can help understand process
make the task realistic
make the task actionable
to do the action, instead of asking a question
avoid clues and steps
performance measures
experiment
Variables
hypotheses
IRB
participants
must fit in user population - validity
screening
set inclusion/exclusion criteria
include distractor answers and questions
data
analyze
conclusion
redesign and implement
19. Metrics and Physiology
Task performance metrics
task success
frequency of task completion
task time
time taken for completion
have a ceiling effect
errors
frequency of errors by task
efficiency
mental and physical effort required
often # of steps
learnability
performance improvement over time
some performance metrics are more diagnostic indicators of the variable of interest than others
Ceiling or floor effect can occur when users never perform higher or lower than a certain threshold
Issued-based metrics
informal
Subjective reaction metrics
reflect on their interaction with a system.
think-aloud protocol
difficult to get people to actually do this- especially when the task gets hard
can be combined with other metrics to help observed data tell a story
self-report metrics
users reflect and rate aspects of their interaction with the system
e.g Semantic differential scale
surveys
usability surveys
workload surveys
time, mental effort, psychological stress load
engagement surveys
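One widely used usability survey (an example; these notes don't name a specific instrument) is the System Usability Scale (SUS). Its standard scoring, with hypothetical responses:

```python
def sus_score(responses):
    """System Usability Scale: ten items rated 1-5. Odd-numbered items
    are positively worded (score = response - 1); even-numbered items
    are negatively worded (score = 5 - response). Sum * 2.5 -> 0-100."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 85.0
```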
behavioral & physiological metrics
eye tracking
eye movement
goals
visually salient elements
fixation: moments the eye pause to take in and process info
saccade: quick, partially involuntary eye movement used to shift the eye toward a nearby area that may hold useful info. followed by another fixation
a visit: humans often saccade into a general area, then saccade and fixate on lots of nearby areas within that area. once sufficient info has been acquired, the eye saccade away from that area
AOIs: areas that users visit tend to correspond with elements of visual display.
AOIs: areas that users visit tend to correspond with elements of visual display.
procedure
define AOIs
calibrate eye tracker
record data
generate visualizations as well as numerical data for each AOI
automated cluster AOIs
pros and cons
metrics generated for each AOI
fixation count
total number of times the eye fixated within an AOI
are users spending time to consider content in an AOI? are there a lot of separate info elements in that AOI
average fixation length
are users spending a lot of time processing each info element in an AOI
time until first fixation
is an AOI being seen as soon or as late as the designer wants?
average visit length
how important? how confusing? how much valuable info?
total visit time
visit sequence
hit rate
percentage of users who looked at least once in an AOI
visit count
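Given a log of fixations tagged by AOI, several of these metrics fall out of simple aggregation; the fixation data and AOI names below are hypothetical:

```python
# Hypothetical fixation log: (AOI name, fixation duration in ms)
fixations = [
    ("nav", 180), ("nav", 220), ("hero", 400),
    ("nav", 150), ("hero", 350), ("footer", 90),
]

def aoi_metrics(fixations, aoi):
    """Aggregate fixation count, total dwell time, and average
    fixation length for one area of interest."""
    durations = [d for name, d in fixations if name == aoi]
    n = len(durations)
    return {
        "fixation_count": n,
        "total_time_ms": sum(durations),
        "avg_fixation_ms": sum(durations) / n if n else 0.0,
    }

print(aoi_metrics(fixations, "nav"))  # 3 fixations, 550 ms total
```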
visualization
focus on sequence
focus on time amount users view different content
combining subjective reactions and eye tracking
retrospective think aloud
users are shown the video of their eye movements during a task and asked to comment on what they were thinking at that time
physiological measurements
continuous, unobtrusive, quantitative and unbiased reflection of the user's internal state during a task
measure physiological arousal, which comes about through the activity of the autonomic nervous system
sympathetic nervous system
easy to measure
increase heart rate, increase rate of breathing,
blood goes to muscles and brain: constricts blood flow to nonessential organs like digestive organs and the skin; sweat response
perceived threat
electrodermal activity (EDA) - increased sweat causes increased conduction of electricity
most important!
difficult when hand and foot moves near the sensor
Skin temperature (SKT)
cooler - blood flow to skin is decreased
difficulty: changes in environmental temperature
Respiration rate
increase the rate of breathing
difficult: physical activity
Heart rate variability HRV
decreases under sympathetic arousal
Heart Rate ECG.EKG
increase
difficult - muscle movements
pupil size increases
light changes
arousal vs workload
increase in cognitive workload co-occur with sympathetic nervous system response, raising physiological arousal
measure cognitive activity directly
EEG - pick up brain activity as a whole in real time
beta/ alpha, more beta waves, higher mental activity
difficulty: hair, lack of solid skin contact
fMRI - good spatial resolution but poor temporal resolution, and you are in a giant magnet
fNIRS- portable without giant magnet - difficulty: sunlight
sense affect: valence
facial expression recognition
biometric storyboards
live website data
Google Analytics
information architecture metrics
Tree testing, card sorting
accessibility metrics
composite scores
20. Usability Testing
The systematic observation of end users attempting to complete a task or set of tasks with your product, based on representative scenarios.
task should be a representative sample of typical things a user might do, reasonable
tasks should be described in terms of a person's goals and motivations not the system's
tasks must be possible, with a definable success/fail conditions
Fail: they want to give up, they reach the wrong result but believe they are right, or the opposite
define errors
have a specific end goal
in a realistic sequence
require an appropriate amount of expertise in the domain
verbal protocol
think-aloud
problems: awkward; modifies the way users perform a task; won't capture all their thoughts
post-event protocol (retrospective think-aloud)
difficult to recall
notes
paper
miss things, slow, cheap and easy, ultimately less work
record
good for think-aloud; hard to tie to the interface; multiple cameras probably needed; good, rich record of session; intrusive
software logging
too low-level, massive amount of data, need analysis tools
sensors and physiological logging
good at reflecting users' internal state; hard to interpret; requires participants to wear strange devices
Hawthorne Effect (Observer Bias)
remote testing
can't probe into unexpected results and insight
can't change the script on the fly
can't corral the participant if she gets off track
may not be able to give them the design because of security concerns
can't observe body language
distracted
cheaper, reduce moderator bias, more natural environment, get more data quicker, easier to get a large sample size, wider geographical reach
21. Business Model Canvas