Topic 1

Understanding Research Design

Research design is the overall strategy you use to answer your research questions. It's the blueprint for your study that specifies how you'll collect and analyze data. Choosing the right design is crucial—it determines what conclusions you can draw and how confidently you can draw them.

What is Research Design?

Definition

Research design is a comprehensive plan that outlines the procedures for conducting a research study. It includes decisions about:

  • What data to collect
  • Who to collect data from
  • When to collect data
  • How to collect data
  • How to analyze data

Why Research Design Matters

Ensures Validity

Good design ensures your findings accurately reflect reality and aren't due to chance, bias, or confounding variables.

Enables Replication

Clear design allows other researchers to replicate your study and verify findings, which is essential for scientific progress.

Maximizes Efficiency

Well-planned design helps you collect the right data efficiently, avoiding wasted time and resources on irrelevant information.

Determines Conclusions

Your design determines what types of conclusions you can legitimately draw—causal, correlational, or descriptive.

Key Elements of Research Design

1. Research Paradigm

The philosophical foundation guiding your research approach

Positivist/Post-Positivist

Objective reality exists; seeks to discover truth through empirical observation

Typical approach: Quantitative, hypothesis-testing

Constructivist/Interpretivist

Reality is socially constructed; seeks to understand meaning and interpretation

Typical approach: Qualitative, exploratory

Pragmatist

Focus on what works; uses whatever methods best answer the question

Typical approach: Mixed methods, flexible

2. Research Purpose

What you're trying to accomplish with your research

  • Exploratory: Investigate little-understood phenomena
  • Descriptive: Describe characteristics or phenomena
  • Explanatory: Explain relationships or causation
  • Evaluative: Assess effectiveness of programs/interventions

3. Time Dimension

When and how often you collect data

Cross-Sectional

Data collected at one point in time

Example: Survey students about stress levels during finals week

Longitudinal

Data collected at multiple time points

Example: Track student development over four years of college

4. Data Type

The nature of information you'll collect

Quantitative

  • Numerical data
  • Statistical analysis
  • Objective measurement
  • Large samples
  • Generalizability focus

Qualitative

  • Textual/visual data
  • Thematic analysis
  • Subjective interpretation
  • Smaller samples
  • Depth and context focus

Mixed Methods

  • Both types of data
  • Integration of findings
  • Complementary strengths
  • Comprehensive understanding
  • Triangulation

5. Level of Control

How much you manipulate or control variables

High Control

Experimental designs

Manipulate variables, control conditions

Moderate Control

Quasi-experimental

Some manipulation, less control

Low Control

Non-experimental

Observe naturally occurring phenomena

Major Research Design Categories

Experimental Designs

Goal: Establish cause-and-effect relationships

Key features:

  • Random assignment to conditions
  • Manipulation of independent variable
  • Control of extraneous variables
  • Comparison between groups

Strength: Can establish causation

Weakness: May lack real-world applicability

Quasi-Experimental Designs

Goal: Study causal relationships when randomization isn't possible

Key features:

  • Manipulation of independent variable
  • Pre-existing groups (no random assignment)
  • Comparison between groups
  • Often includes pre-test/post-test

Strength: More practical than true experiments

Weakness: Cannot rule out all alternative explanations

Non-Experimental Designs

Goal: Describe phenomena or examine relationships without manipulation

Key features:

  • No manipulation of variables
  • No random assignment
  • Observation of naturally occurring phenomena
  • Can be quantitative or qualitative

Strength: Studies phenomena as they naturally occur

Weakness: Cannot establish causation

Design Follows Question

Your research question should drive your design choice, not the other way around. If your question asks "Does X cause Y?", you need an experimental design. If it asks "What is the relationship between X and Y?", a correlational design may be appropriate. If it asks "What are people's experiences of X?", you need qualitative methods.

Common Mistake: Causal Language Without Causal Design

Researchers sometimes use causal language ("X affects Y" or "X causes Y") when describing non-experimental findings. This is incorrect! Only experimental designs with random assignment can establish causation. With correlational designs, use language like "X is associated with Y" or "X relates to Y."

Topic 2

Experimental Designs

Experimental designs are the gold standard for establishing cause-and-effect relationships. They involve manipulating one or more independent variables and measuring their effect on dependent variables while controlling extraneous factors. This topic covers the essential characteristics and types of experimental designs.

The Three Criteria for True Experiments

A study must meet ALL three criteria to be considered a true experiment:

1. Manipulation

The researcher actively manipulates or changes the independent variable (IV)

Example: Researcher assigns some students to receive tutoring (experimental group) and others not to receive it (control group). The researcher controls who gets what treatment.

2. Control

The researcher controls extraneous variables that might affect the outcome

Example: All students take the same test at the same time in the same environment. Only the tutoring differs between groups, so any difference in performance can be attributed to tutoring, not other factors.

3. Random Assignment

Participants are randomly assigned to experimental or control conditions

Example: Students are assigned to tutoring or no-tutoring groups by chance (e.g., coin flip, random number generator), not by choice or pre-existing characteristics. This ensures groups are equivalent at the start.
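The coin-flip logic above can be sketched in a few lines of Python. The participant IDs are invented, and real studies often use dedicated randomization software, but the principle is identical: chance alone decides group membership.

```python
import random

def randomly_assign(participants, seed=None):
    """Split participants into equal-sized treatment and control groups by chance alone."""
    rng = random.Random(seed)          # a seed makes the assignment reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)              # chance, not choice, determines the order
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

students = [f"S{i}" for i in range(1, 21)]            # 20 hypothetical students
treatment, control = randomly_assign(students, seed=42)
print(len(treatment), len(control))                   # 10 10
```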

Random Assignment vs. Random Selection

Random Selection: How you choose participants from the population (affects generalizability)

Random Assignment: How you assign selected participants to groups (affects internal validity and causation)

A true experiment requires random assignment; random selection improves generalizability and is desirable, but it is not strictly required.

Types of Experimental Designs

1. Randomized Controlled Trial (RCT)

Gold Standard

Design flow:

  • Sample (N participants) → random assignment to two groups
  • Experimental group: receives treatment → post-test
  • Control group: no treatment/placebo → post-test
  • Compare the groups' post-test scores

When to use: Testing intervention effectiveness, comparing treatments

Example: Testing if a new study technique improves exam scores compared to traditional methods

Advantages:
  • Strongest causal inference
  • Controls for confounding variables
  • Groups equivalent at baseline
Limitations:
  • Can be expensive and time-consuming
  • May lack ecological validity
  • Ethical concerns in some contexts

2. Pretest-Posttest Control Group Design

Enhanced Control

Design flow:

  • Sample → random assignment to two groups
  • Experimental group: pretest → treatment → posttest
  • Control group: pretest → no treatment → posttest

When to use: When you want to measure change and ensure baseline equivalence

Example: Measuring anxiety levels before and after mindfulness training

Advantages:
  • Measures actual change
  • Can verify group equivalence
  • Controls for maturation and testing effects
Limitations:
  • Pretest may influence posttest
  • More time and resources needed
  • Potential attrition between tests

3. Factorial Design

Multiple IVs

Tests effects of two or more independent variables simultaneously

2×2 Factorial Design Example:

Research Question: Does study method and time of day affect test performance?

                Morning Study   Evening Study
Flashcards      Group 1         Group 2
Practice Tests  Group 3         Group 4

Participants randomly assigned to one of four conditions

When to use: Examining multiple factors and their interactions

Advantages:
  • Examines main effects of each IV
  • Tests interaction effects
  • More efficient than separate studies
  • Reflects complexity of real world
Limitations:
  • Requires larger sample size
  • More complex analysis
  • Interactions can be difficult to interpret

4. Within-Subjects (Repeated Measures) Design

Same Participants

Same participants experience all conditions

Design flow:

  • Participants → Condition A → measure DV
  • Same participants → Condition B → measure DV
  • Same participants → Condition C → measure DV
  • Compare the DV across conditions

When to use: Comparing multiple treatments, tracking change over time

Example: Testing reaction time under no caffeine, moderate caffeine, and high caffeine conditions (same people tested in all three)

Counterbalancing

Use counterbalancing to control for order effects: Different participants experience conditions in different orders (ABC, BCA, CAB, etc.)
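Full counterbalancing enumerates every possible condition order and cycles participants through them so each order is used equally often. A minimal Python sketch (the participant labels are hypothetical):

```python
from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))   # all 6 orders: ABC, ACB, BAC, BCA, CAB, CBA

# Rotate participants through the orders so each order is used equally often
participants = [f"P{i}" for i in range(1, 13)]   # 12 hypothetical participants
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for person, order in assignment.items():
    print(person, "".join(order))
```

With 12 participants and 6 orders, each order is experienced by exactly two people, so order effects average out across conditions.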

Advantages:
  • Fewer participants needed
  • Each person is their own control
  • More statistical power
  • Individual differences controlled
Limitations:
  • Order/practice effects possible
  • Carryover effects between conditions
  • Not suitable if treatment has lasting effects
  • Can be time-consuming for participants

Control Groups: Types and Functions

No-Treatment Control Group

Receives no intervention at all

Use when: Testing if any intervention is better than none

Placebo Control Group

Receives inactive treatment that mimics the real intervention

Use when: Controlling for expectation effects

Example: Sugar pill that looks like real medication

Attention Control Group

Receives equal attention/time but different content

Use when: Isolating the specific treatment effect from general attention

Example: Support group meetings without the specific therapeutic technique

Wait-List Control Group

Will receive treatment after study ends

Use when: Ethical concerns about withholding beneficial treatment

Example: Testing therapy effectiveness where denying treatment would be unethical

Laboratory vs. Field Experiments

Laboratory Experiments: Conducted in controlled settings (labs). High internal validity but may lack real-world applicability.

Field Experiments: Conducted in natural settings while maintaining experimental control. Better ecological validity but harder to control extraneous variables.

Choose based on your priorities: internal validity vs. external validity.

Topic 3

Non-Experimental Designs

Non-experimental designs observe and measure variables without manipulation or random assignment. While they cannot establish causation, they're essential for studying phenomena that can't or shouldn't be manipulated, and they provide valuable descriptive and correlational insights.

When to Use Non-Experimental Designs

Manipulation Not Possible

Some variables cannot be manipulated by researchers

Examples:

  • Gender, age, ethnicity
  • Personality traits
  • Past experiences (trauma, education)
  • Genetic factors

Manipulation Not Ethical

Some manipulations would cause harm

Examples:

  • Cannot randomly assign people to smoke
  • Cannot deliberately expose children to abuse
  • Cannot withhold needed medical treatment
  • Cannot induce mental illness

Exploratory Research

Initial investigation of new topics

Examples:

  • Understanding experiences or perspectives
  • Identifying relevant variables
  • Generating hypotheses for future testing
  • Describing emerging phenomena

Practical Constraints

Resources or feasibility limitations

Examples:

  • Limited budget or time
  • Cannot access experimental conditions
  • Large-scale phenomena (historical events)
  • Rare populations

Types of Non-Experimental Designs

1. Correlational Research

Quantitative

Purpose: Examine relationships between two or more variables

What Correlations Tell Us:
  • Direction: Positive (both increase) or negative (one increases, other decreases)
  • Strength: How closely related (correlation coefficient: -1 to +1)
  • Significance: Whether relationship is likely real or due to chance
Example Study:

Research Question: Is there a relationship between hours of sleep and academic performance?

Method: Survey 200 students about sleep hours and obtain their GPAs

Potential Finding: Positive correlation (r = .45, p < .001); more sleep is associated with higher GPA

Interpretation: Sleep and GPA are related, but we cannot conclude sleep causes better grades
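The correlation coefficient can be computed straight from its definition: covariance divided by the product of the standard deviations. The sleep and GPA numbers below are invented for illustration (and happen to correlate far more strongly than the r = .45 in the example):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sleep_hours = [6, 7, 5, 8, 7, 6, 9, 5]            # made-up data
gpa = [2.9, 3.2, 2.6, 3.6, 3.3, 3.0, 3.8, 2.8]
print(f"r = {pearson_r(sleep_hours, gpa):.2f}")   # strong positive correlation
```

The statistic describes the relationship only; it says nothing about which variable, if either, is doing the causing.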

Correlation ≠ Causation

Three possible explanations for correlation between X and Y:

  • X causes Y
  • Y causes X
  • Third variable Z causes both X and Y

Example: Ice cream sales correlate with drowning deaths. Ice cream doesn't cause drowning—both increase in summer (third variable: warm weather).

2. Cross-Sectional Survey Research

Quantitative

Purpose: Describe characteristics of a population at one point in time

Key Features:
  • Data collected once from each participant
  • "Snapshot" of current state
  • Can compare different groups
  • Often uses questionnaires or structured interviews
Example Study:

Research Question: What percentage of university students experience anxiety?

Method: Administer anxiety scale to 1,000 students

Findings: 35% score above clinical threshold; rates higher in first-year students

Advantages:
  • Quick and cost-effective
  • Can study large samples
  • Good for prevalence studies
  • Multiple variables measurable
Limitations:
  • Cannot establish causation
  • Cannot track change over time
  • May have response bias
  • Self-report accuracy concerns

3. Longitudinal Research

Quantitative

Purpose: Track changes in the same individuals over time

Panel Study

Same participants measured multiple times

Example: Following cohort of students from freshman to senior year

Trend Study

Same population sampled at different times (different people)

Example: Surveying college freshmen every year (different students each time)

Cohort Study

Specific group followed over extended period

Example: Following all children born in 2000 through adulthood

Example Study:

Research Question: How does self-esteem change during college?

Method: Measure self-esteem in same 500 students each year for 4 years

Findings: Average increase, with largest gains in year 1

Advantages:
  • Can examine change and stability
  • Stronger than cross-sectional for causal inference
  • Identify developmental patterns
  • Control for individual differences
Limitations:
  • Time-consuming and expensive
  • Participant attrition
  • Practice/testing effects
  • Historical confounds

4. Case Study Research

Qualitative

Purpose: In-depth examination of a single individual, group, event, or phenomenon

Data Collection Methods:
  • Interviews
  • Observations
  • Document analysis
  • Archival records
  • Physical artifacts
Famous Case Studies:
  • Phineas Gage: Brain injury case advancing neuroscience
  • Little Albert: Classical conditioning of fear
  • Patient H.M.: Memory research after hippocampus removal
Advantages:
  • Rich, detailed information
  • Studies rare or unique phenomena
  • Generates hypotheses
  • Practical and flexible
Limitations:
  • Cannot generalize to populations
  • No control over variables
  • Researcher bias possible
  • Causation unclear

5. Ethnographic Research

Qualitative

Purpose: Understand culture and lived experiences through immersion in natural settings

Key Characteristics:
  • Prolonged engagement: Extended time in field (months to years)
  • Participant observation: Researcher becomes part of community
  • Multiple data sources: Fieldnotes, interviews, artifacts
  • Emic perspective: Understanding from insider's viewpoint
Example Study:

Research Question: How do medical residents learn professional culture?

Method: Researcher spends 18 months observing and interviewing residents in hospital

Findings: Identify informal learning processes, hidden curriculum, and identity transformation

6. Phenomenological Research

Qualitative

Purpose: Understand the essence of lived experiences

Approach:
  • In-depth interviews with individuals who experienced phenomenon
  • Focus on "what" and "how" of experience
  • Bracketing researcher's assumptions
  • Identifying common themes across participants
Example Study:

Research Question: What is the lived experience of being diagnosed with cancer?

Method: In-depth interviews with 15 recently diagnosed patients

Findings: Themes of loss of control, re-evaluating priorities, seeking meaning

7. Grounded Theory

Qualitative

Purpose: Develop theory grounded in systematically gathered data

Iterative Process:
  1. Collect data (interviews, observations)
  2. Code and analyze data
  3. Develop preliminary concepts
  4. Collect more data guided by emerging theory (theoretical sampling)
  5. Refine concepts until theoretical saturation reached
Example Study:

Research Question: How do first-generation students navigate university?

Method: Iterative interviews and observations with 25 students

Outcome: Theory of "cultural straddling" between home and campus cultures

Quantitative vs. Qualitative Non-Experimental Research

Quantitative: Large samples, statistical analysis, generalizability, breadth

Qualitative: Smaller samples, thematic analysis, transferability, depth

Neither is "better"—they serve different purposes and answer different types of questions.

Topic 4

Validity and Reliability

Validity and reliability are fundamental to research quality. Validity concerns whether you're measuring what you intend to measure and whether your conclusions are justified. Reliability concerns consistency—whether your measurements produce stable results. Understanding and addressing threats to validity and reliability is essential for credible research.

Internal Validity

Definition: The extent to which you can confidently conclude that changes in the dependent variable were caused by the independent variable, not other factors.

Key Question: Are the observed effects really due to what you think they're due to, or could something else explain them?

Threats to Internal Validity

1. History

Definition: External events occurring during the study that affect the outcome

Example: Testing stress management program, but major campus crisis occurs mid-study, affecting everyone's stress

Control: Use control group (both groups affected equally), shorter study duration

2. Maturation

Definition: Natural changes in participants over time

Example: Children's reading improves over school year due to development, not just intervention

Control: Use control group, shorter timeframe

3. Testing/Practice Effects

Definition: Taking a test affects performance on later tests

Example: Pretest familiarizes students with test format, improving posttest scores regardless of intervention

Control: Use control group with same testing schedule, or use posttest-only design

4. Instrumentation

Definition: Changes in measurement tools or procedures over time

Example: Observers become more skilled at coding behavior, scoring more behaviors as time passes

Control: Calibrate instruments, train observers thoroughly, assess inter-rater reliability

5. Statistical Regression

Definition: Extreme scores tend to move toward the mean on retest

Example: Selecting students with lowest test scores for intervention—scores likely to improve simply due to regression

Control: Use control group selected by same criteria, random assignment
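Regression to the mean is easy to demonstrate with a quick simulation: give everyone a stable true ability plus random luck on each test, then select the lowest scorers on the first test. Their second-test average rises with no intervention at all. All numbers below are arbitrary:

```python
import random

rng = random.Random(0)                                   # reproducible simulation
n = 1000
ability = [rng.gauss(50, 10) for _ in range(n)]          # stable "true" skill
test1 = [a + rng.gauss(0, 10) for a in ability]          # score = skill + luck
test2 = [a + rng.gauss(0, 10) for a in ability]          # fresh luck on retest

# Select the 100 lowest scorers on test 1, as if picking them for remediation
lowest = sorted(range(n), key=lambda i: test1[i])[:100]
mean1 = sum(test1[i] for i in lowest) / len(lowest)
mean2 = sum(test2[i] for i in lowest) / len(lowest)
print(f"Selected group, test 1: {mean1:.1f}")
print(f"Same group, test 2:     {mean2:.1f}")            # higher with no intervention
```

The selected group scored low partly through bad luck, and luck does not repeat, so their retest scores drift back toward the mean.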

6. Selection Bias

Definition: Groups differ at baseline in ways that affect outcomes

Example: Comparing volunteers (motivated) to non-volunteers in effectiveness study

Control: Random assignment, matching, statistical control

7. Attrition/Mortality

Definition: Participants drop out, making groups non-equivalent

Example: Most struggling students drop from challenging program, leaving only high achievers

Control: Maximize retention, analyze dropouts, intent-to-treat analysis

8. Diffusion/Contamination

Definition: Treatment group influences control group

Example: Students in treatment group share new study techniques with control group friends

Control: Separate the groups by physical location or by timing of the study

External Validity

Definition: The extent to which findings generalize to other populations, settings, times, and conditions.

Population Validity

Can findings generalize to other groups?

Threat: College student samples may not represent general population

Solution: Use diverse samples, replicate with different populations

Ecological Validity

Do findings apply to real-world settings?

Threat: Laboratory conditions differ from natural environments

Solution: Field studies, realistic tasks, natural settings

Temporal Validity

Do findings hold across time?

Threat: Social phenomena change over time

Solution: Replicate studies across time periods

The Internal-External Validity Trade-off

Highly controlled laboratory experiments maximize internal validity but may sacrifice external validity. Field studies in natural settings enhance external validity but make internal validity harder to establish. Researchers must balance these competing demands based on their research goals.

Construct Validity

Definition: The extent to which your operational definitions and measurements accurately represent the theoretical constructs you're studying.

Inadequate Operationalization

Your measure doesn't fully capture the construct

Example: Measuring intelligence with only math problems ignores verbal, spatial, and other intelligences

Mono-Operation Bias

Using only one measure of a complex construct

Example: Assessing "teaching effectiveness" with only student ratings, ignoring learning outcomes, observations

Hypothesis Guessing

Participants figure out study purpose and alter behavior

Example: Knowing they're in "leadership training" study, participants act more like leaders

Evaluation Apprehension

Participants concerned about being judged

Example: Answering questionnaires in socially desirable way rather than honestly

Reliability

Definition: The consistency and stability of measurements. A reliable measure produces similar results under consistent conditions.

Test-Retest Reliability

Consistency across time—same measure given twice produces similar results

Assessment: Correlate scores from Time 1 and Time 2

Good reliability: r > .70

Example: IQ test given one week apart should yield similar scores

Internal Consistency

Items within a scale measure the same construct

Assessment: Cronbach's alpha (α)

Good reliability: α > .70 (> .80 preferred)

Example: All items on depression scale should correlate with each other
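Cronbach's alpha can be computed from item and total-score variances: α = (k/(k−1))·(1 − Σ item variances / variance of total scores). A minimal sketch with a made-up 4-item scale:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list per scale item, each holding that item's scores across respondents."""
    k = len(items)
    n_resp = len(items[0])
    totals = [sum(item[j] for item in items) for j in range(n_resp)]  # per-person total
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 4-item scale answered by 6 respondents (rows = items)
scale = [
    [3, 4, 2, 5, 4, 3],
    [3, 5, 2, 4, 4, 2],
    [4, 4, 1, 5, 3, 3],
    [3, 5, 2, 5, 4, 3],
]
print(f"alpha = {cronbach_alpha(scale):.2f}")
```

Because these invented items rise and fall together across respondents, alpha comes out well above the .70 threshold.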

Inter-Rater Reliability

Agreement between different raters/observers

Assessment: Cohen's kappa, intraclass correlation

Good reliability: κ > .75, ICC > .75

Example: Two observers coding classroom behavior should agree on most observations
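Cohen's kappa corrects raw agreement for the agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). A sketch with hypothetical behavior codes from two observers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Two observers coding the same six classroom intervals (hypothetical data)
obs1 = ["on-task", "on-task", "off-task", "on-task", "off-task", "on-task"]
obs2 = ["on-task", "on-task", "off-task", "off-task", "off-task", "on-task"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")   # 0.67: below the .75 benchmark
```

Note that raw agreement here is 5/6 (about .83), yet kappa is only .67 once chance agreement is removed, which is why kappa, not percent agreement, is reported.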

Parallel Forms Reliability

Consistency across equivalent versions of measure

Assessment: Correlate scores from two equivalent forms

Good reliability: r > .70

Example: Form A and Form B of standardized test should yield similar scores

Relationship Between Reliability and Validity

Reliable but Not Valid

Consistently measuring the wrong thing

Example: Bathroom scale consistently reads 5 lbs too high—reliable but not valid

Valid but Not Reliable

Measuring right thing but inconsistently (RARE)

If a measure isn't reliable, it usually can't be valid

Both Reliable and Valid

Consistently measuring the right thing

GOAL: This is what we strive for!

Key Principle

Reliability is necessary but not sufficient for validity. A measure must be reliable to be valid, but reliability alone doesn't guarantee validity.

Maximizing Validity and Reliability

  • Use established measures: Validated scales with known psychometric properties
  • Pilot test: Try measures with small sample before main study
  • Train data collectors: Ensure consistent procedures
  • Standardize conditions: Keep everything consistent except what you're manipulating
  • Random assignment: Gold standard for internal validity
  • Multiple measures: Assess constructs in multiple ways
  • Report thoroughly: Document all procedures for replication
Topic 5

Choosing Your Design

Selecting the right research design requires careful consideration of your research question, available resources, ethical constraints, and practical limitations. This topic provides a systematic framework for making design decisions and recognizing when compromises are necessary.

The Design Decision Framework

1. Start with Your Research Question

The type of question determines appropriate designs:

  • "Does X cause Y?" → Experimental or Quasi-Experimental
  • "Is X related to Y?" → Correlational/Survey
  • "What is the prevalence of X?" → Cross-Sectional Survey
  • "How does X change over time?" → Longitudinal
  • "What is the experience of X?" → Phenomenological/Qualitative
  • "How do people in context X behave?" → Ethnographic

2. Assess Feasibility Constraints

Consider practical limitations:

Budget

  • Do you need to pay participants?
  • Cost of materials/equipment?
  • Lab space or software fees?
  • Travel costs?

Time

  • How long until deadline?
  • Time needed for data collection?
  • Longitudinal designs feasible?
  • Time for participant recruitment?

Access

  • Can you reach target population?
  • Do you need gatekeepers' permission?
  • Special facilities required?
  • Specialized equipment available?

Expertise

  • Do you have needed skills?
  • Statistical knowledge sufficient?
  • Qualitative coding experience?
  • Can you get training/support?

3. Consider Ethical Issues

Some designs may be unethical:

Can you randomly assign?

May be unethical to withhold beneficial treatment or assign to harmful condition

Alternative: Wait-list control, quasi-experimental design

Can you manipulate the variable?

Cannot ethically manipulate some variables (e.g., expose to trauma, induce illness)

Alternative: Non-experimental correlational design

Vulnerable populations?

Children, prisoners, cognitively impaired require extra protections

Requirements: Additional consent, minimize risk, ensure benefits

Deception necessary?

Some studies require deception but must justify and debrief

Requirements: No alternatives, minimal risk, thorough debriefing

4. Evaluate Trade-offs

No design is perfect—understand what you're gaining and losing:

Design Choice            Gain                               Lose
Laboratory experiment    Internal validity, control         Ecological validity, realism
Field experiment         Ecological validity, realism       Control, internal validity
Large survey             Generalizability, breadth          Depth, rich detail
Qualitative study        Depth, context, meaning            Generalizability, statistical power
Longitudinal             Track change, stronger inference   Time, cost, attrition
Cross-sectional          Quick, efficient, affordable       Cannot track change

5. Select Your Design

Make informed choice balancing all factors:

Prioritize:
  1. Answer your question: Design must be capable of addressing your research question
  2. Meet ethical standards: Must receive IRB approval
  3. Be feasible: Must be doable with your resources
  4. Maximize validity: Choose strongest design given constraints

Common Design Dilemmas and Solutions

Dilemma: Want to establish causation but can't randomly assign

Options:

  • Use quasi-experimental design with strong controls
  • Include pre-test to assess baseline equivalence
  • Use statistical matching or propensity scores
  • Acknowledge limitation and use careful language about causation

Dilemma: Need large sample but have limited resources

Options:

  • Use online data collection platforms (MTurk, Prolific)
  • Partner with other researchers to pool samples
  • Focus on effect sizes rather than just significance
  • Do power analysis to determine minimum needed sample
  • Consider within-subjects design (needs fewer participants)
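A rough power analysis for comparing two group means can use the normal approximation n ≈ 2·((z₁₋α/₂ + z₁₋β)/d)² per group, where d is the standardized effect size. This is a sketch only; dedicated tools such as G*Power give slightly larger answers because they use the t distribution:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided comparison of two means
    at standardized effect size d (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)       # 1.96 for alpha = .05
    z_power = z(power)               # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

for d in (0.2, 0.5, 0.8):            # Cohen's small, medium, large effects
    print(f"d = {d}: {n_per_group(d)} per group")
```

The pattern is worth internalizing: halving the effect size roughly quadruples the required sample, which is why small expected effects demand large studies.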

Dilemma: Topic needs depth but also generalizability

Options:

  • Use mixed methods (qualitative + quantitative)
  • Sequential design: qualitative to develop measures, then quantitative survey
  • Do pilot qualitative study, then main quantitative study
  • Accept limitations and plan future studies to complement

Dilemma: Ideal design too time-consuming for timeline

Options:

  • Use cross-sectional design instead of longitudinal
  • Shorten intervention duration if justifiable
  • Use existing data (secondary analysis)
  • Conduct smaller pilot study now, larger study later

Mixed Methods Designs

Combining quantitative and qualitative approaches can provide more complete understanding:

Sequential Explanatory

QUAN → qual

Quantitative data collected first, then qualitative to explain findings

Example: Survey shows unexpected correlation; interviews explore why

Sequential Exploratory

QUAL → quan

Qualitative data collected first, then quantitative to test emerging themes

Example: Interviews identify key factors; survey tests prevalence in larger sample

Convergent Parallel

QUAN + QUAL

Both types collected simultaneously, then compared

Example: Survey data and interview data both address same questions; results triangulated

Embedded

QUAN(qual)

One approach embedded within larger design of other type

Example: RCT with post-intervention focus groups to understand participants' experiences

Document Your Decision Process

In your research proposal and final report, explain WHY you chose your design. Discuss:

  • How it aligns with your research question
  • Alternative designs you considered
  • Trade-offs you accepted
  • How you addressed limitations

This demonstrates thoughtful, intentional research planning.

Perfect is the Enemy of Good

The "perfect" design often doesn't exist given real-world constraints. Don't let the pursuit of perfection prevent you from conducting rigorous, valuable research. Choose the strongest design feasible for your circumstances, acknowledge its limitations honestly, and conduct your research with care and integrity.

Summary

Module 04 Key Takeaways

What You've Learned

  • Research design is the blueprint determining what conclusions you can draw from your study
  • True experiments require manipulation, control, and random assignment to establish causation
  • Non-experimental designs are essential when manipulation isn't possible or ethical
  • Validity (measuring correctly) and reliability (measuring consistently) are fundamental to quality research
  • Design choices involve trade-offs between internal validity, external validity, and practical feasibility

Next Steps

In Module 05: Sampling Methods, you'll learn how to select participants for your research. Discover probability and non-probability sampling techniques, determine appropriate sample sizes, and understand how sampling decisions affect the generalizability of your findings.

Practice

Design Selection Exercises

Practical Design Challenges

  1. Design Matching: For each research question below, identify the most appropriate research design and justify your choice:
    • Does exercise improve mood in depressed patients?
    • How do teachers experience curriculum changes?
    • What percentage of students use mental health services?
    • Is social media use correlated with loneliness?
  2. Threat Identification: Read a published study and identify potential threats to internal and external validity. How did the researchers address them (or not)?
  3. Design Your Study: Take your research question from Module 02 and:
    • Select an appropriate design
    • Justify why this design fits your question
    • Identify limitations and how you'll address them
    • Create a visual diagram of your design
  4. Validity Assessment: Evaluate the internal validity, external validity, construct validity, and reliability of a measurement instrument used in your field.
  5. Trade-off Analysis: Compare experimental and field study approaches for your topic. List pros and cons of each and decide which better serves your goals.