Theory General Feedback

Aristides Ferreira

Created on May 9, 2023

Transcript

Some Reflections / Motivations…

The following slides provide some theoretical background on validity and reliability.

Reliability

Test variance

  • True variance (%???) – the construct (see the numeric sketch below)
  • Error variance (%???) – other constructs, changes over time, evaluator effects

Measures

Temporal Stability – the same subjects obtain the same results at different measurement moments

Internal Consistency – each item should be measuring the same underlying variable

Inter-rater Agreement – agreement between raters, needed when the test application is not standardized

Temporal Stability – Test-retest

  • Measured by administering the test to the same group of subjects on two occasions
  • The two sets of scores are then correlated (thus, r ≥ .70/.90, p ≤ .05) (Kline, 2000)
  • The variance in test scores is partly attributable to error variance (the lower the correlation, the higher the error); a computational sketch follows this list
  • However, this approach has limitations:
    • Longitudinal design
    • Expensive
    • Hard to find the same participants
    • External variables (mood, motivation, anxiety...)
    • Memory effects
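
As a minimal sketch of the test-retest procedure, the two administrations can be correlated with a Pearson coefficient; the scores below are invented for illustration, and scipy is only one possible tool.

```python
# Test-retest reliability: correlate the scores of the same subjects
# obtained on two measurement occasions. All scores are hypothetical.
from scipy.stats import pearsonr

time1 = [12, 15, 9, 20, 17, 11, 14, 18]   # first administration
time2 = [13, 14, 10, 19, 18, 10, 15, 17]  # second administration, same subjects

r, p = pearsonr(time1, time2)
print(f"r = {r:.2f}, p = {p:.3f}")  # aim for r >= .70 (Kline, 2000)
```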

Internal Consistency – Cronbach's Alpha

  • Multiple correlations between items (calculates the mean value of all possible split-halves)
  • Measures error associated with item content
  • Numeric value ranging from 0 to 1; values ≥ .70 are acceptable (Nunnally, 1978; Kline, 2000) – see the sketch below
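
A small computational sketch of Cronbach's alpha from a subjects-by-items score matrix, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the data matrix is invented for illustration.

```python
# Cronbach's alpha from a (subjects x items) score matrix.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
import numpy as np

scores = np.array([  # hypothetical answers of 6 subjects to 4 items
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
    [3, 3, 4, 3],
])

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"alpha = {alpha:.2f}")  # acceptable if >= .70
```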

Validity

Meaning

“a test is said to be valid if it measures what it claims to measure” (Kline, 2000)

Concepts of Validity

  • Measurement Validity
  • Construct Validity
  • Convergent Validity
  • Discriminant Validity
  • Content Validity
  • Criterion Validity
  • Predictive Validity
  • Concurrent Validity

Content and Face Validity

  • The items' content covers the observed behavior (content)
  • The items' content covers the theory behind the construct (content)
  • Refers to the appearance of a test (face)
  • The test appears to measure what it claims to measure (face)

Content Validity Ratio (CVR)
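
The slide gives the CVR only by name; as an assumption, the sketch below uses the formula commonly attributed to Lawshe, CVR = (n_e - N/2) / (N/2), where n_e is the number of panel experts rating an item "essential" and N is the panel size.

```python
# Content Validity Ratio for a single item, assuming Lawshe's formula.
# Ranges from -1 (no expert rates the item "essential") to +1 (all experts do).
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel: 8 of 10 experts rate the item as "essential".
print(content_validity_ratio(8, 10))  # 0.6
```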

Construct Validity

  • A construct is a concept (e.g., work satisfaction, presenteeism, attitudes toward work, commitment...)
  • Represents the meaning and the nature of the measured concept
  • Can be “translated” into real behaviors

How to measure construct validity

  • Theory
  • Correlations with other measures of the same construct
  • Factor analysis...
  • ...exploratory (a brief sketch follows this list)
  • ...confirmatory
  • Experimental studies
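
As a rough sketch of the exploratory factor-analytic step, one common heuristic is to inspect the eigenvalues of the item correlation matrix and retain factors with eigenvalues above 1 (Kaiser criterion); the item scores below are simulated, and this rule is only one of several retention criteria.

```python
# Exploratory sketch: estimate how many factors underlie a set of items
# from the eigenvalues of their correlation matrix (Kaiser criterion).
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # one simulated construct
items = latent + 0.8 * rng.normal(size=(200, 5))   # five noisy items loading on it

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]       # sorted, largest first
n_factors = int((eigenvalues > 1).sum())
print(eigenvalues.round(2), "-> factors retained:", n_factors)
```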

Construct Validity (cont.)

  • Avoid strong correlations between items (Vaz Serra, 1994)
  • Convergent Validity (r ≥ .30)
  • Discriminant (or Divergent) Validity (r ≤ .20) – see the sketch below
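
A minimal sketch of checking these thresholds: correlate the scale with another measure of the same construct (convergent, expect r ≥ .30) and with a measure of an unrelated construct (discriminant, expect r ≤ .20). All scores are simulated for illustration.

```python
# Convergent vs. discriminant validity via correlations with a related
# and an unrelated measure. All scores are simulated.
import numpy as np

rng = np.random.default_rng(1)
scale = rng.normal(size=300)                                # the scale under study
same_construct = scale + rng.normal(scale=1.5, size=300)    # related measure
other_construct = rng.normal(size=300)                      # unrelated measure

r_convergent = np.corrcoef(scale, same_construct)[0, 1]
r_discriminant = np.corrcoef(scale, other_construct)[0, 1]
print(f"convergent r = {r_convergent:.2f} (expect >= .30)")
print(f"discriminant r = {r_discriminant:.2f} (expect <= .20)")
```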

Concurrent Validity

  • Correlation of the test with other tests in a single administration
  • Requires a benchmark measure
  • A high correlation (say, between .30 and .50) would be a demonstration of concurrent validity

Predictive Validity

  • A test is said to have predictive validity if it is sufficient to predict some appropriate criterion (r ≥ .30)
  • A typical example concerns intelligence tests (they are expected to predict academic/work performance); a brief sketch follows
  • A modest but positive correlation (> .20/.30) would be acceptable as evidence of predictive validity
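
A brief sketch of this predictive (criterion) check: correlate the test scores with a criterion measured later, such as performance ratings; all values are hypothetical.

```python
# Predictive validity: correlate test scores with a criterion obtained later
# (e.g., an intelligence test vs. subsequent performance ratings). Hypothetical data.
from scipy.stats import pearsonr

test_scores = [110, 95, 130, 105, 120, 88, 115, 100]    # test at time 1
performance = [3.4, 2.9, 4.1, 3.1, 3.8, 2.7, 3.5, 3.0]  # criterion at time 2

r, p = pearsonr(test_scores, performance)
print(f"r = {r:.2f}")  # a modest positive r (> .20/.30) supports predictive validity
```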

Validity cautions

  • Undefined criterion
  • Undefined concepts
  • Insufficient theory
  • Heterogeneous samples