Topic outline

  • General

  • Unit 1: Collecting Data in Evaluation and Research: Instrument Planning

    In this unit, you will examine core concepts of assessment and how they apply to measuring abstract constructs that cannot be observed directly. The unit emphasizes the importance of a well-defined, clearly conceptualized construct as the first step in developing accurate and reliable instruments. You will also explore different instrument formats at varying levels of measurement. The unit ends with the conceptualization of the construct you wish to measure and its assessment context.

  • Unit 2: Instrument Design and Development

    In this unit, you will draw on a literature review and apply one of the available scaling methods to develop a pool of items that will measure your construct. At this stage, you are expected to make key decisions about the specific characteristics of your instrument, including its format, medium, data collection procedures, and planned analysis. You will also address ethical considerations. The unit ends with a pool of items ready for validation.

    Key concepts: scaling, instrument format, privacy, confidentiality, informed consent, access to data 

  • Unit 3: Instrument Quality

    This unit covers the final stage of instrument development: evaluating and validating the instrument. This stage takes a more empirical approach to operationalizing the construct you are measuring. You will create a pilot study plan for your instrument to establish its psychometric properties. You will also be introduced to the two main theories of measurement and how validity and reliability are estimated under each. The unit, and the course, culminates in conducting content and construct validation of your instrument.

    Key concepts: psychometric properties, validity, reliability, classical test theory, item response theory, content validity, face validity, convergent validity, discriminant validity, concurrent validity, predictive validity, stability, equivalence, internal consistency, inter-rater reliability
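    One of the internal consistency estimates listed above, Cronbach's alpha, can be computed directly from pilot data. The course does not prescribe any software, so the following is a minimal illustrative sketch in Python with NumPy (an assumption, not a course requirement), applying the standard formula: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), for k items.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a respondents-by-items score matrix.

        scores: 2-D array, one row per respondent, one column per item.
        """
        scores = np.asarray(scores, dtype=float)
        n_items = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical pilot data: 4 respondents answering 2 Likert-type items.
    pilot = [[2, 3], [3, 3], [4, 5], [5, 4]]
    print(round(cronbach_alpha(pilot), 3))
    ```

    In practice, alpha would be computed on the full pilot sample for each subscale separately; values of about .70 or higher are conventionally taken to indicate acceptable internal consistency.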