- Review the literature in the domain you wish to measure (e.g., “computer attitudes”).
- Develop a list of categories (subscales) to sample from the domain. For the domain “computer attitudes,” the categories might be “ease of use of computers” and “usefulness in education.”
- Write 8 to 10 items/statements (operational definitions) for each category (e.g., “Computers will help students learn material faster.”). Avoid common survey pitfalls when writing your statements.
- Give the items to at least 5 experts for classification (Content Validity). The panel of experts will attempt to match the operational definitions with their appropriate categories within the domain.
- Develop an instrument with the successfully classified items. Use a Likert scale to design your instrument. You may wish to rewrite some of the items that were not successfully classified.
- Field test the instrument with the population for which it is being developed, aiming for 6 to 10 respondents per item.
- Run an exploratory factor analysis on the field test responses. More advanced students may wish to do a confirmatory factor analysis.
- Name each factor (category) based on the items that load on it (loading > .40).
- Review whether each item conceptually belongs with its factor (subscale) and remove those that do not.
- Compute Cronbach’s alpha for each factor/category (subscale) to assess internal consistency reliability.
- Modify or drop items and retest the instrument if necessary (alpha < .70).
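The analytic steps above (exploratory factor analysis, the .40 loading rule, and Cronbach’s alpha) can be sketched in Python. This is a minimal illustration on simulated Likert responses, not part of the original procedure: it uses unrotated principal-component extraction from the correlation matrix as a simple stand-in for a full factor analysis, and all data and variable names are invented.

```python
# Minimal sketch: simulate Likert responses driven by two latent traits,
# extract two factors, assign items by the .40 loading rule, and compute
# Cronbach's alpha for each resulting subscale.
import numpy as np

rng = np.random.default_rng(0)

# 200 respondents, 8 five-point Likert items: items 0-3 reflect one latent
# trait, items 4-7 another (weights and noise levels are arbitrary choices).
n = 200
traits = rng.normal(size=(n, 2))
weights = np.zeros((8, 2))
weights[:4, 0] = 1.0
weights[4:, 1] = 0.8
raw = traits @ weights.T + rng.normal(scale=0.5, size=(n, 8))
responses = np.clip(np.round(raw + 3), 1, 5)  # squash onto a 1-5 scale

# Principal-component extraction from the correlation matrix (a simple,
# unrotated stand-in for exploratory factor analysis).
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
top = np.argsort(eigvals)[::-1][:2]              # two largest components
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])  # items x factors

# Keep, for each factor, the items whose absolute loading exceeds .40.
subscales = {f: [i for i in range(8) if abs(loadings[i, f]) > 0.40]
             for f in range(2)}

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

for f, cols in subscales.items():
    if len(cols) >= 2:
        alpha = cronbach_alpha(responses[:, cols])
        print(f"factor {f}: items {cols}, alpha = {alpha:.2f}")
```

In practice you would use a dedicated factor-analysis routine with rotation (e.g., the `factor_analyzer` Python package or R’s `psych::fa`) and then flag any subscale whose alpha falls below the .70 threshold noted above.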
by Del Siegle, Ph.D., http://www.gifted.uconn.edu/siegle/research/Instrument%20Reliability%20and%20Validity/instdeve.html (accessed January 7, 2010)