NCIDQ Exam Development

Like all credentialing and licensure exams, the NCIDQ Exam must be valid, reliable and fair. CIDQ follows the same standards for developing, administering and scoring the exam that virtually all other exams of similar professional importance do: the Standards for Educational and Psychological Testing, published jointly by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. These standards spell out the policies CIDQ follows to ensure that the test we administer is valid, reliable and fair.

To develop, administer and score the NCIDQ Exam, we work with a professional testing company that specializes in the development of certification and licensure exams. We follow accepted procedures for developing such exams and carefully document each step in the test-development process.

  1. Practice Analysis. CIDQ updated its practice analysis in 2014 to identify current knowledge and skills that define a minimally competent professional in interior design. During the practice analysis process, a panel of subject-matter experts defines the overall practice areas and distinct tasks, knowledge and skills required for competent performance. We then validate the job responsibilities developed by the experts through a survey of 720 practicing interior designers who review and rate the areas and tasks according to their level of importance and criticality to competent practice.
  2. Developing a Test Blueprint. CIDQ uses the results from the practice analysis to develop a “blueprint,” or template, for the NCIDQ Exam. Information from the analysis determines the importance of each area of practice or task, which is then translated into the percentage of questions allotted to that area in the multiple-choice sections of the exam. The analysis also determines how the scoring formula is weighted in the design part of the exam. All practicum problems are developed according to appropriate strategies and standards for the design and scoring of this type of exam, developed by independent organizations concerned with the quality of certification examinations (American Educational Research Association, American Psychological Association and National Council on Measurement in Education, 1999; National Commission for Certifying Agencies, 1995).
  3. Item Development and Validation. The question-writing process draws on both interior design and testing experts to ensure appropriate content and form. All exam questions are written and reviewed by volunteer subject-matter experts in interior design who are trained in developing questions and problems for the exam. During and after development, these experts review all of the questions to ensure they adhere to accepted principles for multiple-choice questions and design problems, as well as to grammatical and usage conventions. Questions accepted for use in the exam are judged, by consensus of the volunteers, to meet the following standards:
    • The knowledge tested by the question is essential for competent interior design and the protection of public health, safety and welfare.
    • The knowledge tested by the question is either moderately or extremely important in assessing competent interior design.
    • A correct response to the question would moderately or clearly differentiate adequate from inadequate performance for the competent interior designer.
    • The question has a verified reference.
    • The question is appropriate for the competent interior designer.
    • The answer or solution is correct.
    • The answer or solution can be defended if necessary.
    • The other answer choices are incorrect but plausible to a candidate who lacks the tested knowledge.
  4. Pretesting Test Questions. To verify the validity and quality of exam questions, CIDQ pretests new questions and problems before including them as scored items on the exam. (That's why some of the questions you complete in the multiple-choice exam sections are considered “experimental.”) The pretest results are analyzed statistically to ensure the quality and reliability of the overall exam. Even though the test questions change from one form to the next, we use a statistical procedure, known as equating, to ensure that no form of the test is harder or easier than another.
  5. Examination Assembly. Each part of the NCIDQ Exam is created by selecting the appropriate number of questions from each content area, as specified in the test blueprint. A panel of interior designers works with testing experts during test assembly and validation to ensure maximum quality (in accordance with the considerations listed above) and an appropriate mixture of content.
  6. Examination Review and Revision. The draft exams are again reviewed by interior designers for technical accuracy and by testing experts to ensure integrity.
  7. Passing Point. An exam that is used for registration or licensure must have a defensible, criterion-referenced passing score. The score that separates candidates who pass from candidates who fail must be based on the minimum competence required to protect the public from harm. CIDQ works with our testing consultant to determine this passing point, which we define as the level of an interior designer's ability to practice independently (without supervision) in a manner that will protect the public health, safety and welfare.
  8. Test Administration. Test administration procedures for the NCIDQ Exam ensure consistent, comfortable testing conditions for all candidates. Test administration guidelines specify a process for admitting candidates into the exam room, verifying their identity, using seating charts, displaying information signs, providing security, allotting testing times and other important considerations. Testing facilities meet CIDQ guidelines for security, proper room size, ventilation, restroom facilities, accessibility and noise control. Test administration personnel are thoroughly trained in the requirements for administering the test.
  9. Psychometric Analysis. Following each exam administration, CIDQ’s consultant conducts systematic analyses to ensure that each question and problem, and the test as a whole, function properly. This psychometric analysis includes extensive reliability studies as well as other evaluations of the quality of the exam.
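The translation in step 2 from practice-analysis results to a blueprint can be illustrated with a short sketch. This is not CIDQ's actual formula: the content areas and ratings below are invented, and allocating questions in proportion to mean importance ratings is only one plausible rule.

```python
# Hypothetical sketch: allocating multiple-choice questions to content areas
# in proportion to their mean importance ratings from a practice analysis.
# Area names and ratings are invented for illustration, not CIDQ data.

def blueprint_allocation(importance_ratings, total_questions):
    """Return a question count per content area, proportional to importance."""
    total_weight = sum(importance_ratings.values())
    return {
        area: round(total_questions * weight / total_weight)
        for area, weight in importance_ratings.items()
    }

# Hypothetical mean importance ratings on a 1-5 survey scale.
ratings = {
    "Codes and Standards": 4.6,
    "Design Communication": 3.9,
    "Contract Administration": 3.5,
}
allocation = blueprint_allocation(ratings, 100)
```

Areas rated more important by the surveyed practitioners receive proportionally more of the exam's questions, which is the essential idea behind any test blueprint.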
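The procedure in step 4 that keeps one test form from being harder or easier than another is generically called equating. As a hedged illustration only (CIDQ's actual procedure is not described above), one simple textbook approach, mean-sigma linear equating, maps scores on a new form onto a reference form's scale:

```python
import statistics

def linear_equate(score, new_form_scores, reference_form_scores):
    """Map a raw score on a new form onto the reference form's scale by
    matching the two score distributions' means and standard deviations
    (mean-sigma linear equating). All inputs here are illustrative."""
    mu_new = statistics.mean(new_form_scores)
    sd_new = statistics.stdev(new_form_scores)
    mu_ref = statistics.mean(reference_form_scores)
    sd_ref = statistics.stdev(reference_form_scores)
    return mu_ref + sd_ref * (score - mu_new) / sd_new

# If the new form averaged 70 and the reference form 75 (with equal spread),
# a 70 on the new form is treated as comparable to a 75 on the reference form.
equated = linear_equate(70, [60, 70, 80], [65, 75, 85])
```

The effect is that a candidate's result depends on ability rather than on which form of the exam happened to be administered.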
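Criterion-referenced passing points like the one in step 7 are commonly set with a panel-based method such as the modified Angoff procedure; whether CIDQ uses this particular method is not stated above, so the sketch below is purely illustrative. Each judge estimates, item by item, the probability that a minimally competent candidate answers correctly; the cut score is the sum of the item averages.

```python
def angoff_cut_score(panel_ratings):
    """panel_ratings: one list per judge, each giving that judge's estimated
    probability (0.0-1.0) that a minimally competent candidate answers each
    item correctly. Returns the cut score in raw points (sum of item means).
    Illustrative only; not CIDQ's documented standard-setting method."""
    n_judges = len(panel_ratings)
    n_items = len(panel_ratings[0])
    item_means = [
        sum(judge[i] for judge in panel_ratings) / n_judges
        for i in range(n_items)
    ]
    return sum(item_means)

# Two hypothetical judges rating a three-item exam: the cut score works out
# to 2.2 raw points out of 3.
cut = angoff_cut_score([
    [0.8, 0.6, 0.9],
    [0.6, 0.8, 0.7],
])
```

Because the ratings are anchored to the "minimally competent candidate," the resulting passing score is defensible in the criterion-referenced sense the text describes: it reflects required competence, not a fixed percentage or a curve.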
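The reliability analysis in step 9 typically involves internal-consistency statistics; a classic one for right/wrong-scored items is Kuder-Richardson 20 (KR-20). The sketch below is a generic implementation of that statistic, not a description of the analyses CIDQ's consultant actually runs.

```python
def kr20(responses):
    """Kuder-Richardson 20 internal-consistency reliability estimate.
    responses: one list per candidate, each a list of 0/1 item scores.
    Generic illustration; not CIDQ's documented analysis."""
    n_items = len(responses[0])
    n_cands = len(responses)
    # Proportion of candidates answering each item correctly.
    p = [sum(row[i] for row in responses) / n_cands for i in range(n_items)]
    pq_sum = sum(pi * (1 - pi) for pi in p)
    # Population variance of candidates' total scores.
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n_cands
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_cands
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)
```

Values near 1.0 indicate that the items measure a common underlying competence consistently, which is the kind of evidence a reliability study supplies.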