Is my test Valid and Reliable?


1. Reliable?

1.1. Test-Retest: Did it work more than once?

1.1.1. Once is lucky, but a test with repeated success is reliable. A test is reliable "if it consistently yields the same, or nearly the same, ranks over repeated administrations during which we would not expect the trait being measured to have changed" (Kubiszyn & Borich, 2010).
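The test-retest idea above can be sketched numerically: the reliability coefficient is simply the correlation between students' scores on the two administrations. This is a minimal sketch with made-up scores, not data from the text.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students on two administrations of the same test
first = [88, 72, 95, 60, 79]
second = [90, 70, 93, 63, 80]
print(round(pearson_r(first, second), 3))  # ≈ 0.987: students keep nearly the same ranks
```

A coefficient near 1.0 means the ranking of students barely changed between administrations, which is exactly what the Kubiszyn and Borich definition asks for.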

1.2. Alternate Forms or Equivalence: Although I've changed my appearance, am I still the same underneath?

1.2.1. If a test is going to benefit the students, reliability is key. A test can be considered reliable if it can have an alternate form and still work just as well. "If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test" (Kubiszyn & Borich, 2010).

1.3. Internal Consistency: It's what's on the inside that counts.

1.3.1. Do all the questions inside the test connect? Are they "designed to measure a single basic concept?" If you were to split the test in two, would each half measure each student on the same things? If not, then your test is not reliable. If so, then the reliability increases.
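The split-the-test-in-two question above is what the split-half method formalizes: score the odd items and the even items separately, correlate the two half-totals, then adjust with the Spearman-Brown formula (since each half is only half as long as the real test). The item scores below are invented for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def split_half_reliability(item_scores):
    """item_scores: one list of per-item scores (1 = correct) per student.
    Splits items into odd/even halves, correlates the half totals,
    then applies the Spearman-Brown correction for full test length."""
    odd = [sum(s[0::2]) for s in item_scores]
    even = [sum(s[1::2]) for s in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula

# Hypothetical six-item quiz, one row per student
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
]
print(round(split_half_reliability(scores), 3))  # ≈ 0.595
```

If the two halves really do "measure a single basic concept," the half-totals correlate strongly and the coefficient approaches 1.0; a low value is a warning that the items do not hang together.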

2. Valid?

2.1. Content Validity: Does it look good?

2.1.1. Predictive Validity Evidence: Does it prepare us for future units and lessons? Did it accurately predict the future? If I am using this test to make judgments about my students' future abilities, it must be accurate, showing a strong correlation between prediction and later performance. "High predictive validity evidence provides a strong argument for the worth of the test" (Kubiszyn & Borich, 2010).
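The "strong correlation between prediction and future actions" above is, in practice, a correlation between the test scores and the criterion they are meant to predict. A minimal sketch with invented numbers: a placement test's scores against the unit grades those same students later earned.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

# Hypothetical placement-test scores and the unit grades (percent)
# the same five students earned weeks later
placement = [55, 68, 74, 81, 92]
unit_grade = [60, 65, 78, 85, 90]
print(round(pearson_r(placement, unit_grade), 3))  # ≈ 0.963
```

The higher this predictive validity coefficient, the more confidently the test can be used to make judgments about how students will perform later; a coefficient near zero would mean the test predicts nothing.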

2.2. It is vital for the questions on the test to match the instructional objectives. Is this assessment complementing the goals of the instruction? If not, what's the point of the assessment?

2.3. Concurrent Validity Evidence: He makes me look good!

2.3.1. Construct Validity Evidence: Is it built well? Is the test built well enough that it works? Did the test provide "any information that lets you know whether results from the test correspond to what you would expect (based on your knowledge of what is being measured)" (Kubiszyn & Borich, 2010)?

2.4. A new test needs credibility. What better way is there to gain credibility than piggybacking off an "established test group" (Kubiszyn & Borich, 2010)? However, in order for your test to be valid, it needs to hold its own when the "correlation between the two sets of test scores" (Kubiszyn & Borich, 2010) is being determined.