Theoretical background
As its name indicates, the Intelligence Structure Battery – Short Form is a simplified version of the Intelligence Structure Battery (INSBAT). Like INSBAT, it is based on the hierarchical Cattell-Horn-Carroll model of intelligence (Carroll, 1993, 2003; Horn, 1989; Horn & Noll, 1997), which assumes that broad secondary factors underlie the correlations between the individual primary factors or subtests. The correlations between the secondary factors are in turn explained by a general factor of intelligence, which forms the apex of the hierarchical intelligence model. The validity of this factor structure has been replicated in many studies from different countries (e.g. Arendasy, Hergovich & Sommer, 2008; Bickley, Keith & Wolfle, 1995; Carroll, 1989, 2003; Gustafsson, 1984; Undheim & Gustafsson, 1987).
For the Intelligence Structure Battery – Short Form the following secondary factors were selected: fluid intelligence, crystallised intelligence, quantitative reasoning, visual processing and long-term memory. With the exception of visual processing and long-term memory, each of the selected secondary factors is measured by two subtests: the subtest with the highest loading on the factor in question and an additional subtest that helps to capture the breadth of content of the secondary factor.
The eight subtests of the Intelligence Structure Battery – Short Form were created using a variety of approaches to automatic item generation (AIG: Arendasy & Sommer, in press; Irvine & Kyllonen, 2002), taking account of recent research findings in the cognitive sciences and applied psychometrics.
Administration
Unlike in INSBAT, the user of the Intelligence Structure Battery – Short Form can only omit entire secondary factors; it is not possible to omit individual subtests or to adjust their reliability to specific assessment needs. Each subtest is provided with standardised instructions and practice examples based on the principles of programmed instruction and “mastery learning”. Depending on the subtest, the respondent’s answers are given either in multiple-choice format or as automatically scored free responses. The items in the individual subtests are presented partly in power test form and partly with a time limit on each item. In seven of the eight subtests the items are presented as a computerised adaptive test (CAT), with the starting item selected on the basis of sociodemographic data; this maximises reliability and test security.
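The manual does not describe the item-selection rule used by the CAT engine; a common approach, shown here purely as an illustrative sketch, is to administer the unanswered item with maximum Fisher information at the current ability estimate. The item bank values are hypothetical.

```python
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information at the current estimate."""
    candidates = [b for b in item_bank if b not in administered]
    return max(candidates, key=lambda b: rasch_information(theta, b))

# Hypothetical bank of item difficulty parameters (logit scale)
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
print(next_item(0.3, bank, administered={0.5}))  # prints 0.0: closest remaining difficulty
```

Under the Rasch model, information peaks where item difficulty matches the ability estimate, which is why adaptive presentation yields high reliability with few items.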
Scoring
For each subtest the ability parameter is first calculated according to the 1PL Rasch model. However, since the reliability of the individual subtests is deliberately set low as standard, these test scores are not reported. They merely form the starting point for calculation of the real factors of interest – the secondary factors, which can be used to assess both intelligence structure and level. Alongside the factor scores, a norm comparison (percentile ranks and IQ scores, with confidence intervals) is reported. At the conclusion of testing the results are displayed both in tabular form and as a profile, and these can be printed out. In addition, INSSV has provision for transferring the test results automatically into a report template.
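The manual does not specify the estimation algorithm; as a minimal sketch, the Rasch ability parameter can be obtained by maximum likelihood using Newton-Raphson, since the log-likelihood is concave in theta. The response pattern and difficulties below are hypothetical.

```python
import math

def estimate_theta(responses, difficulties, iterations=25):
    """Maximum-likelihood ability estimate under the Rasch model (Newton-Raphson).
    responses: 0/1 item scores; difficulties: matching difficulty parameters.
    Note: the MLE does not exist for all-correct or all-incorrect patterns."""
    theta = 0.0
    for _ in range(iterations):
        grad = 0.0   # first derivative of the log-likelihood
        info = 0.0   # Fisher information (negative second derivative)
        for x, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            grad += x - p
            info += p * (1.0 - p)
        theta += grad / info
    return theta

# Hypothetical 5-item response pattern
print(estimate_theta([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0]))
```

The reciprocal square root of the accumulated information gives the standard error of the estimate, which is one conventional basis for the confidence intervals reported alongside the scores.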
Reliability
The reliability of the five secondary factors lies between 0.70 and 0.84. The reliability of the general factor is 0.91.
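The manual does not state how INSSV computes its confidence intervals; a conventional approach, shown here under that assumption, derives them from the classical standard error of measurement, SEM = SD · sqrt(1 − r), using the reported reliabilities.

```python
import math

def iq_confidence_interval(iq, reliability, sd=15.0, z=1.96):
    """95% confidence interval around an observed IQ score,
    using the classical standard error of measurement."""
    sem = sd * math.sqrt(1.0 - reliability)
    return (iq - z * sem, iq + z * sem)

# With the general factor's reliability of 0.91, SEM = 15 * sqrt(0.09) = 4.5
low, high = iq_confidence_interval(110, 0.91)
print(f"IQ 110, r = .91: 95% CI [{low:.1f}, {high:.1f}]")  # [101.2, 118.8]
```

The same formula applied to the secondary factors (r between 0.70 and 0.84) yields noticeably wider intervals, which is why the subtest scores themselves, with their deliberately lower reliability, are not reported.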
Validity
The construct representation (Embretson, 1983) of the individual subtests has been demonstrated in studies in which the item difficulties were predicted from task characteristics derived from the theoretical models for the solving of these types of task. The multiple correlations between the item difficulty parameters of the Rasch model (Rasch, 1980) and the item characteristics thus obtained vary for the individual subtests between R=0.70 and R=0.97. This means that between 49% and 94% of the variance in the difficulties of the individual items can be explained by the theoretical models on which construction of the items in the individual subtests is based.
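The percentage figures follow directly from squaring the multiple correlation, since R² is the proportion of variance explained:

```python
def variance_explained(r):
    """Proportion of variance explained by a (multiple) correlation R."""
    return r * r

for r in (0.70, 0.97):
    print(f"R = {r:.2f} -> R^2 = {variance_explained(r):.0%}")
```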
Many other studies of construct validity are now available that confirm the theory-led assignment of the individual subtests to the secondary factors of the Cattell-Horn-Carroll model (Arendasy & Sommer, 2007; Arendasy, Hergovich & Sommer, 2008; Sommer & Arendasy, 2005; Sommer, Arendasy & Häusler, 2005).
Evidence of criterion validity has come from the fields of aviation psychology (selection of trainee pilots) and educational counselling (prediction of student success at universities of applied sciences).