# Computerized Adaptive Test-related Functions – catsim.cat¶

Functions used specifically during the application/simulation of computerized adaptive tests.

catsim.cat.bias(actual: list, predicted: list)[source]

Calculates the test bias, an evaluation criterion for computerized adaptive test methodologies [Chang2001]. The value is calculated by:

$Bias = \frac{\sum_{i=1}^{N} (\hat{\theta}_i - \theta_{i})}{N}$

where $$\hat{\theta}_i$$ is examinee $$i$$'s estimated proficiency and $$\theta_i$$ is examinee $$i$$'s actual proficiency.

Parameters:

• actual (list) – a list or 1-D numpy array containing the true proficiency values
• predicted (list) – a list or 1-D numpy array containing the estimated proficiency values

Returns: the bias between the predicted values and the actual values.
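The bias formula above reduces to the mean of the estimation errors. A minimal numpy sketch (an illustration of the formula, not catsim's own implementation):

```python
import numpy as np

def bias(actual, predicted):
    """Mean of (estimated - true) proficiencies; positive values
    indicate systematic overestimation."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(predicted - actual))

# Every estimate is 0.5 above the true value, so the bias is 0.5.
print(bias([0.0, 1.0, -1.0], [0.5, 1.5, -0.5]))
```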
catsim.cat.dodd(theta: float, items: numpy.ndarray, correct: bool) → float[source]

Method proposed by [Dod90] for the reestimation of $$\hat{\theta}$$ when the response vector is composed entirely of 1s or 0s.

$\begin{split}\hat{\theta}_{t+1} = \left\lbrace \begin{array}{ll} \hat{\theta}_t+\frac{b_{max}-\hat{\theta}_t}{2} & \text{if } X_t = 1 \\ \hat{\theta}_t-\frac{\hat{\theta}_t-b_{min}}{2} & \text{if } X_t = 0 \end{array} \right\rbrace\end{split}$
Parameters:

• theta (float) – the initial proficiency level
• items (ndarray) – a numpy array containing the parameters of the items in the database; necessary to obtain the maximum and minimum difficulty levels used by the method
• correct (bool) – whether or not the examinee correctly answered the current item

Returns: a new estimation for $$\theta$$

Return type: float
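Dodd's heuristic simply moves the estimate halfway toward the most extreme item difficulty in the bank. A sketch of the rule above, assuming (as elsewhere in catsim's item matrices) that difficulty is the second column; this is an illustration, not the library's code:

```python
import numpy as np

def dodd(theta, items, correct):
    """Halve the distance to b_max after a correct answer,
    or to b_min after an incorrect one."""
    b = items[:, 1]                      # difficulty column (assumption)
    if correct:
        return theta + (b.max() - theta) / 2
    return theta - (theta - b.min()) / 2

# Three items with difficulties -2, 0 and 2 (a, b, c, d columns).
items = np.array([[1.0, -2.0, 0.0, 1.0],
                  [1.0,  0.0, 0.0, 1.0],
                  [1.0,  2.0, 0.0, 1.0]])
print(dodd(0.0, items, True))    # moves halfway toward b_max = 2
print(dodd(0.0, items, False))   # moves halfway toward b_min = -2
```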
catsim.cat.generate_item_bank(n: int, itemtype: str = '4PL', corr: float = 0)[source]

Generate a synthetic item bank whose parameters approximately follow real-world parameters, as proposed by [Bar10].

Item parameters are extracted from the following probability distributions:

• discrimination: $$N(1.2, 0.25)$$
• difficulty: $$N(0, 1)$$
• pseudo-guessing: $$N(0.25, 0.02)$$
• upper asymptote: $$U(0.94, 1)$$
Parameters:

• n (int) – how many items are to be generated
• itemtype (str) – either 1PL, 2PL, 3PL or 4PL for the one-, two-, three- or four-parameter logistic model
• corr (float) – the correlation between item discrimination and difficulty; ignored if itemtype == '1PL'

Returns: an n x 4 numerical matrix containing the item parameters

Return type: numpy.ndarray
>>> generate_item_bank(5, '1PL')
>>> generate_item_bank(5, '2PL')
>>> generate_item_bank(5, '3PL')
>>> generate_item_bank(5, '4PL')
>>> generate_item_bank(5, '4PL', corr=0)
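A hedged sketch of how such a bank could be sampled from the distributions listed above (illustrative only, not catsim's implementation; it ignores the discrimination–difficulty correlation and clips pseudo-guessing into [0, 1]):

```python
import numpy as np

def synth_item_bank(n, seed=0):
    """Sample an n x 4 item matrix (a, b, c, d columns) from the
    distributions given in the documentation above."""
    rng = np.random.default_rng(seed)
    a = rng.normal(1.2, 0.25, n)                   # discrimination ~ N(1.2, 0.25)
    b = rng.normal(0.0, 1.0, n)                    # difficulty ~ N(0, 1)
    c = np.clip(rng.normal(0.25, 0.02, n), 0, 1)   # pseudo-guessing ~ N(0.25, 0.02)
    d = rng.uniform(0.94, 1.0, n)                  # upper asymptote ~ U(0.94, 1)
    return np.column_stack([a, b, c, d])

bank = synth_item_bank(5)
print(bank.shape)  # (5, 4)
```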

catsim.cat.mse(actual: list, predicted: list)[source]

Mean squared error, a value used to measure the precision with which a computerized adaptive test estimates examinees' proficiencies [Chang2001]. The value is calculated by:

$MSE = \frac{\sum_{i=1}^{N} (\hat{\theta}_i - \theta_{i})^2}{N}$

where $$\hat{\theta}_i$$ is examinee $$i$$'s estimated proficiency and $$\theta_i$$ is examinee $$i$$'s actual proficiency.

Parameters:

• actual (list) – a list or 1-D numpy array containing the true proficiency values
• predicted (list) – a list or 1-D numpy array containing the estimated proficiency values

Returns: the mean squared error between the predicted values and the actual values.
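The formula above is the mean of the squared estimation errors. A minimal numpy sketch of the computation (not catsim's implementation):

```python
import numpy as np

def mse(actual, predicted):
    """Mean of the squared differences between estimates and true values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((predicted - actual) ** 2))

# Errors of +0.5 and -0.5: (0.25 + 0.25) / 2 = 0.25
print(mse([0.0, 1.0], [0.5, 0.5]))
```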
catsim.cat.overlap_rate(usages: numpy.ndarray, test_size: int) → float[source]

Test overlap rate, an average measure of the proportion of items shared by the tests of two examinees [Bar10]. It is given by:

$T=\frac{Q}{N}S_{r}^2 + \frac{N}{Q}$

where $$N$$ is the number of items in a test, $$Q$$ is the number of items in the bank and $$S_r^2$$ is the variance of the item exposure rates. If, for example, $$T = 0.5$$, the tests of two random examinees share, on average, 50% of their items.

Parameters:

• usages (ndarray) – a list or numpy.ndarray containing the number of times each item was used in the tests
• test_size (int) – the number of items in a test

Returns: the test overlap rate

Return type: float
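A sketch of the overlap-rate formula, assuming `usages` counts item administrations across examinees who each took a test of length `test_size` (an illustration of the formula, not catsim's implementation):

```python
import numpy as np

def overlap_rate(usages, test_size):
    """T = (Q/N) * var(r) + N/Q, where r are the item exposure rates."""
    usages = np.asarray(usages, dtype=float)
    q = usages.size                       # Q: number of items in the bank
    n = test_size                         # N: number of items in a test
    examinees = usages.sum() / n          # each examinee uses N items
    r = usages / examinees                # exposure rate of each item
    return float(q / n * r.var() + n / q)

# 4 items, tests of length 2, every item used equally often:
# the variance term vanishes and T = N/Q = 0.5, the minimum overlap.
print(overlap_rate(np.array([2, 2, 2, 2]), 2))
```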
catsim.cat.rmse(actual: list, predicted: list)[source]

Root mean squared error, a common measure of the precision with which a computerized adaptive test estimates examinees' proficiencies [Bar10]. The value is calculated by:

$RMSE = \sqrt{\frac{\sum_{i=1}^{N} (\hat{\theta}_i - \theta_{i})^2}{N}}$

where $$\hat{\theta}_i$$ is examinee $$i$$'s estimated proficiency and $$\theta_i$$ is examinee $$i$$'s actual proficiency.

Parameters:

• actual (list) – a list or 1-D numpy array containing the true proficiency values
• predicted (list) – a list or 1-D numpy array containing the estimated proficiency values

Returns: the root mean squared error between the predicted values and the actual values.
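RMSE is simply the square root of the MSE above, bringing the error back to the scale of $$\theta$$. A minimal numpy sketch (not catsim's implementation):

```python
import numpy as np

def rmse(actual, predicted):
    """Square root of the mean squared estimation error."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# MSE is 0.25, so RMSE is sqrt(0.25) = 0.5
print(rmse([0.0, 1.0], [0.5, 0.5]))
```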