Contributing

catsim is built in an object-oriented paradigm (or as object-oriented as Python allows), so it is rather simple to extend. You can write new initializers, selectors, estimators or stopping criteria by extending the base classes present in each of the corresponding modules. You can also write new IRT-related or CAT-related functions, as long as you can point to the academic literature that shows they are relevant.
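For example, a new item selector can be written by subclassing the base class in the selection module. The sketch below is illustrative only: the class name MyRandomSelector is made up, and the select() signature shown here is an assumption, so check catsim.selection for the exact interface before extending it.

    import numpy

    from catsim.selection import Selector


    class MyRandomSelector(Selector):  # hypothetical example, not part of catsim
        """Selects a random item among those not yet administered."""

        def select(self, index=None, items=None, administered_items=None,
                   est_theta=None, **kwargs):
            # indices of the items that have not been administered yet
            available = [i for i in range(items.shape[0])
                         if i not in administered_items]
            return numpy.random.choice(available)

Once written, such a class can be passed to the simulator wherever a selector is expected, just like the selectors that ship with the package.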

If you think the simulator could be doing something it is currently not doing, feel free to study it and make a contribution.

If you know a better way to present the data collected during a simulation, feel free to contribute your own plots.
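As a starting point, even plain matplotlib is enough to show how the ability estimate evolves during a test; the est_thetas list below is a stand-in for whatever your simulation records.

    import matplotlib.pyplot as plt

    # stand-in data: ability estimates recorded after each administered item
    est_thetas = [0.0, -0.4, 0.1, 0.35, 0.5, 0.48, 0.52]

    plt.plot(range(1, len(est_thetas) + 1), est_thetas, marker="o")
    plt.xlabel("Administered items")
    plt.ylabel("Estimated theta")
    plt.title("Ability estimation during a simulated test")
    plt.show()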

Psychometrics

catsim still has a way to go before it can be considered a mature package. Here is a list of features that may be of interest:

  • Bayesian estimators (maybe using PyMC);
  • Test evaluation indexes;
  • Comparison between simulation results (for example [Barr2010]);
  • Other information functions, selection methods based on intervals or areas of information etc. (see [Lind2010]).

Software architecture

If you think you have a better idea of how the classes should be arranged to make the package more comprehensive and extensible, your contribution is also welcome. Be aware, however, that major changes to the software may affect other users, so the package will only be released to the public after it has been tested accordingly.

Unit testing

If you are interested in making a contribution to catsim, make sure it is covered by the package's testing module. If new features require new unit tests, don't hesitate to write them. If you think something already in the package is worth testing, you are welcome to create a unit test for it.
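A test for a new component can be as small as the sketch below. It reuses the hypothetical MyRandomSelector from the earlier example (the import path is made up) and assumes generate_item_bank is available in catsim.cat; adapt both to the code you are actually testing.

    from catsim.cat import generate_item_bank

    from mypackage.selectors import MyRandomSelector  # hypothetical module path


    def test_selector_skips_administered_items():
        """The selector must never pick an item that was already administered."""
        items = generate_item_bank(50)  # random item parameter matrix
        administered = [0, 1, 2]
        chosen = MyRandomSelector().select(items=items,
                                           administered_items=administered,
                                           est_theta=0.0)
        assert chosen not in administered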

How to contribute

Contributing code: create a fork on GitHub, make your changes in your own repository and then send a pull request to our testing branch so that we can check your contributions. Make sure your version passes the unit tests.

Contributing ideas: file an issue on GitHub, label it as an enhancement and describe as thoroughly as possible what could be done.

Blaming us: file an issue on GitHub describing the problem as thoroughly as possible, with error messages, a description of your tests and, if possible, suggestions for fixing it.

[Lind2010] Linden, W. J. V. D., & Pashley, P. J. (2010). Item Selection and Ability Estimation in Adaptive Testing. In Linden, W. J. V. D., & Glas, C. A. W. (Eds.), Elements of Adaptive Testing. New York, NY, USA: Springer New York.
[Barr2010] Barrada, J. R., Olea, J., Ponsoda, V., & Abad, F. J. (2010). A Method for the Comparison of Item Selection Rules in Computerized Adaptive Testing. Applied Psychological Measurement, 34(6), 438–452. http://doi.org/10.1177/0146621610370152