Standardized computing environments promise to enhance the reproducibility of neuroimaging computational workflows. In this hackathon project we seek to develop a reproducibility assessment framework for testing and validating similar workflows run under different conditions. For a given workflow, which can be run on a local computer system, as a virtual machine, or in a cloud computing environment, we want to assess the variance introduced in workflow results as a function of the operating environment. This framework can then be extended to assess variation due to additional factors, such as software versions, workflow parameters, and input data. The end result of this project would be a standalone application that details the reproducibility of a workflow/result pair in the context of the various execution variables.
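As a minimal sketch of the core comparison step, the snippet below contrasts named summary metrics extracted from the same workflow run in different environments and flags those that diverge beyond a tolerance. The function name, metric names, and values are purely hypothetical placeholders, not part of any existing tool:

```python
def compare_results(runs, tol=1e-6):
    """Compare scalar metrics from workflow runs executed in different
    environments; return metrics whose spread across environments exceeds tol.

    runs: {environment_name: {metric_name: value}}  -- hypothetical layout
    """
    all_metrics = set().union(*(r.keys() for r in runs.values()))
    divergent = {}
    for metric in sorted(all_metrics):
        values = {env: r[metric] for env, r in runs.items() if metric in r}
        if len(values) < 2:
            continue  # nothing to compare for this metric
        spread = max(values.values()) - min(values.values())
        if spread > tol:
            divergent[metric] = values
    return divergent

# Illustrative (fabricated) results from three execution environments:
runs = {
    "local": {"brain_volume": 1450.2, "mean_fa": 0.4312},
    "vm":    {"brain_volume": 1450.2, "mean_fa": 0.4312},
    "cloud": {"brain_volume": 1450.9, "mean_fa": 0.4312},
}
print(compare_results(runs, tol=1e-3))  # flags only "brain_volume"
```

A real framework would likely compare richer artifacts (e.g., image volumes with numerical tolerances) and record provenance for each environment, but the per-metric spread check above captures the basic idea.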
This project hopes to leverage the NITRC-CE, nipype, testkraut, and existing shared data and shared results.