Processing pipeline details¶
fMRIPrep adapts its pipeline depending on what data and metadata are
available in the input dataset.
For example, slice timing correction will be performed only if the
SliceTiming metadata field is found for the input dataset.
A (very) high-level view of the simplest pipeline (for a single-band dataset with only one task, a single run, and neither slice-timing information nor fieldmap acquisitions) is presented below:
Preprocessing of structural MRI¶
The anatomical sub-workflow begins by constructing an average image by conforming all found T1w images to RAS orientation and a common voxel size, and, in the case of multiple images, averages them into a single reference template (see Longitudinal processing).
Occasionally, openly shared datasets may contain preprocessed anatomical images
as if they were unprocessed.
In the case of brain-extracted (skull-stripped) T1w images, attempting to perform
brain extraction again will often produce poor results and may cause fMRIPrep to crash.
fMRIPrep can attempt to detect these cases using a heuristic that checks whether the
T1w image is already masked.
This must be explicitly requested with --skull-strip-t1w auto.
If this heuristic fails and you know your images are skull-stripped, you can skip brain
extraction with --skull-strip-t1w skip.
Likewise, if you know your images are not skull-stripped and the heuristic incorrectly
determines that they are, you can force skull stripping with --skull-strip-t1w force,
which is the current default behavior.
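The heuristic mentioned above can be illustrated with a toy sketch. This is a hypothetical simplification, not fMRIPrep's actual check: the idea is only that a brain-extracted image contains a large proportion of exactly-zero background voxels, whereas a raw T1w does not.

```python
# Hypothetical sketch of an "is this T1w already skull-stripped?" heuristic.
# fMRIPrep's actual implementation differs; this only illustrates the idea
# that a brain-extracted image has many exactly-zero background voxels.

def looks_skull_stripped(volume, zero_fraction_threshold=0.45):
    """Return True if the voxel array is mostly exact zeros.

    ``volume`` is a nested list-of-lists-of-lists of intensities.
    """
    n_total = 0
    n_zero = 0
    for plane in volume:
        for row in plane:
            for voxel in row:
                n_total += 1
                if voxel == 0:
                    n_zero += 1
    return (n_zero / n_total) > zero_fraction_threshold
```

A stripped image (zeroed background) trips the threshold; a raw image with nonzero background intensities does not.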
See also the corresponding sMRIPrep workflow documentation.
Brain extraction, brain tissue segmentation and spatial normalization¶
Then, the T1w reference is skull-stripped using a Nipype implementation of
the antsBrainExtraction.sh tool (ANTs), which is an atlas-based
brain extraction workflow:
An example of brain extraction is shown below:
Once the brain mask is computed, FSL
fast is utilized for brain tissue segmentation.
Finally, spatial normalization to standard spaces is performed using ANTs' antsRegistration
in a multiscale, mutual-information based, nonlinear registration scheme.
See Defining standard and nonstandard spaces where data will be resampled for information about how standard and nonstandard spaces can
be set to resample the preprocessed data onto the final output spaces.
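The mutual information metric that drives this registration can be sketched in a few lines. This discrete version is illustration only; real implementations (such as ANTs) use smoothed, Parzen-windowed histograms over many intensity bins.

```python
import math
from collections import Counter

# Minimal sketch of the mutual information (MI) similarity metric used by
# intensity-based registration. The optimizer seeks the spatial transform
# that maximizes MI between the moving and fixed images' intensities.

def mutual_information(xs, ys):
    """MI (in nats) between two paired, discretized intensity sequences."""
    n = len(xs)
    p_xy = Counter(zip(xs, ys))  # joint counts
    p_x = Counter(xs)            # marginal counts
    p_y = Counter(ys)
    mi = 0.0
    for (x, y), c in p_xy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint * n * n / (p_x[x] * p_y[y]))
    return mi
```

Perfectly co-registered (identical) intensity patterns yield MI = log 2 for a balanced binary pattern, while statistically independent patterns yield MI = 0.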
Cost function masking during spatial normalization¶
When processing images from patients with focal brain lesions (e.g., stroke, tumor
resection), it is possible to provide a lesion mask to be used during spatial
normalization to standard space [Brett2001].
ANTs will use this mask to minimize warping of healthy tissue into damaged
areas (or vice-versa).
Lesion masks should be binary NIfTI images (damaged areas = 1, everywhere else = 0)
in the same space and resolution as the T1w image, and follow the naming convention specified in
BIDS Extension Proposal 3: Common Derivatives.
This file should be placed in the
sub-*/anat directory of the BIDS dataset
to be run through fMRIPrep.
Because lesion masks are not currently part of the BIDS specification, it is also necessary to
include a .bidsignore file in the root of your dataset directory. This will prevent
bids-validator from complaining
that your dataset is not valid BIDS, which would prevent fMRIPrep from running.
The .bidsignore file should include a line matching the lesion mask filenames.
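Before running fMRIPrep, a lesion mask can be sanity-checked against the binary requirement stated above. This is a hypothetical helper (fMRIPrep does not ship it), operating on voxel data already loaded into a flat list, e.g., with nibabel.

```python
# Hypothetical sanity check restating the requirement from the text:
# a lesion mask must be binary, with 1 marking damaged tissue and 0 elsewhere.

def validate_lesion_mask(voxels):
    """Raise ValueError if ``voxels`` is not a valid binary lesion mask."""
    values = set(voxels)
    extra = sorted(values - {0, 1})
    if extra:
        raise ValueError("Lesion mask must be binary; found values %r" % extra)
    if 1 not in values:
        raise ValueError("Lesion mask is empty (no voxels labeled 1)")
    return True
```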
In the case of multiple T1w images (across sessions and/or runs), T1w images are
merged into a single template image using FreeSurfer’s mri_robust_template.
This template may be unbiased, i.e., equidistant from all source images, or
aligned to the first image (determined lexicographically by session label).
For two images, the additional cost of estimating an unbiased template is
trivial and this is the default behavior; for more than two images, however, the cost
can be a slowdown of an order of magnitude.
Therefore, in the case of three or more images, fMRIPrep constructs
templates aligned to the first image, unless passed the --longitudinal
flag, which forces the estimation of an unbiased template.
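The policy described above can be summarized in a short decision function. This mirrors the prose rather than fMRIPrep's actual source, and the function name is an assumption for illustration.

```python
# Sketch of the template-construction policy described in the text:
# unbiased templates for up to two images, align-to-first for three or more,
# unless the --longitudinal flag forces the unbiased estimate.

def template_mode(n_t1w_images, longitudinal=False):
    """Return which mri_robust_template mode the prose above implies."""
    if n_t1w_images <= 2 or longitudinal:
        return "unbiased"
    return "align-to-first"
```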
The preprocessed T1w image defines the T1w space.
In the case of multiple T1w images, this space may not be precisely aligned
with any of the original images.
Reconstructed surfaces and functional datasets will be registered to the
T1w space, and not to the input images.
fMRIPrep uses FreeSurfer to reconstruct surfaces from T1w/T2w images.
If enabled, several steps in the fMRIPrep pipeline are added or replaced.
All surface preprocessing may be disabled with the --fs-no-reconall flag.
Surface processing will be skipped if the outputs already exist.
In order to bypass reconstruction in fMRIPrep, place existing reconstructed subjects in
<output dir>/freesurfer prior to the run, or specify an external
subjects directory with the --fs-subjects-dir flag.
fMRIPrep will perform any missing
recon-all steps, but will not perform
any steps whose outputs already exist.
If FreeSurfer reconstruction is performed, the reconstructed subject is placed in
<output dir>/freesurfer/sub-<subject_label>/ (see FreeSurfer derivatives).
Surface reconstruction is performed in three phases.
The first phase initializes the subject with T1w and T2w (if available)
structural images and performs basic reconstruction (
autorecon1) with the
exception of skull-stripping.
Skull-stripping is skipped since the brain mask calculated previously is injected into the appropriate location for FreeSurfer.
For example, a subject with only one session with T1w and T2w images
would be processed by the following command:
$ recon-all -sd <output dir>/freesurfer -subjid sub-<subject_label> \
      -i <bids-root>/sub-<subject_label>/anat/sub-<subject_label>_T1w.nii.gz \
      -T2 <bids-root>/sub-<subject_label>/anat/sub-<subject_label>_T2w.nii.gz \
      -autorecon1 \
      -noskullstrip
The second phase imports the brainmask calculated in the
Preprocessing of structural MRI sub-workflow.
The final phase resumes reconstruction, using the T2w image to assist
in finding the pial surface, if available.
Reconstructed white and pial surfaces are included in the report.
If T1w voxel sizes are less than 1mm in all dimensions (rounding to the nearest
0.1mm), submillimeter reconstruction is used, unless disabled with --no-submm-recon.
If T2w or FLAIR images are available and you do not want them included in
FreeSurfer reconstruction, use the
--ignore t2w or --ignore flair flags, respectively.
The lh.midthickness and rh.midthickness surfaces are created in the subject
surf/ directory, corresponding to the surface half-way between the gray/white
boundary and the pial surface.
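For matched white and pial meshes, "half-way between" reduces to the per-vertex midpoint. The following is an illustration on plain coordinate tuples, not FreeSurfer's implementation.

```python
# The midthickness surface lies halfway between the gray/white boundary and
# the pial surface. For vertex-matched meshes this is the per-vertex midpoint.

def midthickness(white_vertices, pial_vertices):
    """Midpoint of each corresponding (x, y, z) vertex pair."""
    return [
        tuple((w + p) / 2.0 for w, p in zip(wv, pv))
        for wv, pv in zip(white_vertices, pial_vertices)
    ]
```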
These surfaces, along with the inflated surfaces, are also
converted to GIFTI format and adjusted to be compatible with multiple software
packages, including FreeSurfer and the Connectome Workbench.
GIFTI surface outputs are aligned to the FreeSurfer T1.mgz image, which may differ from the T1w space in some cases, to maintain compatibility with the FreeSurfer directory. Any measures sampled to the surface take into account any difference in these images.
See also the corresponding sMRIPrep workflow documentation.
Refinement of the brain mask¶
Typically, the original brain mask calculated with antsBrainExtraction.sh
will contain some inaccuracies, including small amounts of MR signal from
outside the brain.
Based on the tissue segmentation of FreeSurfer (located in mri/aseg.mgz),
and only when the Surface Processing step has been
executed, fMRIPrep replaces the brain mask with a refined one derived from the
aseg.mgz file.
Preprocessing of BOLD files is split into multiple sub-workflows described below.
BOLD reference image estimation¶
This workflow estimates a reference image for a BOLD series. If a single-band reference ("sbref") image associated with the BOLD series is available, then it is used directly. If not, a reference image is estimated from the BOLD series as follows: when T1-saturation effects ("dummy scans" or non-steady state volumes) are detected, they are averaged and used as the reference due to their superior tissue contrast. Otherwise, the median of a motion-corrected subset of volumes is used.
The reference image is then used to calculate a brain mask for the
BOLD signal using NiWorkflows.
Further, the reference is fed to the head-motion estimation
workflow and the registration workflow to map the
BOLD series into the T1w image of the same subject.
Using the previously estimated reference scan, FSL's
mcflirt is used to estimate head-motion.
As a result, one rigid-body transform with respect to
the reference image is written for each BOLD time-step.
Additionally, a list of 6 parameters (three rotations,
three translations) per time-step is written and fed to the confounds workflow.
For a more accurate estimation of head-motion, we calculate its parameters
before any time-domain filtering (i.e., slice-timing correction),
as recommended in [Power2017].
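The six parameters estimated here later feed the framewise displacement (FD) confound. Below is a minimal sketch of the Power et al. formulation, assuming rotations in radians and the conventional 50 mm head radius; fMRIPrep's actual computation lives in its confounds workflow.

```python
# Framewise displacement (Power et al.): sum of absolute frame-to-frame
# changes of the six rigid-body parameters, with rotations (radians)
# converted to arc length on an assumed 50 mm sphere.

HEAD_RADIUS_MM = 50.0

def framewise_displacement(params):
    """``params``: list of (rx, ry, rz, tx, ty, tz) tuples, one per time-step."""
    fd = [0.0]  # FD is undefined for the first volume; conventionally set to 0
    for prev, curr in zip(params, params[1:]):
        rot = sum(abs(c - p) for c, p in zip(curr[:3], prev[:3])) * HEAD_RADIUS_MM
        trans = sum(abs(c - p) for c, p in zip(curr[3:], prev[3:]))
        fd.append(rot + trans)
    return fd
```

For example, a 1 mm translation and a 0.02 rad rotation each contribute 1 mm of displacement.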
Slice time correction¶
If the SliceTiming field is available within the input dataset metadata,
this workflow performs slice time correction prior to other signal resampling
processes.
Slice time correction is performed using AFNI's 3dTshift.
All slices are realigned in time to the middle of each TR.
Slice time correction can be disabled with the --ignore slicetiming
command line argument.
If a BOLD series has fewer than
5 usable (steady-state) volumes, slice time correction will be disabled
for that run.
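The per-slice shift implied by realigning to the middle of the TR is simple to compute from the SliceTiming metadata. This sketch only derives the shifts; the actual temporal resampling is delegated to AFNI's 3dTshift.

```python
# Temporal shift applied per slice when realigning all slices to TR/2.
# Positive values mean the slice signal must be shifted later in time.

def slice_shifts(slice_timing, tr):
    """Seconds each slice must be shifted so all align at TR/2."""
    target = tr / 2.0
    return [target - onset for onset in slice_timing]
```

For a sequential acquisition with TR = 2 s, slices acquired at 0, 0.5, 1.0, and 1.5 s are shifted by 1.0, 0.5, 0.0, and -0.5 s respectively.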
Susceptibility Distortion Correction (SDC)¶
One of the major problems that affects EPI data is the spatial distortion caused by the inhomogeneity of the field inside the scanner. Please refer to Susceptibility Distortion Correction (SDC) for details on the available workflows.
See also the SDCFlows documentation.
Pre-processed BOLD in native space¶
A new preproc BOLD series is generated from the slice-timing corrected data, or from the original data if STC was not applied, in the original space. All volumes in the BOLD series are resampled in their native space by concatenating the mappings found in previous correction workflows (HMC, and SDC if executed) in a one-shot interpolation process. Interpolation uses a Lanczos kernel.
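The Lanczos windowed-sinc weight function can be written compactly. This sketches the kernel only (with a = 3 lobes, a common choice); the full 4D interpolation is delegated to the underlying tooling, not implemented here.

```python
import math

# Lanczos windowed-sinc kernel: L(x) = sinc(x) * sinc(x / a) for |x| < a,
# with the normalized sinc sin(pi x) / (pi x). Weights from this kernel are
# applied to neighboring samples during resampling.

def lanczos(x, a=3):
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

The kernel equals 1 at the sample point, vanishes at all other integer offsets within the window, and is zero outside |x| >= a, which is why it preserves the original samples while limiting ringing.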
EPI to T1w registration¶
The alignment between the reference EPI image
of each run and the reconstructed subject using the gray/white matter boundary
(FreeSurfer's ?h.white surfaces) is calculated by bbregister.
If FreeSurfer processing is disabled, FSL's
flirt is run with the
BBR cost function, using the
fast segmentation to establish the gray/white matter boundary.
After BBR is run, the resulting affine transform will be compared to the initial transform found by FLIRT.
Excessive deviation will result in rejecting the BBR refinement and accepting the original, affine registration.
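The accept-or-fall-back decision can be sketched as follows. Both the deviation measure (element-wise difference of the 4x4 affines) and the threshold are placeholders for illustration, not fMRIPrep's actual criterion.

```python
# Sketch of the fallback logic described above: accept the BBR refinement
# only if it stays close to the initial FLIRT transform; otherwise keep
# the initial affine registration. Deviation measure and threshold are
# hypothetical placeholders.

def choose_registration(init_affine, bbr_affine, max_deviation=15.0):
    """Return whichever 4x4 affine (nested lists) the policy selects."""
    deviation = sum(
        abs(b - a)
        for row_a, row_b in zip(init_affine, bbr_affine)
        for a, b in zip(row_a, row_b)
    )
    return bbr_affine if deviation <= max_deviation else init_affine
```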
Resampling BOLD runs onto standard spaces¶
This sub-workflow concatenates the transforms calculated upstream (see
Head-motion estimation, Susceptibility Distortion Correction (SDC) if
fieldmaps are available, EPI to T1w registration, and an anatomical-to-standard
transform from Preprocessing of structural MRI) to map the BOLD
series to the standard spaces given by the --output-spaces argument
(see Defining standard and nonstandard spaces where data will be resampled).
It also maps the T1w-based mask to each of those standard spaces.
Transforms are concatenated and applied all at once, with a single interpolation (Lanczos) step, so that as little information as possible is lost.
The output space grid can be specified using modifiers to the --output-spaces argument.
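The reason a one-shot application is possible is that rigid and affine mappings compose by matrix multiplication, so the whole chain collapses into a single transform applied with a single interpolation. A minimal sketch on 4x4 matrices as nested lists (nonlinear warps compose analogously but are not matrices):

```python
from functools import reduce

# Affine transforms in homogeneous coordinates compose by 4x4 matrix
# multiplication, so HMC, SDC, and anatomical transforms can be collapsed
# into one mapping before any voxel data are interpolated.

def matmul4(a, b):
    return [
        [sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)
    ]

def concat_transforms(transforms):
    """Compose a list of 4x4 affines into a single 4x4 affine."""
    return reduce(matmul4, transforms)
```

Composing a translation by (1, 0, 0) with a translation by (0, 2, 0) yields a single translation by (1, 2, 0).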
EPI sampled to FreeSurfer surfaces¶
If FreeSurfer processing is enabled, the motion-corrected functional series (after single shot resampling to T1w space) is sampled to the surface by averaging across the cortical ribbon. Specifically, at each vertex, the segment normal to the white-matter surface, extending to the pial surface, is sampled at 6 intervals and averaged.
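The per-vertex ribbon averaging described above can be sketched directly. The `sample_at` callable stands in for trilinear sampling of the BOLD volume (a hypothetical helper, not an fMRIPrep function), and whether the 6 sample points include the segment endpoints is an assumption of this sketch.

```python
# Sketch of ribbon sampling: at each vertex, sample the white->pial segment
# at 6 evenly spaced points and average the sampled values.

def ribbon_average(white_xyz, pial_xyz, sample_at, n_samples=6):
    """Average ``sample_at`` over points along the white->pial segment."""
    values = []
    for i in range(n_samples):
        t = i / (n_samples - 1)  # fraction 0..1 along the segment
        point = tuple(w + t * (p - w) for w, p in zip(white_xyz, pial_xyz))
        values.append(sample_at(point))
    return sum(values) / n_samples
```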
Surfaces are generated for the “subject native” surface, as well as transformed to the
fsaverage template space.
All surface outputs are in GIFTI format.
If CIFTI output is enabled, the motion-corrected functional timeseries (in T1w space) is first
sampled to the high resolution 164k vertex (per hemisphere)
fsaverage mesh. Following that,
the resampled timeseries is sampled to the HCP Pipelines'
fsLR mesh (with the left and
right hemispheres aligned) using Connectome Workbench's
-metric-resample to generate a
surface timeseries for each hemisphere. These surfaces are then combined with corresponding
volumetric timeseries to create a CIFTI2 file.
Given a motion-corrected fMRI series, a brain mask, the
mcflirt movement parameters, and a
segmentation, the discover_wf sub-workflow calculates potential
confounds per volume.
Calculated confounds include the mean global signal, mean tissue class signals,
tCompCor, aCompCor, framewise displacement, the 6 motion parameters, DVARS, spike regressors,
and, if the
--use-aroma flag is enabled, the noise components identified by ICA-AROMA
(those to be removed by the "aggressive" denoising strategy).
Particular details about ICA-AROMA are given below.
ICA-AROMA denoising is performed in
MNI152NLin6Asym space, which is automatically
added to the list of
--output-spaces if it was not already requested by the user.
The number of ICA-AROMA components depends on a dimensionality estimate made by MELODIC.
For datasets with a very short TR and a large number of timepoints, this may result
in an unusually high number of components.
By default, dimensionality is limited to a maximum of 200 components.
To override this upper limit, one may specify the number of components to be extracted
with the --aroma-melodic-dimensionality flag.
Further details on the implementation are given within the workflow generation documentation.
Note: non-aggressive AROMA denoising is a fundamentally different procedure from its "aggressive" counterpart and cannot be performed using only a set of noise regressors (a separate GLM with both noise and signal regressors needs to be used). Therefore, instead of regressors, fMRIPrep produces non-aggressively denoised 4D NIfTI files in MNI space.
Additionally, the MELODIC mix and noise component indices will
be generated, so non-aggressive denoising can be manually performed in the T1w space with
fsl_regfilt -i sub-<subject_label>_task-<task_id>_space-T1w_desc-preproc_bold.nii.gz \
      -f $(cat sub-<subject_label>_task-<task_id>_AROMAnoiseICs.csv) \
      -d sub-<subject_label>_task-<task_id>_desc-MELODIC_mixing.tsv \
      -o sub-<subject_label>_task-<task_id>_space-T1w_desc-AROMAnonaggr_bold.nii.gz
Note: the non-steady state volumes are removed before the determination of components in MELODIC.
The *MELODIC_mixing.tsv file may have zero-padded rows to account for the volumes not used in
MELODIC's estimation of components.
A visualization of the AROMA component classification is also included in the HTML reports.
T2*-driven echo combination¶
If multi-echo BOLD data are supplied, this workflow uses the tedana T2* workflow to generate an adaptive T2* map and an optimally weighted combination of all supplied single-echo time series. This optimally combined time series is then carried forward for all subsequent preprocessing steps.
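The optimal combination can be sketched on scalars. This follows the TE-weighted form used in the multi-echo literature (weights proportional to TE * exp(-TE / T2*)); it is an illustration of the weighting, not tedana's implementation.

```python
import math

# Optimally weighted echo combination for multi-echo BOLD: each echo's
# signal is weighted by TE * exp(-TE / T2*) and the weighted signals
# are averaged, favoring echoes near the local T2*.

def optimal_combination(signals, tes_ms, t2star_ms):
    """Combine per-echo signals given echo times and a T2* estimate (ms)."""
    weights = [te * math.exp(-te / t2star_ms) for te in tes_ms]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, signals)) / total
```

Because the weights are normalized, identical per-echo signals combine to the same value regardless of TE or T2*.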