QSIPrep not combining same-session dwi scans #413

Closed
pmd84 opened this issue May 17, 2022 · 11 comments

pmd84 commented May 17, 2022

Hi Matt,

First off - thanks for all the hard work you put into this program and for your responsiveness to user issues. Here is my issue:

Our lab collects DSI in 2 back-to-back runs of 53 and 52 volumes, for a total of 105 volumes containing 4 b0s. We also collect anatomical scans (T1) and a fieldmap. We want to combine these DSI scans into one concatenated scan during preprocessing. It is my understanding that QSIPrep should combine all dwi scans in the same session (ses-1). However, I have been unsuccessful at getting QSIPrep to concatenate the scans - all outputs are always broken into run-1 and run-2.

I have tried using the --combine-all-dwis flag, but that is no longer a valid flag. I have also tried --denoise-after-combining and/or --distortion-group-merge concat, but I get the following error, which I have been unable to resolve (see Error 1 below). When I add no flags related to combining files, as in the code below, QSIPrep does not combine my dwi scans even though they are in the same session.

Would it be best to concatenate my files prior to sending to QSIPrep? If so, what would be best practice?

Please let me know if you see where I am making an error here, or if you have any suggestions.

Thank you

Code

#!/bin/bash

# User inputs:
bids_root_dir=/home/user/QSIPrep_directory
subj=BR371
nthreads=4
mem=25 # GB
container=docker # docker or singularity

# Strip any non-digit characters, then convert memory from GB to MB
mem="${mem//[!0-9]/}"
mem_mb=$(( mem * 1000 - 5000 )) # reserve some memory as buffer space during preprocessing

export FS_LICENSE=/home/user/license.txt

# Run qsiprep
qsiprep-docker $bids_root_dir $bids_root_dir/derivatives \
    participant --participant-label $subj \
    -w $bids_root_dir/derivatives/work/qsiprep \
    --mem_mb $mem_mb --nthreads $nthreads \
    --output-resolution 2.0 --output-space T1w \
    --stop-on-first-crash \
    --hmc-transform Affine --hmc_model 3dSHORE \
    --fs-license-file /home/user/license.txt

File tree (passes the BIDS validator)

├── dataset_description.json
├── derivatives
│   └── work
│       └── qsiprep
├── participants.tsv
├── README
└── sub-BR371
    └── ses-1
        ├── anat
        │   ├── sub-BR371_ses-1_T1w.json
        │   └── sub-BR371_ses-1_T1w.nii.gz
        ├── dwi
        │   ├── sub-BR371_ses-1_dir-AP_run-1_dwi.bval
        │   ├── sub-BR371_ses-1_dir-AP_run-1_dwi.bvec
        │   ├── sub-BR371_ses-1_dir-AP_run-1_dwi.json
        │   ├── sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz
        │   ├── sub-BR371_ses-1_dir-AP_run-2_dwi.bval
        │   ├── sub-BR371_ses-1_dir-AP_run-2_dwi.bvec
        │   ├── sub-BR371_ses-1_dir-AP_run-2_dwi.json
        │   └── sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz
        └── fmap
            ├── sub-BR371_ses-1_dir-PA_epi.json
            └── sub-BR371_ses-1_dir-PA_epi.nii.gz

Note: the epi json file includes the following line to indicate that it should be used for both dwi runs:
"IntendedFor": ["ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz","ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz"]

Error 1 - when calling --distortion-group-merge concat
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node concat.

Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
runtime = self._run_interface(runtime)
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/confounds.py", line 53, in _run_interface
combined_out, confounds_list = _gather_confounds(
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/confounds.py", line 172, in _gather_confounds
raise Exception("Gradients don't match. File a bug report!")
Exception: Gradients don't match. File a bug report!

cookpa commented May 17, 2022

What version of qsiprep is this? The issue might be related to #403

pmd84 commented May 17, 2022

What version of qsiprep is this? The issue might be related to #403

When I check my docker images, I see that I have two versions: 0.15.3 and 0.15.1. I will remove these images and pull the latest version, which appears to be 0.15.1 on the Github page.

If I want dwi scans to be combined, is it mandatory to use the --distortion-group-merge flag?

Thanks!

cookpa commented May 17, 2022

I have had success with 0.14.3 and not specifying anything, just letting the default behavior (concat) happen.

There are some other open issues with 0.15.X, so I've not upgraded yet.

To your last question, I don't know the code as well as Matt, but I think you do need the option for your data. When using eddy, the scans get concatenated without any --distortion-group-merge because eddy requires concatenated input, but SHORELine defaults to keeping the scans separate, so you need the explicit option. I base this on:

#277 (comment)
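
For what it's worth, here is a minimal sketch of what that explicit option would look like added to the call from earlier in the thread (flag names as documented in the QSIPrep CLI; the paths and other options are simply carried over from the script above):

qsiprep-docker $bids_root_dir $bids_root_dir/derivatives \
    participant --participant-label $subj \
    -w $bids_root_dir/derivatives/work/qsiprep \
    --hmc-transform Affine --hmc_model 3dSHORE \
    --distortion-group-merge concat \
    --denoise-after-combining \
    --output-resolution 2.0 --output-space T1w \
    --fs-license-file /home/user/license.txt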

pmd84 commented May 17, 2022

Thanks, Phil! I will try running 0.14.3 to see if that works better.

pmd84 commented May 18, 2022

I ran this data on 0.14.3 and encountered the following error:

220518-01:02:24,570 nipype.workflow WARNING:
[Node] Error on "qsiprep_wf.single_subject_BR371_wf.dwi_preproc_ses_1_dir_AP_wf.confounds_wf.concat" (/scratch/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/confounds_wf/concat)
220518-01:02:24,571 nipype.workflow ERROR:
Node concat failed to run on host c118bb775f6d.
220518-01:02:24,574 nipype.workflow ERROR:
Saving crash info to /out/qsiprep/sub-BR371/log/20220517-202942_755182f9-e168-4547-bd12-497b98620f65/crash-20220518-010224-root-concat-06c3b4ae-082a-41b0-8caf-5302ad362036.txt

Crash Report
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node concat.

Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
runtime = self._run_interface(runtime)
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/confounds.py", line 53, in _run_interface
combined_out, confounds_list = _gather_confounds(
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/confounds.py", line 172, in _gather_confounds
raise Exception("Gradients don't match. File a bug report!")
Exception: Gradients don't match. File a bug report!

cookpa commented May 18, 2022

OK, I'm going to see if I can reproduce this once #417 is merged.

@mattcieslak
Collaborator

Hi @pmd84, are your run-1 and run-2 scanned with the same PhaseEncodingDirection and TotalReadoutTime? If not, they are in different distortion groups and will be processed differently. Also, you'd officially be the first person to try to concatenate SHORELine outputs from different PE directions! Currently SHORELine will only write out separate files, but you can combine these yourself afterwards with 3dTcat, fslmerge or mrcat.
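
For illustration, a minimal sketch of combining two preprocessed runs afterwards with FSL tools (the file names here are placeholders for the QSIPrep outputs, and the gradient-table handling assumes the standard FSL row-wise bval/bvec format):

# concatenate the image series along the volume axis
fslmerge -t sub-BR371_combined_dwi.nii.gz run-1_dwi.nii.gz run-2_dwi.nii.gz

# concatenate the gradient tables column-wise (bval: 1 row, bvec: 3 rows)
paste -d' ' run-1_dwi.bval run-2_dwi.bval > sub-BR371_combined_dwi.bval
paste -d' ' run-1_dwi.bvec run-2_dwi.bvec > sub-BR371_combined_dwi.bvec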

pmd84 commented May 18, 2022

Hi Matt!

Yes, my run-1 and run-2 have the same PhaseEncodingDirection (-j) and TotalReadoutTime (0.060189). They do, however, have a different number of volumes (53 and 52, respectively).

I wonder if there might be an issue using one fieldmap with 2 dwis, because I was able to get QSIPrep to run all the way through when I made a copy of the fieldmap and assigned one to each dwi (i.e. run-1_fmap with run-1_dwi and run-2_fmap with run-2_dwi), and I ended up with separate output files. Would this make sense as a potential issue?

Combining scans after preprocessing might be the best option if I can't get QSIPrep to combine them for me.

@mattcieslak
Collaborator

Aha! The fieldmap file can have "IntendedFor": ["dwi/run-1.nii.gz", "dwi/run-2.nii.gz"] with the actual file names. This will result in both runs being distortion corrected and possibly concatenated.
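
For context, a fieldmap sidecar with that field filled in for this dataset might look roughly like the following (the PhaseEncodingDirection and TotalReadoutTime values are illustrative, not taken from the actual epi json):

{
  "PhaseEncodingDirection": "j",
  "TotalReadoutTime": 0.060189,
  "IntendedFor": [
    "ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz",
    "ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz"
  ]
}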

pmd84 commented May 18, 2022

I did have that field listed in my fieldmap json file, and unfortunately I have not yet gotten it to work.

"IntendedFor": ["ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz","ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz"]

Oye-G commented May 31, 2022

It seems you are missing the other fieldmap. You should have a pair of fieldmaps, i.e. dir-AP and dir-PA. I am not sure if that is the culprit.
