From b8e82b104a720953a968660b1002d2842be26cdc Mon Sep 17 00:00:00 2001 From: Aamir Sohail Date: Thu, 23 Jan 2025 17:09:57 +0000 Subject: [PATCH] tidied workshops and README --- LICENSE | 2 +- README.md | 20 +++++++--------- docs/workshop1/workshop1-intro.md | 4 ++-- docs/workshop2/mri-data-formats.md | 6 ++--- docs/workshop2/visualizing-mri-data.md | 25 ++++++++++---------- docs/workshop2/workshop2-intro.md | 2 +- docs/workshop3/diffusion-intro.md | 8 +++---- docs/workshop3/diffusion-mri-analysis.md | 13 +++++----- docs/workshop3/workshop3-intro.md | 4 ++-- docs/workshop4/probabilistic-tractography.md | 12 ++++------ docs/workshop5/first-level-analysis.md | 11 +++++---- docs/workshop5/preprocessing.md | 15 ++++++------ docs/workshop6/running-containers.md | 14 +++++------ docs/workshop7/advanced-fmri-tools.md | 2 +- docs/workshop7/higher-level-analysis.md | 20 ++++++++-------- docs/workshop7/workshop7-intro.md | 6 ++--- docs/workshop8/functional-connectivity.md | 13 +++++----- 17 files changed, 87 insertions(+), 90 deletions(-) diff --git a/LICENSE b/LICENSE index 0ae69ce..936f591 100644 --- a/LICENSE +++ b/LICENSE @@ -23,4 +23,4 @@ You do not have to comply with the license for elements of the material in the p No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material. -© 2024 University of Birmingham \ No newline at end of file +© 2025 University of Birmingham \ No newline at end of file diff --git a/README.md b/README.md index 984440a..3dc2f21 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ ![GitHub stars](https://img.shields.io/github/stars/chbh-opensource/mri-on-bear-edu) ![GitHub repo size](https://img.shields.io/github/repo-size/chbh-opensource/mri-on-bear-edu) -> 🎯 **Purpose:** A freely available resource for the 'Magnetic Resonance Imaging in Cognitive Neuroscience' course at the Centre for Human Brain Health +> **Purpose:** A freely available resource for the 'Magnetic Resonance Imaging in Cognitive Neuroscience' course at the Centre for Human Brain Health Welcome to the MRI-on-BEAR website, a freely available resource created by researchers at the [Centre for Human Brain Health](https://www.birmingham.ac.uk/research/centre-for-human-brain-health), University of Birmingham. The website is made for students on the 'Magnetic Resonance Imaging in Cognitive Neuroscience' course, but may also be useful to external students and researchers. However, please BEAR in mind that the course materials were designed to run on computing resources at the University of Birmingham! 
@@ -41,25 +41,23 @@ Whilst not mandatory, additional resources related to the course can also be fou If you encounter any issues or have questions please contact the following: -### 📚 Course-related Issues +### Course-related Issues - **Dr Magda Chechlacz** (Module Lead) - - 📧 [m.chechlacz@bham.ac.uk](mailto:m.chechlacz@bham.ac.uk) + - [m.chechlacz@bham.ac.uk](mailto:m.chechlacz@bham.ac.uk) -### 💻 IT/Computing Support +### IT/Computing Support - **Charnjit Sidhu** (Lead Computing Officer, CHBH) - - 📧 [c.sidhu@bham.ac.uk](mailto:c.sidhu@bham.ac.uk) + - [c.sidhu@bham.ac.uk](mailto:c.sidhu@bham.ac.uk) -### 🌐 Website/GitHub enquiries +### Website/GitHub enquiries - **Aamir Sohail** (Module Teaching Assistant) - - 📧 [axs2210@bham.ac.uk](mailto:axs2210@bham.ac.uk) + - [axs2210@bham.ac.uk](mailto:axs2210@bham.ac.uk) -> ⏰ Please contact us during working hours (9am-5pm)! - -## 🛠️ Contributing (for CHBH staff) +## Contributing (for CHBH staff) If you teach on the course, or are a staff member at the CHBH and could like to contribute to the website, please follow the instructions below. -### 🔧 Development Setup +### Development Setup You will firstly need to re-create the website locally. The website is built using [MkDocs](https://www.mkdocs.org/), which is nice and easy to work with. After cloning the repository, to install all the required `mkdocs` Python packages, use the provided `requirements.txt` file within the root of this repository. The recommendation for development is to do this inside a dedicated virtual environment. diff --git a/docs/workshop1/workshop1-intro.md b/docs/workshop1/workshop1-intro.md index bee4ef1..4f37b78 100644 --- a/docs/workshop1/workshop1-intro.md +++ b/docs/workshop1/workshop1-intro.md @@ -8,10 +8,10 @@ Linux OS is similar to other operating systems such as Mac OS and Windows, and c !!! success "Overview of Workshop 1" Topics for this workshop include: - - Introduction to the BEAR Portal + - An introduction to the BEAR Portal - Using the BlueBEAR Graphical User Interface (GUI) environment - Navigating files and directories in the BEAR Portal - - Introduction to Linux, and using basic Linux commands in the Terminal + - An introduction to Linux, and using basic Linux commands in the Terminal !!! danger "Pre-requisites for the workshop" Please ensure that you have completed the '[Setting Up](https://chbh-opensource.github.io/mri-on-bear-edu/setting-up/)' section of this course, as you will require access to the BEAR Portal for this workshop. diff --git a/docs/workshop2/mri-data-formats.md b/docs/workshop2/mri-data-formats.md index 83ee8df..fd3870f 100644 --- a/docs/workshop2/mri-data-formats.md +++ b/docs/workshop2/mri-data-formats.md @@ -108,9 +108,9 @@ cd 20191008#C4E7_dicom ls ``` -You should see a list of 7 sub-directories. Each top level DICOM directory contains sub-directories with each individual scan sequence. The structure of DICOM directories can vary depending on how it is stored/exported on different systems. The 7 sub-directories here contain data for four localizer scans/planning scans, two fMRI scans and one structural scan. Each sub-directory contains several `.dcm` files. +You should see a list of 7 sub-directories. Each top level DICOM directory contains sub-directories with each individual scan sequence. The structure of DICOM directories can vary depending on how it is stored/exported on different systems. The 7 sub-directories here contain data for four localizer scans/planning scans, two fMRI scans and one structural scan. 
Each sub-directory also contains several `.dcm` files. -There are several software packages which can be used to convert DICOM to NIfTI, but `dcm2niix` is the most widely used. It is available as standalone software, or part of [MRIcroGL](https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage) a popular tool for brain visualization similar to FSLeyes. `dcm2niix` is available on BlueBEAR, but to use it you need to load it first using the terminal. +There are several software packages which can be used to convert DICOM to NIfTI, but `dcm2niix` is the most widely used. It is available as standalone software, or part of [MRIcroGL](https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage), a popular tool for brain visualization similar to FSLeyes. `dcm2niix` is available on BlueBEAR, but to use it you need to load it first using the terminal. To do this, in the terminal type: @@ -129,7 +129,7 @@ If you now check the `T1_vol_v1_5` sub-directory, you should find there a single !!! example "Converting more MRI data" Now try to convert to NIfTI the `.dcm` files from the scanning session `20221206#C547` with 3 DICOM sub-directories, the two diffusion scans `diff_AP` and `diff_PA` and one structural scan MPRAGE. - To do this, you will first need to change current directory, unzip, change directory again and then run the `dcm2niix` command as above. + To do this, you will first need to change the current directory, unzip, change the directory again and then run the `dcm2niix` command as above. If you have done it correctly you will find `.nii` and `.json` files generated in the structural sub-directories, and in the diffusion sub-directories you will also find `.bval` and `.bvec` files. diff --git a/docs/workshop2/visualizing-mri-data.md b/docs/workshop2/visualizing-mri-data.md index b6bd20a..5de106e 100644 --- a/docs/workshop2/visualizing-mri-data.md +++ b/docs/workshop2/visualizing-mri-data.md @@ -19,7 +19,7 @@ To open FSLeyes, type: `module load FSL/6.0.5.1-foss-2021a-fslpython` -There are different version of FSL on BlueBEAR, however this is the one which you need to use it together with FSLeyes. +There are different versions of FSL on BlueBEAR, however this is the one which you need to use it together with FSLeyes. Wait for FSL to load and then type: @@ -41,7 +41,12 @@ You should then see the setup below, which is the default FSLeyes viewer without You can now load/open an image to view. Click 'File' → 'Add from file' (and then select the file in your directory e.g., `rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii`). -You can also type directly in the terminal `fsleyes file.nii.gz` where you replace `file.nii.gz` with the name of the actual file you want to open. +You can also type directly in the terminal: + +`fsleyes file.nii.gz` + +where you replace `file.nii.gz` with the name of the actual file you want to open. + However, you will need to include the full path to the file if you are not in the same directory when you open the terminal window e.g. `fsleyes rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii` You should now see a T1 scan loaded in ortho view with three canvases corresponding to the sagittal, coronal, and axial planes. @@ -141,7 +146,7 @@ Now select from the menu 'Settings' → 'Ortho View 1' and tick the box for 'Atl You should now see the 'Atlases' panel open as shown below.

- FSLeyes atlas GUI + FSLeyes atlas GUI

The 'Atlases' panel is organized into three sections: @@ -231,8 +236,8 @@ Click the (Show/Hide) link after the Left Amygdala; the amygdala overlay will di If unsure check your results with someone else, or ask for help! -Make sure all overlays are closed (but keep the `MNI152_T1_2mm.nii.gz` open) before moving to the next section. - +!!! warning "Before continuing" + Make sure all overlays are closed (but keep the `MNI152_T1_2mm.nii.gz` open) before moving to the next section. ## Using atlas tools to find a brain structure @@ -251,7 +256,7 @@ Now click on the '+' button next to the tick box. This will centre the viewing c

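The same kind of lookup can also be done from the terminal with FSL's `atlasquery` tool, which can be handy if you want to script things later. A minimal sketch (the coordinate below is just an arbitrary MNI location in mm, not one taken from the workshop):

```bash
# List the atlas names that FSL knows about
atlasquery --dumpatlases

# Report which structures (and with what probability) lie under an MNI coordinate
atlasquery -a "Harvard-Oxford Cortical Structural Atlas" -c 48,-20,10
```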
!!! example "Exercise: Atlas visualization" - Now try this for yourself: + Now try the following exercises for yourself: - Remove the Heschl's Gyrus visualization. You can tick it off in the 'Atlases' window, or select Heschl's Gyrus in the 'Overlay list' window, and then either toggle its visibility off (click the eye icon) or remove it ('Menu' → 'Overlay' → 'Remove'). - Visualize the Lingual Gyrus and Left Hippocampus. To avoid confusion, change the colour of the Lingual Gyrus visualization from red/yellow to green and Left Hippocampus to blue. @@ -271,16 +276,12 @@ This time, in the left panel listing different atlases, tick on the option for o Harvard Oxford

-
- Now you should see all of the areas covered by the Harvard-Oxford cortical atlas shown on the standard brain. You can click around with the cursor, the labels for the different areas can be seen in the bottom right panel.

Harvard Oxford Atlas

-
- In addition to atlases covering various grey matter structures, there are also two white matter atlases: the JHU ICBM-DTI-81 white-matter labels atlas & JHU white-matter tractography atlas. If you tick (select) these atlases as per previous instructions (hint using the 'Atlas search' tab), you will see a list of all included white matter tracts (pathways) as shown below: @@ -319,7 +320,7 @@ Wait for FSLeyes to load, then: You should now see the MFG overlay in the overlay list (as below) and have a `MFG.nii.gz` file in the `ROImasks` directory. You can check this by typing `ls` in the terminal.

- MFG ROI + MFG ROI

We will now create a white matter mask. Here are the steps: @@ -331,7 +332,7 @@ We will now create a white matter mask. Here are the steps: You should now see the FM overlay in the overlay list (as below) and also have a `FM.nii.gz` file in the `ROImasks` directory.

- MFG ROI + MFG ROI

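If you would like to double-check the two mask files from the terminal before carrying on, a quick sanity check might look like this (a minimal sketch, using the file names created in the steps above):

```bash
# Assuming you are in the directory that contains ROImasks
cd ROImasks
fslinfo MFG.nii.gz     # image dimensions, voxel size and data type
fslstats FM.nii.gz -R  # minimum and maximum intensity (probability values)
```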
You now have two “probabilistic ROI masks”. To use these masks for various analyses, you need to first binarize these images. diff --git a/docs/workshop2/workshop2-intro.md b/docs/workshop2/workshop2-intro.md index 93cfd17..a8775ad 100644 --- a/docs/workshop2/workshop2-intro.md +++ b/docs/workshop2/workshop2-intro.md @@ -8,7 +8,7 @@ In this workshop we will explore, MRI image fundamentals, MRI data formats, data - The fundamentals of MRI data, including file types and formats - Converting between different MRI data files (e.g., DICOM to NIFTI) - - Introduction to FSLeyes and basic navigation + - An introduction to FSLeyes and basic navigation - Loading atlases and creating regions-of-interest (ROIs) - Binarizing and thresholding ROIs diff --git a/docs/workshop3/diffusion-intro.md b/docs/workshop3/diffusion-intro.md index 81667e8..caed701 100644 --- a/docs/workshop3/diffusion-intro.md +++ b/docs/workshop3/diffusion-intro.md @@ -90,7 +90,7 @@ Let's view the content of the `bvals` and `bvecs` files by using the `cat` comma `cat blip_down.bval`

- Cat bval + Cat bval

The first number is 0. This confirms that the first volume (volume 0) is indeed a non-diffusion-weighted image, and that the third volume (volume 2) is a diffusion-weighted volume with b=1500. @@ -118,7 +118,7 @@ All types of distortions need correction during pre-processing steps in diffusio The processing with these two tools is time- and compute-intensive. Therefore we will not run the distortion correction steps in the workshop, but will instead explore some of the principles behind them.

- Distortion Types + Distortion Types

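For reference only (not something to run in the workshop), susceptibility and eddy-current distortions are typically corrected with FSL's `topup` and `eddy`. Below is a minimal sketch of what such calls can look like; every file name here is an assumption for illustration, not part of the workshop data:

```bash
# Merge a pair of b=0 volumes acquired with opposite phase-encoding directions (AP/PA)
fslmerge -t AP_PA_b0 b0_blip_up b0_blip_down

# Estimate the susceptibility-induced off-resonance field
topup --imain=AP_PA_b0 --datain=acqparams.txt --config=b02b0.cnf \
      --out=topup_results --iout=hifi_b0

# Correct for eddy currents and subject movement, feeding in the topup results
eddy --imain=data --mask=nodif_brain_mask --acqp=acqparams.txt \
     --index=index.txt --bvecs=bvecs --bvals=bvals \
     --topup=topup_results --out=eddy_corrected_data
```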
For the remaining analyses, you are given distortion-corrected data with which to conduct diffusion tensor fitting and probabilistic tractography. @@ -203,11 +203,11 @@ and then open the 'BET brain extraction tool' by clicking on it in the GUI. In either case, once BET is opened, click on advanced options and make sure the first two outputs ('brain extracted image' and 'binary brain mask') are selected, as below. Select the previously created `nodif.nii.gz` as the 'Input' image and change the 'Fractional Intensity Threshold' to 0.4. Then click the 'Go' button.

- BET GUI + BET GUI

- BET GUI Detailed + BET GUI Detailed

!!! tip "Completing BET in the terminal" diff --git a/docs/workshop3/diffusion-mri-analysis.md b/docs/workshop3/diffusion-mri-analysis.md index 15c7fee..5549b4e 100644 --- a/docs/workshop3/diffusion-mri-analysis.md +++ b/docs/workshop3/diffusion-mri-analysis.md @@ -27,13 +27,13 @@ To run the diffusion tensor fit, you need 4 files as specified below: In the FSL GUI, first click on 'FDT diffusion', and in the FDT window, select 'DTIFIT Reconstruct diffusion tensors'. Now choose as 'Input directory' the `data` subdirectory located inside `p01` and click 'Go'.

- DTIfit GUI + DTIfit GUI

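For reference, the same tensor fit can also be started from the terminal rather than the GUI, which is useful once you start scripting. A hedged sketch, assuming the standard FDT file names inside `p01/data` (`data`, `nodif_brain_mask`, `bvecs` and `bvals`):

```bash
dtifit \
  --data=p01/data/data \
  --mask=p01/data/nodif_brain_mask \
  --bvecs=p01/data/bvecs \
  --bvals=p01/data/bvals \
  --out=p01/data/dti
```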
You should see progress messages in the terminal; once you see 'Done!', you are ready to view the results.

- DTIfit Done + DTIfit Done

Click 'OK' when the message appears. @@ -72,6 +72,7 @@ This would be useful if you want to write a script; we will look at it in the la Again, please do NOT run it now but try it in your own time with data in the `p02` folder. The results of running DTIfit are several output files as specified below. We will look closer at the highlighted files in bold. + All of these files should be located in the `data` subdirectory, i.e. within `/rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01/data/`. | Output File | Description | @@ -127,7 +128,7 @@ The steps for Tract-Based Spatial Statistics are: 5. Each participant’s aligned FA map is then projected back onto the skeleton prior to statistical analysis 6. Hypothesis testing (voxelwise statistics) -To save time, some of the pre-processing stages including generating FA maps (tensor fitting), preparing data for analysis, registration of FA maps and skeletonization have been run for you and all outputs are included in the `data` folder you have copied at the start of this workshop. +To save time, some of the pre-processing stages including generating FA maps (tensor fitting), preparing data for analysis, registration of FA maps and skeletonization have been run for you and all outputs are included in the `data` folder you have copied at the start of this workshop.

Tract-Based Spatial Statistics analysis pipeline

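For reference, in a standard FSL TBSS pipeline the stages that have been run for you correspond to commands along the following lines. This is a hedged sketch; do not run it in the workshop, as the outputs are already provided:

```bash
# Run from a directory containing one FA image per participant (e.g. *_FA.nii.gz)
tbss_1_preproc *.nii.gz    # tidy the FA images and move them into an FA/ subdirectory
tbss_2_reg -T              # nonlinear registration of all FA images to the FMRIB58_FA target
tbss_3_postreg -S          # create the mean FA image and the mean FA skeleton
tbss_4_prestats 0.2        # threshold the skeleton at FA = 0.2 and project each FA map onto it
```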
@@ -187,7 +188,7 @@ cd FA imglob *_FA.* ``` -You should see data from the 5 older (o1-o5) followed by data fromthe 10 (y1-y10) younger participants. +You should see data from the 5 older (o1-o5) followed by data from the 10 (y1-y10) younger participants. Next navigate back to the `stats` folder and open FSL: @@ -200,7 +201,7 @@ fsl & Click on 'Miscellaneous tools' and select 'GLM Setup' to open the GLM GUI.

- TBSS GUI + TBSS GUI

In the workshop we will set up a simple group analysis (a two sample unpaired t-test). @@ -284,7 +285,7 @@ You should see the same results as below: Are the results as expected? Why/why not? !!! example "Reviewing the tstat1 image" - Next review the `tbss_tfce_corrp_tstat1.nii.gz` + Next review the image `tbss_tfce_corrp_tstat1.nii.gz`. !!! info "Further information on TBSS" More information on TBSS, can be found on the 'TBSS' section of the FSL Wiki: [https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/tbss](https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/tbss) \ No newline at end of file diff --git a/docs/workshop3/workshop3-intro.md b/docs/workshop3/workshop3-intro.md index 6d87734..bb60d77 100644 --- a/docs/workshop3/workshop3-intro.md +++ b/docs/workshop3/workshop3-intro.md @@ -11,8 +11,8 @@ By the end of the two workshops, you should be able to understand the principles - Visualizing diffusion data using FSLeyes (before and after distortion correction) - Using FSL's Brain Extraction Tool (BET) to create a brain mask - - Understand and perform diffusion tensor fitting (DTIfit) to generate key diffusion metrics like FA (Fractional Anisotropy) and MD (Mean Diffusivity) - - Learn to conduct Tract-Based Spatial Statistics (TBSS) for group-level comparisons of diffusion data + - Performing diffusion tensor fitting (DTIfit) to generate key diffusion metrics like FA (Fractional Anisotropy) and MD (Mean Diffusivity) + - Learning to conduct Tract-Based Spatial Statistics (TBSS) for group-level comparisons of diffusion data We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or assigned on Canvas readings, please check there, or if you are still unclear, feel free to ask. diff --git a/docs/workshop4/probabilistic-tractography.md b/docs/workshop4/probabilistic-tractography.md index 455994c..3038ed4 100644 --- a/docs/workshop4/probabilistic-tractography.md +++ b/docs/workshop4/probabilistic-tractography.md @@ -55,7 +55,7 @@ Then in FSLeyes: This will likely show that in this case the default brain extraction was good. The reason behind such a good brain extraction with default options is a small FOV and data from a young healthy adult. This is not always the case e.g., when we have a large FOV or data from older participants. !!! note "More brain extraction to come? You BET!" - In the next workshop (Workshop 5) we will explore different BET [options] and how to troubleshoot brain extraction. + In the next workshop (Workshop 5) we will explore different BET options and how to troubleshoot brain extraction. ## Preparing our data with BEDPOSTX @@ -68,7 +68,7 @@ To run it, you would need to open FSL GUI, click on FDT diffusion and from drop

BEDPOSTX Shell
- In case of the data being used for this workshop with a single b-value, we need to specify the single-shell model. + In case of the data being used for this workshop with a single b-value, we need to specify the single-shell model.

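If you do run it yourself later, BEDPOSTX is a single command applied to a directory containing `data`, `bvecs`, `bvals` and `nodif_brain_mask`. A hedged sketch is below; `subject_dir` is a placeholder, and the exact option for selecting the single-shell model is worth confirming with `bedpostx --help` on BlueBEAR:

```bash
# Expect this to take several hours, even on BlueBEAR.
# The -model value is an assumption: 1 is intended to select the single-shell ('sticks') model.
bedpostx subject_dir -model 1
```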
After the workshop, in your own time, you could run it using the provided data (see Tractography Exercises section at the end of workshop notes). @@ -87,8 +87,6 @@ Typically, registration will be run between three spaces: This step has been again run for you. To run it, you would need to open FSL GUI, click on 'FDT diffusion' and from the drop down menu select 'Registration'. The main structural image would be your ”skull-stripped” T1 (`T1_brain`) and non-betted structural image would be T1. Plus you need to select `data.bedpostX` as the 'BEDPOSTX directory'. -
-

BEDPOSTX

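Behind the GUI, the 'Registration' tab typically drives FLIRT (with FNIRT for the nonlinear step to standard space) and writes the resulting transforms into the `data.bedpostX/xfms/` folder. As a rough, hedged illustration of the kind of call involved (file names assumed, not taken from the workshop data):

```bash
# Affine (6 DOF) registration of the b=0 diffusion image to the skull-stripped T1
flirt -in nodif_brain -ref T1_brain -omat diff2str.mat -dof 6
```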
@@ -160,13 +158,13 @@ Close FDT toolbox and then open it again from the terminal to make sure you don In the FDT Toolbox window - before you select your input in the 'Data' tab - go to the 'Options' tab (as below) and reduce the number of samples to 500 under 'Options'. You would normally run 5000 (default) but reducing this number will speed up processing and is useful for exploratory analyses.

- PROBTRACKX Options + PROBTRACKX Options

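The settings in this window map onto command-line options of `probtrackx2`, which is the program the GUI calls under the hood. A hedged sketch of an equivalent terminal call is below; the seed file name is hypothetical, and 500 matches the reduced number of samples chosen above:

```bash
# Seed-based probabilistic tractography with 500 samples per seed voxel
probtrackx2 \
  -s data.bedpostX/merged \
  -m data.bedpostX/nodif_brain_mask \
  -x seed_mask.nii.gz \
  -P 500 \
  --loopcheck --opd \
  --dir=probtrackx_output
```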
Now, going back to the 'Data' tab (as below), do the following:

- PROBTRACKX Data + PROBTRACKX Data

1. Select `data.bedpostX` as 'BEDPOSTX directory' @@ -183,7 +181,7 @@ Now going back to the 'Data' tab (as below) do the following: It will take significantly longer this time to run the tractography in standard space. However, once it has finished, you will see the window 'Done!/OK'. Before proceeding, click 'OK'. -A new subdirectory will be created with the chosen output name `MotorThalamusM1`. Check the contents of this subdirectory. It contains slightly different files compared to the previous tractography output. The main output, the streamline density map is called `fdt_paths.nii.gz`. There is also a file called `waytotal` that contains the total number of valid streamlines runs. +A new subdirectory will be created with the chosen output name `MotorThalamusM1`. Check the contents of this subdirectory. It contains slightly different files compared to the previous tractography output. The main output, the streamline density map is called `fdt_paths.nii.gz`. There is also a file called `waytotal` that contains the total number of valid streamlines runs. We will now explore the results from both tractography runs. First close FDT and your terminal as we need FSLeyes, which cannot be loaded together with the current version of FSL. diff --git a/docs/workshop5/first-level-analysis.md b/docs/workshop5/first-level-analysis.md index 78fee41..59cc124 100644 --- a/docs/workshop5/first-level-analysis.md +++ b/docs/workshop5/first-level-analysis.md @@ -176,7 +176,7 @@ FEAT has a built-in progress watcher, the 'FEAT Report', which you can open in a To do that, you need to navigate inside the `p01_s1.feat` folder from the BlueBEAR Portal as below and from there select the `report.html` file, and either open it in a new tab or in a new window.

- FEAT Directory + FEAT Directory

Watch the webpage for progress. Refresh the page to update and click the links (the tabs near the top of the page) to see the results when available (the 'STILL RUNNING' message will disappear when the analysis has finished). @@ -223,7 +223,7 @@ Let's have a look and see the effects that other parameters have on the data. To - High pass filter: set to 30sec (i.e. 50% less than OFF+ON time period). - Hit 'Go' -Note that each time you rerun Feat, it creates a new folder with a '+' sign in the name. So you will have folders rather messily named 'p01_s1.feat', 'p01_s1+.feat', 'p01_s1++.feat', and so on. This is rather wasteful of of your precious quota space, so you should delete unnecessary ones after looking at them. +Note that each time you rerun FEAT, it creates a new folder with a '+' sign in the name. So you will have folders rather messily named `p01_s1.feat`, `p01_s1+.feat`, `p01_s1++.feat`, and so on. This is rather wasteful of of your precious quota space, so you should delete unnecessary ones after looking at them. For example, if you wanted to remove all files and directories that end with '+' for participant 1: @@ -262,10 +262,11 @@ Now change the input 4D file, the output directory name, and the registration de There are therefore 29 separate analyses that need to be done. - - Analyze each of these 29 fMRI runs independently and put the output of each one into a separate, clearly labelled directory as suggested above. - - Try and get all these done before the next fMRI workshop in week 10 on higher level fMRI analysis as you will need this processed data for that workshop. You have two weeks to complete this task. + Analyze each of these 29 fMRI runs independently and put the output of each one into a separate, clearly labelled directory as suggested above. + + Try and get all these done before the next fMRI workshop in week 10 on higher level fMRI analysis as you will need this processed data for that workshop. You have two weeks to complete this task. !!! tip "Scripting your analysis" - It will seem laborious to re-write and re-run 29 separate FEAT analyses; a much quicker way is by scripting our analyses using `bash`. If you would like, try scripting your analyses! Contact one of the course TA's or convenors if you are stuck! + It will seem laborious to re-write and re-run 29 separate FEAT analyses; a much quicker way is by scripting our analyses using `bash`. If you would like, try scripting your analyses! We will learn more about `bash` scripting in [the next workshop](https://chbh-opensource.github.io/mri-on-bear-edu/workshop6/workshop6-intro/). As always, help and further information is also available on the relevant section of the [FSL Wiki](https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/index). diff --git a/docs/workshop5/preprocessing.md b/docs/workshop5/preprocessing.md index 4d7adf0..e8a0f82 100644 --- a/docs/workshop5/preprocessing.md +++ b/docs/workshop5/preprocessing.md @@ -15,7 +15,7 @@ A few extra seconds of “off” (6-8s) were later added at the very end of the Design

-Normally in any experiment it is very important to keep all the protocol parameters fixed when acquiring the neuroimaging data. +Normally in any experiment it is very important to keep all the protocol parameters fixed when acquiring the neuroimaging data. However, in this case we can see different parameters being used which reflect slightly different “best choices” made by different operators over the yearly demonstration sessions: - The repetition time and voxel size were the same for all scans: (TR = 2000 ms, voxel size 2.5 x 2.5 x 2.5mm). @@ -47,8 +47,6 @@ You now need to create a copy of the reconstructed fMRI data to be analysed duri We will now look at how to ”skull-strip” the T1 image (remove the skull and non-brain areas), as this step is needed as part of the registration step in the fMRI analysis pipeline. We will do this using FSL's BET on the command line. As you should know from previous workshops the basic command-line version of BET is: -(do not type this command, this is just a reminder) - `bet [options]` where: @@ -62,7 +60,9 @@ where: If the fMRI data has finished copying over, you can use the same terminal which you have previously opened. If not, keep that terminal open and instead open a new terminal, navigating inside your MRICN project folder (i.e., `/rds/projects/c/chechlmy-chbh-mricn/xxx`) -Next you need to copy the data for this part of the workshop. As there is only 1 file, it will not take a long time. Type: +Next you need to copy the data for this part of the workshop. As there is only 1 file, it will not take a long time. + +Type: `cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/BET/ .` @@ -186,11 +186,10 @@ immv fs005a001 fmri2 ``` !!! example "Renaming files" - Notes: - - - The 'immv' command is a special FSL Linux command that works just like the standard Linux `mv` command except that it automatically takes care of the filename extensions. It saves from having to write out: + The `immv` command is a special FSL Linux command that works just like the standard Linux `mv` command except that it automatically takes care of the filename extensions. It saves from having to write out: `mv fs004a001.nii.gz fmri1.nii.gz` which would be the standard Linux command to rename a file. - - You can of course name these files to anything you want. In principle, you could call the fMRI scan `run1` or `fmri_run1` or `epi1` or whatever. The important thing is that you need to be extremely consistent in the naming of files for the different participants. + + You can of course name these files to anything you want. In principle, you could call the fMRI scan `run1` or `fmri_run1` or `epi1` or whatever. The important thing is that you need to be extremely consistent in the naming of files for the different participants. For this workshop we will use the naming convention above and call the files `fmri1.nii.gz` and `fmri2.nii.gz`. diff --git a/docs/workshop6/running-containers.md b/docs/workshop6/running-containers.md index 77f09b6..ed23a03 100644 --- a/docs/workshop6/running-containers.md +++ b/docs/workshop6/running-containers.md @@ -25,7 +25,7 @@ A script can be very simple, containing just commands that you already know how You can start a new script by clicking on “New File” and naming it for example “`my_script.sh`” and next clicking on “Edit” to start typing commands you want to use. You can also use “Edit” to edit existing scripts.

- Edit Scripts + Edit Scripts

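As an illustration of how simple a first script can be, here is a hedged sketch of a one-subject version (the workshop's own `brainvols.sh` may differ, and the image name below is an assumption):

```bash
#!/bin/bash
# Print the brain volume (number of non-zero voxels and volume in mm^3)
# of one brain-extracted T1 image
fslstats BET/sub1_T1_brain -V
```

The `-V` option of `fslstats` prints two numbers: the count of non-zero voxels and the corresponding volume in mm^3.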
!!! tip "The shebang" @@ -60,12 +60,12 @@ Whether you are editing or creating a new script, you need to save it. After sav Next you need to make the script executable (as below) and remember the script will run in the current directory (`pwd`). You also need to make the script executable if you copied a script from someone else. -To make your script executable type in terminal: `chmod a+x brainvols.sh` +To make your script executable type in your terminal: `chmod a+x brainvols.sh` !!! warning "Running the script without permissions" If you try to run the script without making it executable, you will get a permission error. -To run the script, type in your terminal `./brainvols.sh` +To run the script, type in your terminal: `./brainvols.sh` You can now tell which participant has the biggest brain. @@ -75,7 +75,7 @@ The previous script hopefully worked. But it is not very elegant and is not much - Also, it would be helpful to print out some text so that we know which line of output on the screen relates to which participant. -Bash has a `for … do ... done` construct to do the former and an `echo` command to do the latter. So, let's use these to create an improved script with a loop. This is illustrated in the example `brainvols2.sh`: +Bash has a `for/do/done` construct to do the former and an `echo` command to do the latter. So, let's use these to create an improved script with a loop. This is illustrated in the example `brainvols2.sh`: ```bash #!/bin/bash @@ -88,7 +88,7 @@ do done ``` -Both examples above assume that you have already run BET (brain extraction) on T1 scans. But of course, you could also automate the process of brain extraction and complete both tasks, i.e., running bet and calculate volume, using a single script. This is illustrated in the example `bet_brainvols.sh`: +Both examples above assume that you have already run BET (brain extraction) on T1 scans. But of course, you could also automate the process of brain extraction and complete both tasks, i.e., running bet and calculate volume, using a single script. This is illustrated in the example `bet_brainvols.sh`: ```bash #!/bin/bash @@ -126,7 +126,7 @@ Some of the most powerful scripting comes when manipulating FEAT model files. Try as many of the suggestions below as you have time for. I suggest trying 1 or 2 at the start of the session and returning to these later if you have time, or outside the workshop. (You can load up an existing model by running FEAT and using the 'Load' button. The model file will be called `design.fsf` and can be found in the `.feat` or `.gfeat` directory of an existing analysis folder.) +Try as many of the suggestions below as you have time for. Try 1 or 2 at the start of the session and returning to these later if you have time, or outside the workshop. (You can load up an existing model by running FEAT and using the 'Load' button. The model file will be called `design.fsf` and can be found in the `.feat` or `.gfeat` directory of an existing analysis folder.) 1. Ordinary Least Squares (OLS) vs FLAME: Repeat the third level group analyses from FSL Workshop 5, but on the 'Stats' tab select 'Mixed Effects: Simple OLS' diff --git a/docs/workshop7/higher-level-analysis.md b/docs/workshop7/higher-level-analysis.md index 16daefa..1da2d32 100644 --- a/docs/workshop7/higher-level-analysis.md +++ b/docs/workshop7/higher-level-analysis.md @@ -13,7 +13,7 @@ If you have correctly followed the instructions from the previous workshop, you (where XXX = your particular login (ADF) username). 
-For participants 1 and 2 you should have only one FEAT directory. For participants 3-4 and 6-15 you should have 2 FEAT directories. For participant 5 you should have 3 FEAT directories. You should therefore have 29 complete first level feat directories. +For participants 1 and 2 you should have only one FEAT directory. For participants 3-4 and 6-15 you should have 2 FEAT directories. For participant 5 you should have 3 FEAT directories. You should therefore have 29 complete first level FEAT directories. If you haven’t done so already, please check that the output of each and all of these first level analyses looks ok either through the FEAT Report or through FSLeyes. If you would like to use the FEAT Report, select the report (called `report.html`) from within each FEAT directory from your analysis, e.g.,: @@ -67,7 +67,7 @@ Now fill out the tabs as below:

Stats

-Choose 'Fixed effects' from the pull down menu at the top. +- Choose 'Fixed effects' from the pull down menu at the top. It is necessary to select this now in order to reduce the number of inputs on the 'Data' tab to be only 2 (the default for all other higher level model types is a minimum no of 3 inputs). Note that choosing 'Fixed effects' will ignore cross scan variance, which is fine to do here because these are scans from the same person at the same time. @@ -137,7 +137,7 @@ In FSL, the procedure for setting up an analysis across participants is very sim In this demonstration experiment, 12 participants did the scan twice, 1 was scanned three times, and 2 did the scan only once. (Note that it should be rather obvious that this is not an ideal design for a real experiment). In our case, we have averaged within participants and now we will combine these second level analyses with the first level analyses from those participants who were only scanned once. -Close FEAT if you still have it open. Then open it again by typing `Feat &` +Close FEAT if you still have it open. Then open it again by typing `Feat &`. !!! note "Don't close the terminal if you don't have to!" Please note that if you close the terminal here you will first need to load FSL again and navigate back to your folder! @@ -194,7 +194,7 @@ For example, you can then put this current 3rd level output in the subdirectory: - Click the Done button. Check and dismiss the pop-up window

- GLM Third Level + GLM Third Level

Post-Stats

@@ -233,7 +233,7 @@ Now complete the tabs following the instructions below: - In the form that appears fill in the lines with the `.feat` directory from the first level analysis from the first participant (scan 1), followed by the second participant (scan 1), followed by the third participant (scan 1, and then scan 2) and so on down to participant 15, scan 2. It is important that you enter these in strict logical order, starting with `p01` as in this example below:

- FEAT Directories + FEAT Directories

- When this is done, hit the OK button. @@ -260,7 +260,7 @@ and should be meaningfully named. For example, you could call it: GLM EVs

-On the 'Contrasts and F-tests' tab, we also need 15 contrasts to represent the 15 subject means. Fill in the tab as below: +- On the 'Contrasts and F-tests' tab, we also need 15 contrasts to represent the 15 subject means. Fill in the tab as below:

GLM Contrasts @@ -269,7 +269,7 @@ On the 'Contrasts and F-tests' tab, we also need 15 contrasts to represent the 1 !!! caution "Check boxes in FSL" In the older versions of FSL after selecting an option you will see a yellow checkbox, however in the newer versions of FSL such as the one we are using, the checkbox is yellow to start with, and after selecting option you will see a tick ✔️ inside the yellow checkbox. -Click the Done button. This will produce a schematic of your new second level design. +- Click the Done button. This will produce a schematic of your new second level design.

Design Matrix @@ -279,9 +279,9 @@ Check it makes sense, and that you understand what it is showing, then close its

Post-Stats

-Accept the defaults. +- Accept the defaults. -Now click the Go button! +- Now click the Go button! Wait for the analysis to complete and then look at the results. Note that the output folder is called `level2all.gfeat` if you named it as above. You will have to click on the link marked 'Results' and then on the link labelled 'Lower-level contrast 1 (vision)' on that page and then on the 'Post-stats' link. @@ -309,7 +309,7 @@ To do this, follow the steps below: - Press the button marked 'Select cope images' - In the dialogue that appears you need to add in the path to the COPE image for each of the participants in the second level analysis we have just performed. -If you have used the correct naming convention above, this is: +If you have used the correct naming convention above, this will be: ```bash /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope1.nii.gz diff --git a/docs/workshop7/workshop7-intro.md b/docs/workshop7/workshop7-intro.md index d3421af..fe46e80 100644 --- a/docs/workshop7/workshop7-intro.md +++ b/docs/workshop7/workshop7-intro.md @@ -11,9 +11,9 @@ This workshop follows on from the [workshop on first-level fMRI analysis](https: !!! success "Overview of Workshop 6" Topics for this workshop include: - - How to combine scans when one participant has been scanned multiple-times (e.g., twice) within the same experiment - - How to combine scans across participants - - How to set up different higher-level (second and third-level) fMRI analyses + - Combining scans when one participant has been scanned multiple-times (e.g., twice) within the same experiment + - Combining scans across participants + - Setting up different higher-level (second and third-level) fMRI analyses As in the other workshops we will not discuss in detail why you might choose certain parameters. The aim of this workshop is to familiarise you with some of the available analysis tools. You are encouraged to read the pop-up help throughout (hold your mouse arrow over FSL GUI buttons and menus when setting your FEAT design), refer to your lecture notes lectures or resource list readings. diff --git a/docs/workshop8/functional-connectivity.md b/docs/workshop8/functional-connectivity.md index 53a3e66..8378db2 100644 --- a/docs/workshop8/functional-connectivity.md +++ b/docs/workshop8/functional-connectivity.md @@ -1,16 +1,15 @@ # Functional connectivity analysis of resting-state fMRI data using FSL -This workshop is based upon the excellent [FSL fMRI Resting State Seed-based Connectivity](https://neuroimaging-core-docs.readthedocs.io/en/latest/pages/fsl_fmri_restingstate-sbc.html) tutorial by Dianne Paterson at the University of Arizona, which has been adapted to run on the BEAR systems at the University of Birmingham, with some additional content covering [Neurosynth](https://neurosynth.org/). +This workshop is based upon the excellent [FSL fMRI Resting State Seed-based Connectivity](https://neuroimaging-core-docs.readthedocs.io/en/latest/pages/fsl_fmri_restingstate-sbc.html) tutorial, which has been adapted to run on the BEAR systems at the University of Birmingham, with some additional content covering [Neurosynth](https://neurosynth.org/). We will run a group-level functional connectivity analysis on resting-state fMRI data of three participants, specifically examining the functional connectivity of the posterior cingulate cortex (PCC), a region of the default mode network (DMN) that is commonly found to be active in resting-state data. -!!! 
success "Overview of Workshop 8" - To do this, we will: +To do this, we will: - - extract a mean-timeseries for a PCC seed region for each participant, - - run single-subject level analyses, one manually and bash scripting the other two, - - run a group-level analysis using the single-level results - - figure out which brain regions our active voxels are in, using atlases in FSL, and Neurosynth. +- extract a mean-timeseries for a PCC seed region for each participant, +- run single-subject level analyses, one manually and bash scripting the other two, +- run a group-level analysis using the single-level results +- figure out which brain regions our active voxels are in, using atlases in FSL, and Neurosynth. ## Preparing the data