diff --git a/search/search_index.json b/search/search_index.json index 80ebb0e..5e8bcf6 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"MRI on BEAR","text":"

MRI on BEAR is a collection of educational resources created by members of the Centre for Human Brain Health (CHBH), University of Birmingham, to provide a basic introduction to fundamentals in magnetic resonance imaging (MRI) data analysis, using the computational resources available to the University of Birmingham research community.

"},{"location":"#about-this-website","title":"About this website","text":"

This website contains workshop materials created for the MSc module 'Magnetic Resonance Imaging in Cognitive Neuroscience' (MRICN) and its earlier version, 'Fundamentals in Brain Imaging' taught by Dr Peter C. Hansen, at the School of Psychology, University of Birmingham. It is a ten-week course consisting of lectures and workshops introducing the main techniques of functional and structural brain mapping using MRI, with a strong emphasis on - but not limited to - functional MRI (fMRI). Topics include the physics of MRI, experimental design for neuroimaging experiments, and the analysis of fMRI and other types of MRI data. This website includes only the workshop materials, which provide basic training in the analysis of brain imaging data and data visualization.

Learning objectives

At the end of the course you will be able to:

For externals not on the course

Whilst we have made these resources publicly available for anyone to use, please BEAR in mind that the course has been specifically designed to run on the computing resources at the University of Birmingham.

"},{"location":"#teaching-staff","title":"Teaching Staff","text":"Dr Magdalena Chechlacz

Role: Course Lead

Magdalena Chechlacz is an Assistant Professor in Cognition and Ageing at the School of Psychology, University of Birmingham. She initially trained and carried out a doctorate in Cellular and Molecular Biology in 2002, and after working as a biologist at the University of California, San Diego, decided on a career change to a more human-oriented science and neuroimaging. She completed a second doctorate in psychology at the University of Birmingham under the supervision of Glyn Humphreys in 2012, and from 2013 to 2016, held a British Academy Postdoctoral Fellowship and EPA Cephalosporin Junior Research Fellowship at Linacre College, University of Oxford. In 2016, Dr Chechlacz returned to the School of Psychology, University of Birmingham as a Bridge Fellow.

Aamir Sohail

Role: Teaching Assistant

Aamir Sohail is an MRC Advanced Interdisciplinary Methods (AIM) DTP PhD student based at the Centre for Human Brain Health (CHBH), University of Birmingham, where he is supervised by Lei Zhang and Patricia Lockwood. He completed a BSc in Biomedical Science at Imperial College London, followed by an MSc in Brain Imaging at the University of Nottingham. He then worked as a Junior Research Fellow at the Centre for Integrative Neuroscience and Neurodynamics (CINN), University of Reading. Outside of research, he is also passionate about facilitating inclusivity and diversity in academia, as well as promoting open and reproducible science.

"},{"location":"#course-overview","title":"Course Overview","text":"

| Workshop | Key Concepts/Tools |
| --- | --- |
| Getting Started | BEAR Portal, BEAR Storage, BlueBEAR |
| Workshop 1 | BlueBEAR GUI, Linux commands |
| Workshop 2 | MRI data formats, FSLeyes, MRI atlases |
| Workshop 3 | DTIfit, TBSS, BET |
| Workshop 4 | Probabilistic tractography, BEDPOSTX, PROBTRACKX |
| Workshop 5 | FEAT, First-level fMRI analysis |
| Workshop 6 | Bash scripting, Submitting jobs, Containers |
| Workshop 7 | Higher-level fMRI analysis, FEATquery |
| Workshop 8 | Resting-state fMRI, Functional connectivity, Neurosynth |

Accessing additional course materials

If you are a CHBH member and would like access to additional course materials (lecture recordings etc.), please contact one of the teaching staff members listed above.

"},{"location":"#license","title":"License","text":"

MRI on BEAR is hosted on GitHub. All content in this book (i.e., any files and content in the docs/ folder) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please see the LICENSE file in the GitHub repository for more details.

"},{"location":"contributors/","title":"Contributors","text":"

Many thanks to our contributors for creating and maintaining these resources!

Andrew Quinn\ud83d\udea7 \ud83d\udd8b Aamir Sohail\ud83d\udea7 \ud83d\udd8b James Carpenter\ud83d\udd8b Magda Chechlacz\ud83d\udd8b

Acknowledgements

Thank you to Charnjit Sidhu for their help and support in developing these materials!

"},{"location":"resources/","title":"Additional Resources","text":"

For those wanting to develop their learning beyond the scope of the module, here is a (non-exhaustive) list of links and pages for neuroscientists, covering both the concepts behind neuroimaging data and their practical application.

Contributing to the list

Feel free to suggest additional resources to the list by opening a thread on the GitHub page!

FSL Wiki

Most relevant to the course is the FSL Wiki, the comprehensive guide for FSL by the Wellcome Centre for Integrative Neuroimaging at the University of Oxford.

"},{"location":"resources/#existing-lists-of-resources","title":"Existing lists of resources","text":"

Here are some current 'meta-lists' which already cover a lot of resources themselves:

"},{"location":"resources/#neuroimaging","title":"Neuroimaging","text":"Conceptual understanding

Struggling to grasp the fundamentals of MRI/fMRI? Want to quickly refresh your mind on the physiological basis of the BOLD signal? Well, these resources are for you!

Analysis of fMRI data "},{"location":"resources/#programming","title":"Programming","text":"Unix/Linux "},{"location":"resources/#textbooks","title":"Textbooks","text":""},{"location":"setting-up/","title":"Accessing BlueBEAR and the BEAR Portal","text":"

Before you start with any workshop materials, you will need to familiarise yourself with the CHBH\u2019s primary computational resource, BlueBEAR. The following pages are aimed at helping you get started.

To put these workshop materials into practical use, you will be expected to understand what BlueBEAR is and what it is used for, and to make sure you have access to it.

Student Responsibility

If you are an MSc student taking the MRICN module, please note that while help will be available during all in-person workshops in case you have any problems using the BEAR Portal, it is your responsibility to make sure that you have access and that you are familiar with the information provided in the pre-workshop materials. Failing to gain an understanding of BlueBEAR and the BEAR Portal will prevent you from participating in the practical sessions and completing the module's main assessment (data analysis).

"},{"location":"setting-up/#what-are-bear-and-bluebear","title":"What are BEAR and BlueBEAR? / Signing in to the BEAR Portal","text":"

BEAR stands for Birmingham Environment for Academic Research and is a collection of services provided specifically for researchers at the University of Birmingham. BEAR services are used by researchers at the Centre for Human Brain Health (CHBH) for various types of neuroimaging data analysis.

BEAR services and basic resources - such as the ones we will be using for the purpose of the MRICN module - are freely available to the University of Birmingham research community. Extra resources which may be needed for some research projects, e.g., access to dedicated nodes and extra storage, can be purchased. This is something your PI or MSc/PhD project supervisor might be using and can give you access to.

BlueBEAR refers to the Linux High Performance Computing (HPC) environment which:

  1. Enables researchers to run jobs simultaneously on many servers (thus providing fast and efficient processing capacity for data analysis)
  2. Gives easy access to multiple apps, software libraries (e.g., software we will be using in this module to analyse MRI data), as well as various software development tools

As computing resources on BlueBEAR rely on Linux, in Workshop 1 you will learn some basic commands, which you will need to be familiar with to participate in subsequent practical sessions and to complete the module's main assessment (data analysis assessment). More Linux commands and the basic principles of scripting will be introduced in subsequent workshops.

There are two steps to gaining access to BlueBEAR:

Gaining access to BEAR Projects

Only a member of academic staff, e.g., your project supervisor or module lead, can apply for a BEAR project. As a student you cannot apply for a BEAR project. If you are registered as a student on the MRICN module, you should have already been added as a member of the project chechlmy-chbh-mricn. If not, please contact one of the teaching staff.

Even if you are already a member of a BEAR project giving you BlueBEAR access, you will still need to activate your BEAR Linux account via the self-service route or the service desk form. Step-by-step instructions on how to do this are available on the University Advanced Research Computing page; see the following link.

Please follow the steps above to make sure you have a BEAR Linux account before starting with the Workshop 1 materials. To do this, you will need to be on campus or using the University Remote Access Service (VPN).

After you have activated your BEAR Linux account, you can sign in to the BEAR Portal.

BEAR Portal access requirements

Remember that the BEAR Portal is only available on campus or using the VPN!

If your log in is successful, you will be directed to the main BEAR Portal page as below. This means that you have successfully launched the BEAR Portal.

If you get to this page, you are ready for Workshop 1. For now, you can log out. If you have any problems logging on to BEAR Portal, please email chbh-help@contacts.bham.ac.uk for help and advice.

"},{"location":"setting-up/#bear-storage","title":"BEAR Storage","text":"

The storage associated with each BEAR project is called the BEAR Research Data Store (RDS). Each BEAR project gets 3TB of storage space for free, but researchers (e.g., your MSc project supervisor) can pay for additional storage if needed. The RDS should be used for all data, job scripts and output on BlueBEAR.

If you are registered as a student on the MRICN module, all the data and resources you will need to participate in the MRICN workshops and to complete the module\u2019s main assessment have been added to the MRICN module RDS, and you have been given access to the folder /rds/projects/c/chechlmy-chbh-mricn. When working on your MSc project using BEAR services, your supervisor will direct you to the relevant RDS project.

External access to data

If you are not registered on the module and would like access to the data, please contact one of the teaching staff members.

"},{"location":"setting-up/#finding-additional-information","title":"Finding additional information","text":"

There is extensive BEAR technical documentation provided by the University of Birmingham BEAR team (see links below). For the purpose of this module, you are not expected to be familiar with all the information provided there, but you might find it useful if you want to know more about the computing resources available to researchers at CHBH via BEAR services, especially if you will be using BlueBEAR for other purposes (e.g., for your MSc project).

You can find out more about BEAR, BlueBEAR and RDS on the dedicated BEAR webpages:

"},{"location":"workshop1/intro-to-bluebear/","title":"Introduction to the BlueBEAR Portal","text":"

At this point you should know how to log in and access the main BEAR Portal page.

Please navigate to https://portal.bear.bham.ac.uk, log in and launch the BEAR Portal; you should get to the page as below.

BlueBEAR Portal is a web-based interface enabling access to various BEAR services and BEAR apps including:

The BlueBEAR Portal is essentially a user-friendly alternative to the command-line interface (your computer's terminal).

To view all files and data you have access to on BlueBEAR, click on 'Files' as illustrated above. You will see your home directory (your BEAR Linux home directory), and all RDS projects you are a member of.

You should be able to see /rds/projects/c/chechlmy-chbh-mricn (MRICN module\u2019s RDS project). By selecting the 'Home Directory' or any 'RDS project' you will open a second browser tab, displaying the content. In the example below, you see the content of one of Magda's projects.

Inside the module's RDS project, you will find that you have a folder labelled xxx, where xxx is your University of Birmingham ADF username. If you navigate to that folder, /rds/projects/c/chechlmy-chbh-mricn/xxx, you will be able to perform various file operations from there. However, for now, please do not move, download, or delete any files.

Data confidentiality

Please also note that the MRI data you will be given to work with should be used on BlueBEAR only and not downloaded on your personal desktop or laptop!

"},{"location":"workshop1/intro-to-bluebear/#launching-the-bluebear-gui","title":"Launching the BlueBEAR GUI","text":"

The BlueBEAR Portal options in the menu bar - 'Jobs', 'Clusters' and 'My Interactive Sessions' - can be used to submit and edit jobs to run on the BlueBEAR cluster, and to get information about your currently running jobs and interactive sessions. Some of these processes can also be executed using the Code Server Editor (VS Code), accessible via Interactive Apps. We won't explore these options in detail now, but some of them will be introduced later when needed.

For example, from the 'Cluster' option you can jump directly on to the BlueBEAR terminal and by using this built-in terminal, submit data analysis jobs and/or employ your own contained version of neuroimaging software rather than software already available on BlueBEAR. We will cover containers, scripting and submitting jobs in Workshop 6. For now, just click on this option and see what happens; you can subsequently exit/close the terminal page.

Finally, from the BlueBEAR Portal menu bar you can select 'Interactive Apps' and from there access various GUI applications you wish to use, including JupyterLab, RStudio, MATLAB and most importantly the BlueBEAR GUI, which we will be using to analyse MRI data in the subsequent workshops.

Please select 'BlueBEAR GUI'. This will bring up a page for you to specify options for the job that starts the BlueBEAR GUI. You can leave most of these options at their defaults, but please change "Number of Hours" to 2 (our workshops will last 2 hours; for some other analysis tasks you might need more time) and make sure that the selected 'BEAR Project' is chechlmy-chbh-mricn. Next, click on 'Launch'.

It will take a few minutes for the job to start. Once it's ready, you'll see an option to connect to the BlueBEAR GUI. Click on 'Launch BlueBEAR GUI'.

Once you have launched the BlueBEAR GUI, you are now in a Linux environment, on a Linux Desktop. The following section will guide you on navigating and using this environment effectively.

Re-launching the BlueBEAR GUI

In the main window of the BlueBEAR Portal you will be able to see that you have an interactive session running (the information above will remain there). This is important: if you close the Linux Desktop by mistake, you can click on 'Launch BlueBEAR GUI' again to reopen it.

"},{"location":"workshop1/intro-to-linux/","title":"Introduction to Linux","text":"

Linux is a computer Operating System (OS) similar to Microsoft Windows or Mac OS. Linux is very widely used in the academic world, especially in the sciences. It is derived from one of the oldest, most stable and most widely used OS platforms around, Unix. We use Linux on BlueBEAR. Many versions of Linux are freely available to download and install, including CentOS (Community ENTerprise Operating System) and Debian, which you might be familiar with. You can also use these operating systems alongside Microsoft Windows in a dual-boot environment on your laptop or desktop computer.

Linux and neuroimaging

Linux is particularly suited for clustering computers together and for efficient batch processing of data. All major neuroimaging software runs on Linux. This includes FSL, SPM, AFNI, and many others. Linux, or some version of Unix, is used in almost all leading neuroimaging centres. Both MATLAB and Python also run well in a Linux environment.

If you work in neuroimaging, it is to your advantage to become familiar with Linux. The more familiar you are, the more productive you will become. For some of you, this might be a challenge. The environment will present a new learning experience, one that will take time and effort to learn. But in the end, you should hopefully realize that the benefits of learning to work in this new computer environment are indeed worth the effort.

Linux is not like the Windows or Mac OSX environments. It is best used by typing commands into a Terminal client and by writing small batch command programs. Frequently you may not even need to use the mouse. Using the Linux environment alone may take some getting used to, but it will become more familiar throughout the course as we use it to navigate our file system and to script our analyses. For now, we will simply explore the Linux terminal and simple commands.

"},{"location":"workshop1/intro-to-linux/#using-the-linux-terminal","title":"Using the Linux Terminal","text":"

The BlueBEAR GUI enables you to load various applications through the Linux environment and a built-in Terminal client. Once you have launched the BlueBEAR GUI, you will see a new window, and from there you can open the Terminal client. There are different ways to open the Terminal in the BlueBEAR GUI window, as illustrated below.

Either by selecting from the drop-down menu:

Or by selecting the folder at the bottom of the screen:

In either case you will load the terminal:

Once you have started the terminal, you will be able to load the required applications (e.g., to start the FSL GUI). FSL (FMRIB Software Library) is a neuroimaging software package we will be using in our workshops for MRI data analysis.

When using the BlueBEAR GUI Linux desktop, you can work in four separate spaces/windows simultaneously. For example, if you are planning on using multiple apps, rather than opening multiple terminals and apps in the same space, you can move to another space. You can do that by clicking on the "workspace manager" in the Linux desktop window.

Linux is fundamentally a command line-based operating system, so although you can use the GUI interface with many applications, it is essential you get used to issuing commands through the Terminal interface to improve your work efficiency.

Make sure you have an open Terminal as per the instructions above. Note that a Terminal is a text-based interface, so generally the mouse is not much use. You need to get used to taking your hand off the mouse and letting go of it. Move it away, out of reach. You can then get used to using both hands to type into a Terminal client.

[chechlmy@bear-pg0210u07a ~]$ as shown above in the Terminal Client is known as the system prompt. The prompt usually identifies the user and the system hostname. You can type commands at the system prompt (press the Enter key after each command to make it run). The system then returns output based on your command to the same Terminal.

Try typing ls in the Terminal.

This command tells Linux to print a list of the current directory contents. We will return to basic Linux commands later; you will need them in order to use BlueBEAR for neuroimaging data analysis.

Why bother with Linux?

You may wonder why you should invest the time to learn the names of the various commands needed to copy files, change directories and to do general things such as run image analysis programs via the command line. This may seem rather clunky. However, the commands you learn to run on the command line in a terminal can alternatively be written in a text file. This text file can then be converted to a batch script that can be run on data sets using the BlueBEAR cluster, potentially looping over hundreds or thousands of different analyses, taking many days to run. This is vastly more efficient and far less error prone than using equivalent graphical tools to do the same thing, one at a time.

When you open a new terminal window it opens in a particular directory. By default, this will be your home directory:

/rds/homes/x/xxx

or the Desktop folder in your home directory:

/rds/homes/x/xxx/Desktop (where x is the first letter of your last name and xxx is your University of Birmingham ADF username).

On BlueBEAR files are stored in directories (folders) and arranged into a tree hierarchy.

Examples of directories on BlueBEAR include:

Directory separators on Linux and Windows

/ (forward slash) is the Linux directory separator. Note that this is different from Windows (where the backward slash \\ is the directory separator).

The current directory is always called . (i.e. a single dot).

The directory above the current directory is always called .. (i.e. dot dot).

Your home directory can always be accessed using the shortcut ~ (the tilde symbol). Note that this is the same as /rds/homes/x/xxx.

You need to remember this to use and understand basic Linux Commands.
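These shortcuts are easiest to learn by trying them. The sketch below uses a throwaway directory created with mktemp (its path is illustrative, not a real BlueBEAR location), so it is safe to run anywhere:

```shell
# Demonstrate . , .. and ~ in a throwaway directory
sandbox=$(mktemp -d)          # temporary stand-in for a project folder
mkdir -p "$sandbox/data/raw"
cd "$sandbox/data/raw"
cd ..                         # .. = the directory above; now in .../data
pwd
cd .                          # . = the current directory; nothing changes
pwd
cd ~                          # ~ = your home directory
pwd                           # prints the same path as: echo $HOME
```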

"},{"location":"workshop1/intro-to-linux/#basic-linux-commands","title":"Basic Linux Commands","text":"

pwd (Print Working Directory)

In a Terminal type pwd followed by the return (enter) key to find out the name of the directory where you are. You are always in a directory and can (usually) move to directories above you or below to subdirectories.

For example if you type pwd in your terminal you will see: /rds/homes/x/xxx (e.g., /rds/homes/c/chechlmy)

cd (Change Directory)

In a Terminal window, type cd followed by the name of a directory to gain access to it. Keep in mind that you are always in a directory and normally are allowed access to any directories hierarchically above or below.

Type in your terminal the examples below:

cd /rds/projects

cd /rds/homes/

cd .. (to change to the directory above using .. shortcut)

To find out where you are now, type pwd:

(answer: /rds)

If the directory is not located above or below the current directory, then it is often less confusing to write out the complete path instead. Try this in your terminal:

cd /rds/homes/x/xxx/Desktop (where x is the first letter of your last name and xxx is your ADF username)

Changing directories with full paths

Note that it does not matter what directory you are in when you execute this command; the directory will always be changed based on the full path you specified.

Remember that the tilde symbol ~ is a shortcut for your home directory. Try this:

cd /rds/projects
cd ~
pwd

You should be now back in your home directory.

ls (List Files)

The ls command (lowercase L, S) allows you to see a summary list of the files and directories located in the current directory. Try this:

cd /rds/projects/c
ls

(you should now see a long list of various BEAR RDS projects)

Before moving to the next section, please close your terminal by clicking on \u201cx\u201d in the top right of the Terminal.

cp (Copy files/directories)

The cp command copies files and/or directories FROM a source TO a destination. If the destination file does not exist, it will be created. In some cases, you may need to specify the complete path to a file's location.

Here are some examples (please do not type them, they are only examples):

| Command | Function |
| --- | --- |
| cp myfile yourfile | Basic file copy (in current directory) |
| cp data data_copy | Copy a directory (needs -r to include its contents) |
| cp -r ~fred/data . | Recursively copy fred's data dir to the current dir |
| cp ~fred/fredsfile myfile | Copy a remote file and rename it |
| cp ~fred/* . | Copy all files from fred's dir to the current dir |
| cp ~fred/test* . | Copy all files that begin with test, e.g. test, test1.txt |

In the subsequent workshops we will practice using the cp command. For now, look at the examples above to understand its usage. There are also some exercises below to check your understanding.
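If you would like to try cp without touching any course data, the sketch below practises the same patterns in a throwaway directory created with mktemp (the file names are invented for illustration):

```shell
# Practise cp safely in a throwaway directory
cd "$(mktemp -d)"
echo "hello" > myfile
cp myfile yourfile           # basic file copy in the current directory
mkdir data
touch data/scan1.nii
cp -r data data_copy         # -r (recursive) copies the directory and its contents
ls data_copy                 # prints: scan1.nii
```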

mv, rmdir and mkdir (Moving, removing and making files/directories)

The mv command will move files FROM a source TO a destination. It works like copy, except the file is actually moved. If applied to a single file, this effectively changes the name of the file. (Note there is no separate renaming command in Linux). The command also works on directories.

Here are some examples (again please do not type these in):

| Command | Function |
| --- | --- |
| mv myfile yourfile | Renames the file |
| mv ~/data/somefile somefile | Moves the file |
| mv ~/data/somefile yourfile | Moves and renames |
| mv ~/data/* . | Moves multiple files |

There are also the mkdir (make a new directory) and rmdir (remove an empty directory) commands:

You can try these two commands. Open a new Terminal and type:

mkdir testdir
ls

In your home directory you will see now a new directory testdir. Now type:

rmdir testdir
ls

You should notice that the testdir has been removed from your home directory.

To remove a file you can use the rm command. Note that, unlike in Microsoft Windows, files deleted at the command line prompt in a terminal window do not go to a wastebin: they cannot be recovered.

e.g. rm junk.txt (this is just an example, do not type it in your terminal)
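A minimal sketch tying mv and rm together, again in a throwaway directory (junk.txt and notes.txt are invented example files):

```shell
# mv renames; rm deletes permanently (there is no wastebin)
cd "$(mktemp -d)"
touch junk.txt
mv junk.txt notes.txt        # same file, new name
ls                           # prints: notes.txt
rm notes.txt                 # gone for good (rm -i asks for confirmation first)
ls                           # prints nothing
```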

Clearing your terminal

Often when running many commands, your terminal will be full and difficult to understand. To clear the terminal screen type clear. This is an especially helpful command when you have been typing lots of commands and need a clean terminal to help you focus.

Linux commands in general

Note that most commands in Linux have a similar syntax: command name [modifiers/options] input output

The syntax of the command is very important. There must be spaces in between the different parts of the command. You need to specify input and output. The modifiers (in brackets) are optional and may or may not be needed depending on what you want to achieve.

For example, take the following command:

cp -r /rds/projects/f/fred/data ~/tmp (This is an example, do not type this)

In the above example -r is an option meaning 'recursive' often used with cp and other commands, used in this case to copy a directory including all its content from one directory to another directory.
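Mapping that example onto the general syntax, with a throwaway directory standing in for fred's project folder (the names freds_data and tmp are hypothetical stand-ins, so this is safe to run):

```shell
# command [options] input output
cd "$(mktemp -d)"
mkdir -p freds_data tmp       # stand-ins for /rds/projects/f/fred/data and ~/tmp
touch freds_data/a.txt
#   cp    -r      freds_data   tmp/
#   name  option  input        output
cp -r freds_data tmp/
ls tmp/freds_data             # prints: a.txt
```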

"},{"location":"workshop1/intro-to-linux/#opening-fsl-on-the-bluebear-gui","title":"Opening FSL on the BlueBEAR GUI","text":"

FSL (FMRIB Software Library) is a software library containing multiple tools for the processing, statistical analysis, and visualisation of magnetic resonance imaging (MRI) data. Subsequent workshops will cover the usage of some of the FSL tools for structural, functional and diffusion MRI data. This workshop only covers how to start the FSL app on the BlueBEAR GUI Linux desktop, and some practical aspects of using FSL, specifically running it in the terminal either in the foreground or in the background.

There are several different versions of FSL available on BlueBEAR. You can search for the available versions of FSL, as well as all other available software, at the following link: https://bear-apps.bham.ac.uk

From there you will also find information on how to load different software. Below you will find an example of loading one of the available versions of FSL.

To open FSL in the terminal, you first need to load the FSL module by typing a specific command in the Terminal.

First, either close the Terminal you have been using and open a new one, or simply clear it. Next, type:

module load FSL/6.0.5.1-foss-2021a

You will see various processes running in the terminal. Once these have stopped and you see a system prompt in the terminal, type:

fsl

This fsl command will initiate the FSL GUI as shown below.

Now try typing ls in the same terminal window and pressing return.

Notice how nothing appears to happen (your keystrokes are shown as being typed in but no actual event seems to be actioned). Indeed, nothing you type is being processed and the commands are being ignored. That is because the fsl command is running in the foreground in the terminal window. Because of that it is blocking other commands from being run in the same terminal.

Now quit the FSL GUI by clicking 'Exit'. Notice that control has been returned to the Terminal and the commands you type are acted on again. Try typing ls again; it should now work in the Terminal.

Next, relaunch FSL by typing fsl & (note the trailing ampersand). Notice that your new commands are still being processed: the fsl command is now running in the background, allowing you to run other commands in parallel from the same Terminal. Typing & after any command makes it run in the background and keeps the Terminal free for you to use.

Sometimes you may forget to type & after a command. If this happens, press CTRL-Z in the Terminal to suspend the foreground process.

You should get a message like "[1]+ Stopped fsl". You will notice that the FSL GUI is now unresponsive (try clicking on some of the buttons). The fsl process has been suspended. To resume it in the background, type bg.

You should find the FSL GUI is now responsive again and input to the terminal now works once more. If you clicked the 'Exit' button when the FSL GUI was unresponsive, FSL might close now.

Running and killing commands in the terminal

If, for some reason, you want to make the command run in the foreground then rather than typing bg (for background) instead type fg (for foreground). If you want to kill (rather than suspend) a command that was running in the foreground, press CTRL-c (CTRL key and c key).
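You can rehearse the background-job workflow without FSL, using sleep as a stand-in for a long-running program (CTRL-Z and fg are interactive, so they are not shown here):

```shell
# Background job control, with sleep standing in for fsl
sleep 300 &        # & starts the command in the background straight away
jobs               # lists your jobs, e.g. [1]+ Running  sleep 300 &
kill %1            # terminates background job number 1
```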

Linux: some final useful tips

TIP 1:

When typing a command - or the name of a directory or file - you never need to type everything out. The terminal will auto-complete the command or file name if you press the TAB key as you go along. Try using the TAB key when typing commands or the complete path to a specific directory.

TIP 2:

If you need help understanding what the options are, or how to use a command, try adding --help to the end of your command. For example, to better understand the options of du, type:

du --help [enter]

TIP 3:

There are many useful online lists of these various commands, for example: www.ss64.com/bash

Exercise: Basic Linux commands

Please complete the following exercises; you should hopefully know which Linux commands to use!

If unsure, check your results with someone else or ask for help!

The correct commands are provided below. (click to reveal)

Linux Commands Exercise (Answers)
  1. clear

  2. cd ~ or cd /rds/homes/x/xxx

  3. pwd

  4. mkdir test

  5. mv test test1 then mkdir test2

  6. cp -r test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/ or mv test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/

  7. rm -r test1 test2 then ls
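For reference, the answers can be strung together as one runnable sequence. The sketch below substitutes two throwaway directories for your real home directory and the module's RDS folder, so it is safe to run anywhere:

```shell
home=$(mktemp -d)            # stand-in for your home directory (~)
proj=$(mktemp -d)            # stand-in for /rds/projects/c/chechlmy-chbh-mricn/xxx
clear                        # 1. clear the terminal
cd "$home"                   # 2. go to your (stand-in) home directory
pwd                          # 3. print where you are
mkdir test                   # 4. make a directory called test
mv test test1                # 5. rename it to test1 ...
mkdir test2                  #    ... and make a second directory, test2
cp -r test1 "$proj"/         # 6. copy test1 into the project folder
rm -r test1 test2            # 7. delete both, then check:
ls                           #    prints nothing
```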

Workshop 1: Further Reading and Reference Material

Here are some additional resources that introduce users to Linux:

A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 01 workshop materials.

"},{"location":"workshop1/workshop1-intro/","title":"Workshop 1 - Introduction to BlueBEAR and Linux","text":"

Welcome to the first workshop of the MRICN course!

In this workshop we will introduce you to the environment where we will run all of our analyses: the BlueBEAR Portal. Whilst you can run neuroimaging analyses on your own device, the range of software and the high-performance computing resources needed necessitate a specifically curated environment. At the CHBH we use BlueBEAR, which satisfies both of these needs. You will also be introduced to the Linux operating system (OS), which is the OS supported on the BlueBEAR Portal through Ubuntu. Linux is similar to other operating systems such as Mac OS and Windows, and can similarly be navigated using terminal commands. Learning how to do so is key when working with data on Linux and, as you will see in future workshops, is particularly useful when creating and running scripts for more complex data analyses.

Overview of Workshop 1

Topics for this workshop include:

Pre-requisites for the workshop

Please ensure that you have completed the 'Setting Up' section of this course, as you will require access to the BEAR Portal for this workshop.

A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 01 workshop materials.

"},{"location":"workshop2/mri-data-formats/","title":"Working with MRI Data - Files and Formats","text":"MRI Image Fundamentals

When you acquire an MRI image of the brain, in most cases it is either a 3D image, i.e., a volume acquired at a single timepoint (e.g., T1-weighted, FLAIR scans), or a 4D multi-volume image acquired as a timeseries (e.g., fMRI scans). Each 3D volume consists of multiple 2D slices, which are individual images.

The volume consists of 3D voxels, with a typical size between 0.25 and 4 mm, but not necessarily the same in all three directions. For example, you can have a voxel size of [1mm x 1mm x 1mm] or [0.5mm x 0.5mm x 2mm]. The voxel size determines the image resolution.

The final important feature of an MRI image is the field of view (FOV), a matrix of voxels whose extent is the voxel size multiplied by the number of voxels. It provides information about the coverage of the brain in your MRI image. The FOV is sometimes given for the entire 3D volume or for an individual 2D slice. Sometimes, the FOV is defined based on slice thickness and the number of acquired slices.
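As a quick sanity check, the FOV arithmetic above can be sketched in plain shell. The acquisition parameters below (2 mm isotropic voxels, a 96 x 96 matrix, 60 slices) are invented for illustration:

```shell
# Hypothetical acquisition (numbers invented for illustration):
voxel_mm=2      # 2 mm isotropic voxel size
matrix=96       # 96 x 96 in-plane matrix
slices=60       # 60 acquired slices

# FOV = voxel size x number of voxels, per direction
fov_inplane=$((voxel_mm * matrix))
fov_slice_dir=$((voxel_mm * slices))

echo "FOV: ${fov_inplane} x ${fov_inplane} x ${fov_slice_dir} mm"
```

This prints a 192 x 192 x 120 mm field of view, i.e., full brain coverage for this hypothetical protocol.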

Image and standard space

When you acquire MRI images of the brain, you will find that these images differ in terms of head position, image resolution and FOV, depending on the sequence and data type (e.g., T1 anatomical, diffusion MRI, fMRI). We often use the term "image space" to describe these differences, i.e., structural (T1), diffusion or functional space.

In addition, we also use the term "standard space" to refer to the standard dimensions and coordinates of a template brain, which are used when reporting results of group analyses. Our brains differ in size and shape, and thus for the purpose of our analyses (both single-subject and group-level) we need to use standard space. The most common brain template is the MNI152 brain (an average of 152 healthy brains).

The process of alignment between different image spaces is called registration or normalization, and its purpose is to make sure that voxel and anatomical locations correspond to the same parts of the brain for each image type and/or participant.

"},{"location":"workshop2/mri-data-formats/#mri-data-formats","title":"MRI Data Formats","text":"

MRI scanners collect MRI data in an internal format that is unique to the scanner manufacturer, e.g., Philips, Siemens or GE. The manufacturer then allows you to export the data into a more usable intermediate format. We often refer to this intermediate format as raw data as it is not directly usable and needs to be converted before being accessible to most neuroimaging software packages.

The most common format used by the various scanner manufacturers is DICOM. DICOM images corresponding to a single scan (e.g., a T1-weighted scan) might be one large file or multiple files (one per volume or one per slice acquired). This depends on the scanner and the data server used to retrieve/export data from the scanner. There are other data formats, e.g., PAR/REC, that are specific to Philips scanners. The raw data needs to be converted into a format that the analysis packages can use.

Retrieving MRI data at the CHBH

At CHBH we have a Siemens 3T PRISMA scanner. When you acquire MRI scans at CHBH, data is pushed directly to a data server in the DICOM format. This should be automatic for all research scans. In addition, for most scans, this data is also directly converted to NIfTI format. So, at the CHBH you will likely retrieve MRI data from the scanner in NIfTI format.

NIfTI (Neuroimaging Informatics Technology Initiative) is the most widely used format for MRI data, accessible by the majority of neuroimaging software packages, e.g., FSL or SPM. An older data format that is still sometimes used is Analyze (with each image consisting of two files, .img and .hdr).

NIfTI format files have either the extension .nii or .nii.gz (compressed .nii file), where there is only one NIfTI image file per scan. DICOM files usually have a suffix of .dcm, although these files might be additionally compressed with gzip as .dcm.gz files.
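As a side illustration (not part of the workshop data), the format can usually be recognised from the file suffix alone; a minimal shell sketch:

```shell
# Illustrative helper: recognise the common MRI file formats mentioned
# above purely from the file suffix.
classify_mri_file() {
    case "$1" in
        *.nii.gz|*.nii) echo "NIfTI"   ;;
        *.dcm.gz|*.dcm) echo "DICOM"   ;;
        *.img|*.hdr)    echo "Analyze" ;;
        *)              echo "unknown" ;;
    esac
}

classify_mri_file "T1_vol_v1_5.nii.gz"   # NIfTI
classify_mri_file "slice_001.dcm"        # DICOM
```

Note that the suffix only tells you the container format, not whether the file contents are valid or complete.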

"},{"location":"workshop2/mri-data-formats/#working-with-mri-data","title":"Working with MRI Data","text":"

We will now convert some DICOM images to NIfTI ourselves, using data collected at the CHBH.

Servers do not always provide MRI data as NIfTIs

While at the CHBH you can download MRI data in NIfTI format, this might not be the case at other neuroimaging centres. Thus, you should learn how to do the conversion yourself.

The data is located in /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH.

First, log in into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). Open a new terminal window and navigate to your MRICN project folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx [where xxx = your ADF username]

Next copy the data from CHBH scanning sessions:

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH .
pwd

After typing pwd, the terminal should show /rds/projects/c/chechlmy-chbh-mricn/xxx (i.e., you should be inside your MRICN project folder).

Then type:

cd CHBH
ls

You should see data from 3 scanning sessions. Note that there are two files per scan session. One is labelled XXX_dicom.zip; this contains the DICOM files of all data from the scan session. The other is labelled XXX_nifti.zip; this contains the NIfTI files of the same data, converted from DICOM.

In general, both DICOM and NIfTI data should always be copied from the server and saved by the researcher after each scan session. The DICOM file is needed in case there are problems with the automatic conversion to NIfTI. However, most of the time the only file you will need to work with is the XXX_nifti.zip file containing the NIfTI versions of the data.

We will now unpack some of the data to explore the data structure. In your terminal, type:

unzip 20191008#C4E7_nifti.zip
cd 20191008#C4E7_nifti
ls

You should see six files listed as below, corresponding to 3 scans (two fMRI scans and one structural scan):

2.5mm_2000_fMRI_v1_6.json
2.5mm_2000_fMRI_v1_6.nii.gz
2.5mm_2000_fMRI_v1_7.json
2.5mm_2000_fMRI_v1_7.nii.gz
T1_vol_v1_5.json
T1_vol_v1_5.nii.gz

JSON files

You may have noticed that for each scan file (NIfTI file, .nii.gz), there is also an autogenerated .json file. This is an information file (in an open standard format) that contains important information for our data analysis. For example, the 2.5mm_2000_fMRI_v1_6.json file contains slice timing information, i.e., the exact point in time during the 2 s TR (repetition time) at which each slice is acquired, which can be used later in fMRI pre-processing. We will come back to this later in the course.
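As an illustration of how such sidecar files can be inspected, here is a minimal shell sketch using Python's standard json module. Note that mock.json and its field values are invented for the example (real sidecars generated during conversion contain many more fields):

```shell
# Illustrative sketch only: mock.json is a made-up miniature sidecar,
# not the real workshop file.
cat > mock.json << 'EOF'
{"RepetitionTime": 2.0, "SliceTiming": [0.0, 1.0, 0.5, 1.5]}
EOF

# Use Python's standard json module to read the fields of interest
python3 - << 'EOF'
import json

with open("mock.json") as f:
    meta = json.load(f)

print("TR:", meta["RepetitionTime"], "s")
print("slices:", len(meta["SliceTiming"]))
EOF
```

Because the format is an open standard, any JSON-aware tool can read these files; you do not need neuroimaging software just to check a scan parameter.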

For now, let's look at another dataset. In your terminal type:

cd ..
unzip 20221206#C547_nifti.zip
cd 20221206#C547_nifti
ls

You should now see a list of 10 files, corresponding to 3 scans (two diffusion MRI scans and one structural scan). For each diffusion scan, in addition to the .nii.gz and .json files, there are two additional files, .bval and .bvec, that contain important information about gradient strength and gradient directions (as mentioned in the MRI physics lecture). These two files are also needed for the later analysis of diffusion MRI data.

We will now look at a method for converting data from the DICOM format to NIfTI.

cd ..
unzip 20191008#C4E7_dicom.zip
cd 20191008#C4E7_dicom
ls

You should see a list of 7 sub-directories. Each top-level DICOM directory contains sub-directories for each individual scan sequence. The structure of DICOM directories can vary depending on how the data are stored/exported on different systems. The 7 sub-directories here contain data for four localizer/planning scans, two fMRI scans and one structural scan. Each sub-directory also contains several .dcm files.

There are several software packages that can be used to convert DICOM to NIfTI, but dcm2niix is the most widely used. It is available as standalone software, or as part of MRIcroGL, a popular tool for brain visualization similar to FSLeyes. dcm2niix is available on BlueBEAR, but to use it you first need to load it from the terminal.

To do this, in the terminal type:

module load bear-apps/2022b

Wait for the apps to load and then type:

module load dcm2niix/1.0.20230411-GCCcore-12.2.0

To convert the .dcm files in one of the sub-directories to NIfTI using dcm2niix from terminal, type:

dcm2niix T1_vol_v1_5

If you now check the T1_vol_v1_5 sub-directory, you should find a single .nii file and a .json file there.

Converting more MRI data

Now try to convert the .dcm files from the scanning session 20221206#C547 to NIfTI. This session has 3 DICOM sub-directories: the two diffusion scans diff_AP and diff_PA, and one structural scan MPRAGE.

To do this, you will first need to change the current directory, unzip, change the directory again and then run the dcm2niix command as above.
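The steps above can be sketched as a small shell loop. This is a dry-run sketch: it only echoes the commands it would run; on BlueBEAR (with the dcm2niix module loaded and from inside the unzipped DICOM directory) you would set DRYRUN to the empty string to actually convert:

```shell
# Dry-run sketch of the conversion steps above. DRYRUN=echo only prints
# the commands; set DRYRUN= on BlueBEAR to convert for real.
DRYRUN=echo

for scan in diff_AP diff_PA MPRAGE; do
    $DRYRUN dcm2niix "$scan"
done
```

Looping like this is handy whenever a session contains several scan sub-directories, since dcm2niix is run once per directory.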

If you have done it correctly you will find .nii and .json files generated in the structural sub-directory, and in the diffusion sub-directories you will also find .bval and .bvec files.

Now that we have our MRI data in the correct format, we will take a look at the brain images themselves using FSLeyes.

"},{"location":"workshop2/visualizing-mri-data/","title":"MRI data visualization with FSLeyes","text":"

FSL (FMRIB Software Library) is a comprehensive neuroimaging software library for the analysis of structural and functional MRI data. FSL is widely used, freely available, and runs on both Linux and macOS, as well as on Windows via a virtual machine.

FSLeyes is the FSL viewer for 3D and 4D data. FSLeyes is available on BlueBEAR, but you need to load it first. You can load FSLeyes as standalone software, but as it is often used with other FSL tools, you will often want to load both FSL and FSLeyes.

In this session we will only be loading FSLeyes by itself, and not with FSL.

FSL Wiki

Remember that the FSL Wiki is an important source for all things FSL!

"},{"location":"workshop2/visualizing-mri-data/#getting-started-with-fsleyes","title":"Getting started with FSLeyes","text":"

Assuming that you have started directly from the previous page, first close your previous terminal (to close dcm2niix). Then open a new terminal and navigate to the correct folder by typing:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH

To open FSLeyes, type:

module load FSL/6.0.5.1-foss-2021a-fslpython

There are different versions of FSL on BlueBEAR; however, this is the version you need in order to use it together with FSLeyes.

Wait for FSL to load and then type:

module load FSLeyes/1.3.3-foss-2021a

Again, wait for FSLeyes to load (it may take a few minutes). After this, to open FSLeyes, type in your terminal:

fsleyes &

The importance of '&'

Why do we type fsleyes & instead of fsleyes?
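In general shell terms, a trailing & starts a program as a background job, so the prompt returns immediately and the terminal stays usable. A minimal sketch, with sleep standing in for a long-running program such as fsleyes:

```shell
# A trailing '&' runs a command as a background job: the prompt returns
# immediately, so the terminal stays usable while the program runs.
# Here 'sleep 30' stands in for a long-running program such as fsleyes.
sleep 30 &
bgpid=$!                      # PID of the background job
echo "terminal free; background PID: $bgpid"
kill "$bgpid"                 # tidy up the illustrative job
```

Without the &, the terminal would be blocked until the program exits, so you could not type further commands while the viewer is open.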

You should then see the setup below, which is the default FSLeyes viewer without an image loaded.

You can now load/open an image to view. Click 'File' → 'Add from file' and then select the file in your directory, e.g., /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii.

You can also type directly in the terminal:

fsleyes file.nii.gz

where you replace file.nii.gz with the name of the actual file you want to open.

However, you will need to include the full path to the file if you are not in the same directory when you open the terminal window, e.g., fsleyes /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii

You should now see a T1 scan loaded in ortho view with three canvases corresponding to the sagittal, coronal, and axial planes.

Please now explore the various settings in the ortho view panel:

Also notice the abbreviations on the three canvases:

FSL comes with a collection of NIfTI standard templates, which are used for image registration and normalisation (part of MRI data analysis). You can also load these templates in FSLeyes.

To load a template, click 'File' → 'Add Standard' and, for example, select the file named MNI152_T1_2mm.nii.gz. If you still have the T1.nii image open, first close it (by selecting 'Overlay' → 'Remove') and then load the template.

The image below depicts the various tools that you can use in FSLeyes, so give them a go!

We will now look at fMRI data. First close the previous image ('Overlay' → 'Remove') and then load the fMRI image. To do this, click 'File' → 'Add from file' and then select the file /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/2.5mm_2000_fMRI.nii.gz.

Your window should now look like this:

Remember this fMRI data file is a 4D image: a set of 90-odd volumes representing a timeseries. To cycle through the volumes, use the up/down buttons, or type a volume number in the 'Volume' box to step through several volumes.

Now try playing the 4D file in 'Movie' mode by clicking this button. You should see some slight head movement over time. Click the button again to stop the movie.

As the fMRI data is 4D, this means that every voxel in the 3D-brain has a timecourse associated with it. Let's now have a look at this.

Keeping the same dataset open (2.5mm_2000_fMRI.nii.gz), now select 'View' → 'Time series' from the FSLeyes menu.

FSLeyes should now look like the picture below.

What exactly are we looking at?

The functional image displayed here is the data straight from the scanner, i.e., raw, un-preprocessed data that has not been analyzed. In later workshops we will learn how to view analyzed data e.g., display statistical maps etc.

You should see a timeseries shown at the bottom of the screen corresponding to the voxel that is selected in the main viewer. Move the mouse to select other voxels to investigate how variable the timecourse is.

Within the timeseries window, hit the '+' button to show the 'Plot List' characteristics for this timeseries.

Compare the timeseries in different parts of the brain, just outside the brain (skull and scalp), and in the airspace outside the skull. You should observe that these have very different mean intensities.

The timeseries of multiple different voxels can be compared using the '+' button. Hit '+' and then select a new voxel. Characteristics of the timeseries such as plotting colour can also be changed using the buttons on the lower left of the interface.

"},{"location":"workshop2/visualizing-mri-data/#atlas-tools","title":"Atlas tools","text":"

FSL comes not only with a collection of NIfTI standard templates but also with several built-in atlases, both probabilistic and histological (anatomical), comprising cortical, sub-cortical, and white matter parcellations. You can explore the full list of included atlases here.

We will now have a look at some of these atlases.

Firstly, close all open files in FSLeyes (or close FSLeyes altogether and start it up again in your terminal by running fsleyes &).

In the FSLeyes menu, select 'File' → 'Add Standard' and then choose the file called MNI152_T1_2mm.nii.gz (this is a template brain in MNI space).

The MNI152 atlas

Remember that the MNI152 atlas is a standard brain template, created by averaging 152 MRI scans of healthy adults, that is widely used as a reference space in neuroimaging research.

Now select from the menu 'Settings' → 'Ortho View 1' and tick the box for 'Atlases' at the bottom.

You should now see the 'Atlases' panel open as shown below.

The 'Atlases' panel is organized into three sections:

The 'Atlas information' tab provides information about the current display location, relative to one or more atlases selected in this tab. We will soon see how to use this information.

The 'Atlas search' tab can be used to search for specific regions by browsing through the atlases. We will later look at how to use this tab to create region-of-interest (ROI) masks.

The 'Atlas management' tab can be used to add or delete atlases. This is an advanced feature, and we will not be using it during our workshops.

We will now have a look at how to work with FSL atlases. First we need to choose some atlases to reference. In the 'Atlases' → 'Atlas information' window (bottom of screen in the middle panel) make sure the following are ticked:

Now let's select a point in the standard brain. Move the cursor to the voxel position: [x=56, y=61, z=27] or enter the voxel location in the 'Location' window (2nd column).

MNI Co-ordinate Equivalent

Note that the equivalent MNI coordinates (shown in the 1st column/Location window) are [-22,-4,-18].

It may not be immediately obvious what part of the brain you are looking at. Look at the 'Atlases' window. The report should say something like:

Harvard-Oxford Cortical Structural Atlas
Harvard-Oxford Subcortical Structural Atlas
98% Left Amygdala

Checking the brain region with other atlases

What do the Juelich Histological Atlas & Talairach Daemon Labels report?

The Harvard-Oxford and Juelich atlases are both probabilistic. They report the percentage likelihood that the named area matches the point where the cursor is.

The Talairach Atlas is a simpler labelling atlas. It is based on a single brain (that of a 60-year-old French woman) and is an example of a deterministic atlas. It reports the name of the nearest label to the cursor coordinates.

From the previous reports, particularly the Harvard-Oxford Subcortical Atlas and the Juelich Atlas, it should be obvious that we are most likely in the left amygdala.

Now click the '(Show/Hide)' link after the Left Amygdala result (as shown below):

This shows the (max) volume that the probabilistic Harvard-Oxford Subcortical Atlas has encoded for the Left Amygdala. The cursor is right in the middle of this volume.

In the 'Overlay list', click and select the top amygdala overlay. You will note that the min/max ranges are set to 0 and 100; if not, change them to 0 and 100. These reflect the % likelihood of the labelling being correct.

If you increase the min value from 0% to 50%, then you will see the size of the probability volume for the left amygdala will decrease.

It now shows only the volume where there is a 50% or greater probability that this label is correct.

Click the (Show/Hide) link after the Left Amygdala; the amygdala overlay will disappear.

Exercise: Coordinate Localization

Have a go at localizing exactly what the appropriate label is for these coordinates:

If unsure check your results with someone else, or ask for help!

Before continuing

Make sure all overlays are closed (but keep the MNI152_T1_2mm.nii.gz open) before moving to the next section.

"},{"location":"workshop2/visualizing-mri-data/#using-atlas-tools-to-find-a-brain-structure","title":"Using atlas tools to find a brain structure","text":"

It is often helpful to locate where a specific structure is in the brain and to visually assess its size and extent.

Let's suppose we want to visualize where Heschl's Gyrus is. In the bottom 'Atlases' window, click on the second tab ('Atlas search').

In the search box, start typing the word 'Heschl...'. You should find that the system quickly locates an entry for Heschl's Gyrus in the Harvard-Oxford Cortical Atlas. Click on it to select it.

Now if you tick the box immediately below, next to Heschl's Gyrus, an overlay will be added to the 'Overlay' list at the bottom (see below). Heschl's Gyrus should now be visible in the main image viewer.

Now click on the '+' button next to the tick box. This will centre the viewing coordinates to be in the middle of the atlas volume (see below).

Exercise: Atlas visualization

Now try the following exercises for yourself:

You can change the colour of the overlays by selecting the option below:

Other options also exist to help you navigate the brain and recognize the different brain structures and their relative positions.

Make sure you have first closed/removed all previous overlays. Now, select the 'Atlas search' tab in the 'Atlases' window again. This time, in the left panel listing the different atlases, tick the option for only one of the atlases, such as the Harvard-Oxford Cortical Structural Atlas, and make sure all others are unticked.

Now you should see all of the areas covered by the Harvard-Oxford cortical atlas shown on the standard brain. As you click around with the cursor, the labels for the different areas can be seen in the bottom right panel.

In addition to atlases covering various grey matter structures, there are also two white matter atlases: the JHU ICBM-DTI-81 white-matter labels atlas and the JHU white-matter tractography atlas. If you tick (select) these atlases as per the previous instructions (hint: using the 'Atlas search' tab), you will see a list of all included white matter tracts (pathways), as shown below:

"},{"location":"workshop2/visualizing-mri-data/#using-atlas-tools-to-create-a-region-of-interest-mask","title":"Using atlas tools to create a region-of-interest mask","text":"

You can use the atlas tools in FSLeyes not only to locate specific brain structures but also to create masks for ROI (region-of-interest) analysis. We will now create two ROI masks (one grey matter and one white matter) using FSL tools and the built-in atlases.

To start, please close FSLeyes entirely, either by clicking 'x' in the right corner of the FSLeyes window or by selecting 'FSLeyes' → 'Close'. Then close your current terminal and open a new terminal window.

Then do the following:

Here are the commands to do this:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/
mkdir ROImasks
cd ROImasks
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsleyes &

Wait for FSLeyes to load, then:

You should now see the MFG overlay in the overlay list (as below) and have an MFG.nii.gz file in the ROImasks directory. You can check this by typing ls in the terminal.

We will now create a white matter mask. Here are the steps:

You should now see the FM overlay in the overlay list (as below) and also have a FM.nii.gz file in the ROImasks directory.

You now have two "probabilistic ROI masks". To use these masks in various analyses, you first need to binarize these images.

Why binarize?

Why do you think we need to binarize the mask first? There are several reasons, but primarily it creates clear boundaries between regions, which simplifies our statistical analysis and reduces computation.

To do this, first close FSLeyes. Make sure that you are in the ROImasks directory and check that you have the two masks. If you type pwd in the terminal, you should get the output /rds/projects/c/chechlmy-chbh-mricn/xxx/ROImasks (where xxx = your ADF username), and when you type ls, you should see FM.nii.gz and MFG.nii.gz.

To binarize the masks, you can use one of the FSL tools for image manipulation, fslmaths. The basic structure of an fslmaths command is:

fslmaths <input image> [modifiers/options] <output>

Type in your terminal:

fslmaths FM.nii.gz -bin FM_binary
fslmaths MFG.nii.gz -bin MFG_binary

This simply takes your ROI mask, binarizes it and saves the binarized mask with the _binary suffix.

You should now have 4 files in the ROImasks directory.

Now open FSLeyes and examine one of the binary masks you just created. First load a template (Click 'File' \u2192 'Add Standard' \u2192 'MNI152_T1_2mm') and add the binary mask (e.g., Click 'File' \u2192 'Add from file' \u2192 'FM_binary.nii.gz').

You can see the difference between the probabilistic and binarized ROI masks below:

Probabilistic ROI mask

Binary ROI mask

To use ROI masks in your analysis, you might also need to threshold them, i.e., restrict the probability of the volume. We previously did this manually for the amygdala (e.g., from 0-100% to 50-100%). The choice of threshold may depend on the type of analysis and the type of ROI mask you need to use. The instructions below explain how to threshold and binarize your ROI image in one single step using fslmaths.

Open your terminal and make sure that you are in the ROImasks directory (pwd). To both threshold and binarize the MFG mask, type:

fslmaths MFG.nii.gz -thr 25 -bin MFGthr_binary

(the -thr option zeroes everything in the image below a specific value, in this case 25, corresponding to 25% probability)
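The combined effect of -thr and -bin can be illustrated value by value in plain shell. This is a conceptual sketch of the arithmetic only, not an FSL command, and thr_bin is a made-up helper:

```shell
# Conceptual sketch in plain shell (not an FSL command): thr_bin is a
# made-up helper mimicking 'fslmaths -thr 25 -bin' on a single value.
thr_bin() {
    # -thr 25 zeroes anything below 25; -bin sets what survives to 1
    if [ "$1" -lt 25 ]; then echo 0; else echo 1; fi
}

for p in 0 10 25 40 80 100; do
    echo "$p% -> $(thr_bin "$p")"
done
```

So a voxel with a 10% probability ends up outside the mask (0), while voxels at 25% or above all become 1, which is exactly why the thresholded mask is smaller than the unthresholded one.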

Now let's compare the thresholded and unthresholded MFG binarized masks.

You can see the difference in size between the two below:

Binarized MFG mask

Binarized and thresholded MFG mask

Exercise: Atlases and masks

Have a go at the following exercises:

If unsure, check your results with someone else or ask for help!

Workshop 2: Further Reading and Reference Material

FSLeyes is not the only MRI visualization tool available. Here are some others:

More details of what is available on BEAR at the CHBH can be found at the BEAR Technical Docs website.

"},{"location":"workshop2/workshop2-intro/","title":"Workshop 2 - MRI data formats, data visualization and atlas tools","text":"

Welcome to the second workshop of the MRICN course! Prior lectures introduced you to the basics of the physics and technology behind MRI data acquisition. In this workshop we will explore MRI image fundamentals, MRI data formats, data visualization and atlas tools.

Overview of Workshop 2

Topics for this workshop include:

You will need this information before you can analyse data, regardless of whether you are using structural or functional MRI data.

For the purpose of the module we will be using BlueBEAR. You should remember from Workshop 1 how to access the BlueBEAR Portal and use the BlueBEAR GUI.

You have already been given access to the RDS project /rds/projects/c/chechlmy-chbh-mricn. Inside the module's RDS project, you will find that you have a folder labelled xxx (xxx = University of Birmingham ADF username).

If you navigate to that folder (/rds/projects/c/chechlmy-chbh-mricn/xxx), you will be able to perform the various file operations from there during the workshops.

A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 02 workshop materials.

"},{"location":"workshop3/diffusion-intro/","title":"Diffusion MRI basics - visualization and preprocessing","text":"

In this workshop and next week's workshop, we will follow some basic steps of the diffusion MRI analysis pipeline below. The instructions here are specific to the tools available in FSL; however, other neuroimaging software packages can be used to perform similar analyses. You might also recall from the lectures that models other than the diffusion tensor, and methods other than probabilistic tractography, are also often used.

FSL diffusion MRI analysis pipeline

First, if you have not already, log in to the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do this from the previous workshops. Open a new terminal window and navigate to your MRICN project folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx [where xxx = your ADF username]

Please check your directory by typing pwd. This should return: /rds/projects/c/chechlmy-chbh-mricn/xxx.

Where has all my data gone?

Before this workshop, any old directories and files from previous workshops were removed (you will not need them for subsequent workshops, and storing unnecessary data would result in exceeding the allocated quota). Your xxx directory should therefore be empty.

Next you need to copy over the data for this workshop.

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/diffusionMRI/ . (make sure you do not omit the spaces or the final .)

This might take a while, but once it has completed, change into that downloaded directory:

cd diffusionMRI (inside your xxx subdirectory you should now have the folder diffusionMRI)

Type ls. You should now see three subdirectories/folders (DTIfit, TBSS and tractography). Change into the DTIfit folder:

cd DTIfit

"},{"location":"workshop3/diffusion-intro/#viewing-diffusion-data-using-fsleyes","title":"Viewing diffusion data using FSLeyes","text":"

We will first look at what diffusion images look like and explore text files which contain information about gradient strength and gradient directions.

In your terminal type ls. This should return:

p01/
p02/

So, the folder DTIfit contains data from two participants contained within the p01 and p02 folders.

Inside each folder (p01 and p02) you will find a T1 scan, uncorrected diffusion data (blip_up.nii.gz, blip_down.nii.gz) acquired with two opposing PE-directions (AP/blip_up and PA/blip_down) and corresponding bvals (e.g., blip_up.bval) and bvecs (e.g., blip_up.bvec) files.

The number of entries in bvals and bvecs files equals the number of volumes in the diffusion data files.

Finally, inside p01 and p02 there is also a subdirectory data with the distortion-corrected diffusion images.

We will start with viewing the uncorrected data. Please navigate inside the p01 folder, open FSLeyes and then load one of the uncorrected diffusion images:

cd p01
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsleyes &

The image you have loaded is 4D and consists of 64 volumes acquired with different diffusion encoding directions. Some of the volumes are non-diffusion images (b-value = 0), while most are diffusion weighted images. The first volume, which you can see after loading the file, is a non-diffusion weighted image as demonstrated below.

Viewing separate volumes

You can view the separate volumes by changing the number in the Volume box or by playing movie mode. Note that the volume count starts from 0. You should also note that there are significant differences in image intensity between different volumes.

Now go back to volume 0 and - if needed - stop movie mode. In the non-diffusion weighted image, the ventricles containing CSF are bright and the rest of the image is relatively dark. Now change the volume number to 2, which is a diffusion weighted image (with a b-value of approximately 1500).

The intensity of this volume is different. To see anything, please change max. intensity to 400. Now the ventricles are dark and you can see some contrast between different voxels.

Let's view the content of the bvals and bvecs files by using the cat command. In your terminal type:

cat blip_down.bval

The first number is 0, confirming that the first volume (volume 0) is indeed a non-diffusion-weighted image; likewise, the third number shows that the third volume (volume 2) is a diffusion-weighted volume with b=1500. Based on the content of this bval file, you should be able to tell how many diffusion-weighted volumes were acquired and how many were acquired without any diffusion weighting (b0 volumes).
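Counting volumes from a bval file can also be done with standard shell tools; a minimal sketch using a made-up example.bval (the real workshop file has 64 entries, one per volume):

```shell
# Illustrative sketch with a made-up example.bval (the real workshop
# file has 64 entries, one per volume).
printf '0 1500 1500 0 1500 1500 1500 0\n' > example.bval

total=$(wc -w < example.bval)                     # one entry per volume
b0s=$(tr ' ' '\n' < example.bval | grep -c '^0$') # entries equal to 0

echo "volumes: $total, b0 volumes: $b0s, diffusion-weighted: $((total - b0s))"
```

Running the same two commands on blip_down.bval would tell you the answer for the workshop data without counting by hand.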

Comparing diffusion-weighted volumes

Please compare this with the file you loaded into FSLeyes.

Now type:

cat blip_down.bvec

You should now see 3 separate rows of numbers representing the diffusion encoding directions (a 3x1 vector for each acquired volume; x, y, z directions); for volume 2 the diffusion encoding is represented by the vector [0.578, 0.671, 0.464].

Distortion correction

As explained in the lectures, diffusion imaging suffers from various distortions (susceptibility-, eddy-current- and movement-induced distortions). These need to be corrected before further analysis. The most noticeable geometric distortions are the susceptibility-induced distortions caused by field inhomogeneities, so we will have a closer look at these.

All types of distortion need correction during the pre-processing steps of diffusion imaging analysis. FSL includes two tools used for distortion correction, topup and eddy. Processing with these two tools is time- and compute-intensive. Therefore we will not run the distortion correction steps in the workshop but will instead explore some of the principles behind them.

Instead, you have been given distortion-corrected data for the further analysis steps: diffusion tensor fitting and probabilistic tractography.

First close the current image in FSLeyes ('Overlay' \u2192 'Remove') and load both uncorrected images (blip_up.nii.gz, blip_down.nii.gz) acquired with two opposing PE-directions (PE=phase encoding).

Compare the first volumes in each file. To do that you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).

The circled area indicates the differences in susceptibility-induced distortions between the two images acquired with two opposing PE-directions.

Now change the max. intensity to 400 and compare the third volumes in each file. Again, the circled area indicates the differences in distortions between the two images acquired with the two opposing PE-directions.

Finally, we will look at distortion corrected data. First close the current image ('Overlay' \u2192 'Remove').

Now in FSLeyes load data.nii.gz (the distortion-corrected diffusion image located inside the data subdirectory) and have a look at one of the non-diffusion-weighted and one of the diffusion-weighted volumes.

Comparing corrected to uncorrected diffusion-weighted volumes

Can you tell the difference in the corrected compared to the uncorrected diffusion images?

Further examining the difference between uncorrected and corrected diffusion data

In your own time (outside of this workshop, as part of independent study), load both the corrected and uncorrected data for p01 and compare them using the 'Volume' box or 'Movie' mode. Also explore the data in the p02 folder using the instructions above.

"},{"location":"workshop3/diffusion-intro/#creating-a-binary-mask-using-fsls-brain-extraction-tool","title":"Creating a binary mask using FSL's Brain Extraction Tool","text":"

In the next part of the workshop, we will look at FSL's Brain Extraction Tool (BET).

Brain extraction is a necessary pre-processing step, which removes non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for registration of functional MRI or diffusion scans to MNI space. BET can also be used to create binary brain masks (e.g., brain masks needed to run diffusion tensor fitting, DTIfit).

In this workshop we will look only at creating a binary brain mask, as required for DTIfit. In subsequent workshops we will look at using BET to remove non-brain tissue from diffusion and T1 scans ("skull-stripping") in preparation for registration.

First close FSLeyes and to make sure you do not have any processes running in the background, close your current terminal.

Open a new terminal window, navigate to the p02 subdirectory, and load FSL and FSLeyes again:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p02\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a \n

Now check the content of the p02 subdirectory by typing ls. You should get the response bvals, bvecs and data.nii.gz.

From data.nii.gz (the distortion-corrected 4D diffusion image) we will extract a single volume without diffusion weighting (e.g. the first volume). You can extract it using one of FSL's utility commands, fslroi.

What is fslroi used for?

fslroi is used to extract a region of interest (ROI) or a subset of the data from a larger 3D or 4D image file.

In the terminal, type:

fslroi data.nii.gz nodif 0 1

where data.nii.gz is the input image, nodif is the output file name, 0 is the index of the first volume to extract (volume numbering starts at 0) and 1 is the number of volumes to extract.

You should have a new file nodif.nii.gz (type ls to confirm) and can now create a binary brain mask using BET.

To do this, first open BET in terminal. You can open the BET GUI directly in a terminal window by typing:

Bet &

Or by running FSL in a terminal window and accessing BET from the FSL GUI. To do it this way, type:

fsl &

and then open the 'BET brain extraction tool' by clicking on it in the GUI.

In either case, once BET is open, click on 'Advanced options' and make sure the first two outputs ('Brain extracted image' and 'Binary brain mask') are selected, as below. Select the previously created nodif.nii.gz as the 'Input image' and change the 'Fractional intensity threshold' to 0.4. Then click the 'Go' button.

Completing BET in the terminal

After running BET you may need to hit return to get a visible prompt back after seeing "Finished" in the terminal!

You will see 'Finished' in the terminal when you are ready to inspect the results. Close BET, open FSLeyes and load three files (nodif.nii.gz, nodif_brain.nii.gz and nodif_brain_mask). Compare the files. To do that, you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).

The nodif_brain_mask is a single binarized image with ones inside the brain and zeroes outside the brain. You need this image both for DTIfit and tractography.

Comparing between BET and normal images

Can you tell the difference between nodif.nii.gz and nodif_brain.nii.gz? It might be easier to compare these images if you change max intensity to 1500 and nodif_brain colour to green.

"},{"location":"workshop3/diffusion-mri-analysis/","title":"Diffusion tensor fitting and Tract-Based Spatial Statistics","text":""},{"location":"workshop3/diffusion-mri-analysis/#diffusion-tensor-fitting-dtifit","title":"Diffusion tensor fitting (DTIfit)","text":"

Next, we will look at how to run diffusion tensor fitting and examine its results.

First close FSLeyes, and to make sure you do not have any processes running in the background, close the current terminal.

Open a new terminal window, navigate to the p01 subdirectory, load FSL and FSLeyes again, and finally open FSL (with & to background it):

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsl & \n

To run the diffusion tensor fit, you need 4 files as specified below:

  1. Distortion corrected diffusion data: data.nii.gz
  2. Binary brain mask: nodif_brain_mask.nii.gz
  3. Gradient directions: bvecs (text file with gradient directions)
  4. b-values: bvals (text file with list of b-values)

All these files are included inside the data subdirectory p01/data. You will later learn how to create a binary brain mask but first we will run DTIfit.

In the FSL GUI, first click on 'FDT diffusion', and in the FDT window, select 'DTIFIT Reconstruct diffusion tensors'. Now choose as 'Input directory' the data subdirectory located inside p01 and click 'Go'.

You should see something happening in the terminal and once you see 'Done!' you are ready to view the results.

Click 'OK' when the message appears.

Different ways of running DTIfit

Instead of running DTIfit by choosing the 'Input directory', you can also run it by specifying the input files manually. If you clicked this now, the fields would be auto-filled, but otherwise you would need to provide the inputs as below.

Running DTIfit in your own time

Please do NOT run it now, but instead try it in your own time with data in the p02 folder.

Finally, you can also run DTIfit directly from the terminal. To do this, you would type dtifit in the terminal, supplying its compulsory arguments:

Argument      Description
-k, --data    DTI data file
-o, --out     output basename
-m, --mask    BET binary mask file
-r, --bvecs   b-vectors file
-b, --bvals   b-values file

To run DTIfit from the terminal, you would need to navigate inside the subdirectory/folder with all the data and type the full dtifit command, specifying compulsory arguments as below:

dtifit --data=data --mask=nodif_brain_mask --bvecs=bvecs --bvals=bvals --out=dti

This command only works when run from inside the folder where all the data are located; otherwise you need to specify the full path to the data. Running dtifit from the command line is useful if you want to write a script, which we will look at in later workshops.
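As a sketch of what such a script might look like, the loop below assumes the workshop's folder layout (p01, p02) and only echoes each dtifit command so it can be checked without FSL loaded; drop the echo to actually run dtifit:

```shell
#!/bin/sh
# Loop dtifit over several participants, using full paths (hypothetical
# layout following the workshop's DTIfit directory structure).
base=/rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit
for subj in p01 p02; do
    d="$base/$subj/data"
    # echo first to inspect the commands; remove 'echo' to run for real
    echo dtifit --data="$d/data" --mask="$d/nodif_brain_mask" \
        --bvecs="$d/bvecs" --bvals="$d/bvals" --out="$d/dti"
done
```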

Running DTIfit from the terminal in your own time

Again, please do NOT run it now but try it in your own time with data in the p02 folder.

Running DTIfit produces several output files, as specified below. We will look more closely at some of these files in what follows.

All of these files should be located in the data subdirectory, i.e. within /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01/data/.

Output file       Description
dti_V1 (V2, V3)   1st, 2nd, 3rd eigenvector
dti_L1 (L2, L3)   1st, 2nd, 3rd eigenvalue
dti_FA            fractional anisotropy (FA) map
dti_MD            mean diffusivity (MD) map
dti_MO            mode of anisotropy (linear versus planar)
dti_S0            raw T2 signal with no diffusion weighting
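FA and MD are derived from the three eigenvalues estimated at each voxel: MD is their mean, and FA measures how much they deviate from that mean. A sketch of the standard definitions, using hypothetical eigenvalues roughly typical of a white-matter voxel (in mm²/s):

```shell
# Compute MD and FA from hypothetical eigenvalues l1 >= l2 >= l3.
awk -v l1=0.0017 -v l2=0.0003 -v l3=0.0002 'BEGIN {
    md  = (l1 + l2 + l3) / 3                      # mean diffusivity
    num = (l1-md)^2 + (l2-md)^2 + (l3-md)^2
    den = l1^2 + l2^2 + l3^2
    fa  = sqrt(1.5 * num / den)                   # fractional anisotropy, 0..1
    printf "MD = %.5f, FA = %.3f\n", md, fa
}'
# prints: MD = 0.00073, FA = 0.836
```

One large eigenvalue (strongly directional diffusion) gives FA close to 1; in CSF the three eigenvalues are similar but large, giving low FA and high MD, which is why the ventricles appear dark on the FA map but bright on the MD map.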

To view the results, first close the FSL GUI, then open FSLeyes and load the FA map ('File' → 'Add from file' → dti_FA).

Next add the principal eigenvector map (dti_V1) to your display ('File' \u2192 'Add from file' \u2192 dti_V1).

FSLeyes will open the dti_V1 image as a 3-direction vector image (RGB), with diffusion direction coded by colour. To display the standard PDD (principal diffusion direction) colour-coded orientation map (as below), you need to modulate the colour intensity with the FA map so that anisotropic voxels appear bright.

In the display panel, click on 'Settings' (the cog icon) and set 'Modulate by' to dti_FA.

Finally, compare the FA and MD maps (dti_FA and dti_MD). To do this, load the FA map and add the MD map. In contrast to the FA map, the MD map appears relatively uniform across grey and white matter, with the highest intensities in the CSF-filled ventricles, indicating higher diffusivity there; the same ventricles appear dark in the FA map.

Differences between the FA and MD maps

Why are there such differences?

"},{"location":"workshop3/diffusion-mri-analysis/#tract-based-spatial-statistics-tbss","title":"Tract-Based Spatial Statistics (TBSS)Tract-Based Spatial Statistics analysis pipeline","text":"

In the next part of the workshop, we will look at running TBSS, Tract-Based Spatial Statistics.

TBSS is used for a whole brain \u201cvoxelwise\u201d cross-subject analysis of diffusion-derived measures, usually FA (fractional anisotropy).

We will look at an example TBSS analysis of a small dataset consisting of FA maps from ten younger (y1-y10) and five older (o1-o5) participants. Specifically, you will learn how to run the second stage of TBSS analysis, \u201cvoxelwise\u201d statistics, and learn how to display results using FSLeyes. The statistical analysis that you will run aims to examine where on the tract skeleton younger versus older (two groups) participants have significantly different FA values.

Before that, let's briefly recap TBSS as it was covered in the lecture.

The steps for Tract-Based Spatial Statistics are:

  1. Fitting the diffusion tensor (DTIfit)
  2. Alignment of all study participants\u2019 FA maps to standard space using non-linear registration
  3. Merging all participants\u2019 nonlinearly aligned FA maps into a single 4D image file and creating the mean FA image
  4. FA \u201cskeletonization\u201d (the mean FA skeleton representing the centres of major tracts specific to all participants is created)
  5. Each participant\u2019s aligned FA map is then projected back onto the skeleton prior to statistical analysis
  6. Hypothesis testing (voxelwise statistics)

To save time, some of the pre-processing stages including generating FA maps (tensor fitting), preparing data for analysis, registration of FA maps and skeletonization have been run for you and all outputs are included in the data folder you have copied at the start of this workshop.

You will only run the TBSS statistical analysis to explore group differences in FA values based upon age (younger versus older participants).

First close FSLeyes (if you still have it open) and make sure that you do not have any processes running in the background by closing your current terminal.

Then open a new terminal window, navigate to the subdirectory where pre-processed data are located and load both FSL and FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a \n

Once you have loaded all the required software, we will start with exploring the pre-processed data. If you correctly followed the previous steps, you should be inside the subdirectory TBSS_analysis_p2. Confirm that, and then check the content of that subdirectory by typing:

pwd (answer /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/)

ls (you should see 3 data folders listed: FA, origdata, stats)

First, we need to check that all the pre-processing steps have been run correctly and that we have all the required files.

Navigate inside the stats folder and check the files inside by typing in your terminal:

cd stats\nls\n

You should find the files listed below inside.

Exploring the data

If this is the case, open FSLeyes and explore these files one by one to make sure you understand what each represents. You might need to change the colour to visualise some image files.

Remember to ask for help!

If you are unsure about something, or need help, please ask!

Once you have finished taking a look, close FSLeyes.

Before using the General Linear Model (GLM) GUI to set up the statistical model, you need to determine the order in which participants' files were entered into the single 4D skeletonised file (i.e., the data order in the all_FA_skeletonised file). The easiest way to determine the order of participants in the final 4D file is to check the order in which FSL lists the pre-processed FA maps inside the FA folder. You can do this in the terminal with the commands below:

cd .. \ncd FA \nimglob *_FA.*\n

You should see data from the 5 older participants (o1-o5) followed by data from the 10 younger participants (y1-y10).

Next navigate back to the stats folder and open FSL:

cd ..\ncd stats\nfsl &\n

Click on 'Miscellaneous tools' and select 'GLM Setup' to open the GLM GUI.

In the workshop we will set up a simple group analysis (a two sample unpaired t-test).

How to set up more complex models

For information on how to set up more complex models using the GUI, see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/glm

In the 'GLM Setup' window, change 'Timeseries design' to 'Higher-level/non-timeseries design' and '# inputs' to 15.

Then click on 'Wizard' and select 'two groups, unpaired' and set 'Number of subjects in first group' to 5. Then click 'Process'.

In the 'EVs' tab, name 'EV1' and 'EV2' as per your groups (old, young).

In the contrast window set number of contrasts to 2 and re-name them accordingly to the image below:

(C1: old > young, [1 -1]) (C2: young > old, [-1 1])
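Schematically, the wizard builds the following design for the 15 subjects (ordered as in all_FA_skeletonised, o1-o5 then y1-y10); the GLM GUI saves this as design.mat and design.con in FSL's own text format:

```
EV1 (old)  EV2 (young)      one row per subject
    1          0            x 5   (o1-o5)
    0          1            x 10  (y1-y10)

Contrast 1 (old > young):   1  -1
Contrast 2 (young > old):  -1   1
```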

Click 'View Design', close the image, then go back to the GLM Setup window and save your design with the filename design. Click 'Exit' and close FSL.

To run the TBSS statistical analysis, we use FSL's randomise tool.

FSL's randomise

Randomise is FSL's statistical analysis tool for nonparametric permutation inference on various types of neuroimaging data. For more information, see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/randomise

The basic command line to use this tool is:

randomise -i <input> -o <output> -d <design.mat> -t <design.con> [options]

You can explore the available options by typing randomise (with no arguments) in your terminal.

The basic command line to use randomise for TBSS is below:

randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 500 --T2

Check that you are inside the stats folder, then run the command above in the terminal to run your TBSS group analysis.

The elements of this command are explained below:

Argument   Description
-i         input image (4D skeletonised data)
-o         output basename
-m         mask
-d         design matrix
-t         design contrasts
-n         number of permutations
--T2       TFCE (threshold-free cluster enhancement), optimised for the 2D TBSS skeleton

Why so few permutations?

To save time we only run 500 permutations; the appropriate number varies depending on the type of analysis, but it is usually between 5,000 and 10,000 or higher.

The output from randomise will include two raw (unthresholded) tstat images, tbss_tstat1 and tbss_tstat2.

The TFCE p-value images (fully corrected for multiple comparisons across space) will be tbss_tfce_corrp_tstat1 and tbss_tfce_corrp_tstat2.

Based on your design set-up, contrast 1 gives the old > young test and contrast 2 gives the young > old test. The contrast likely to give significant results is the second one, i.e. we expect higher FA in younger participants (due to the age-related decline in FA).

To check this, use FSLeyes to view the results of your TBSS analysis. Open FSLeyes, load mean_FA plus the mean_FA_skeleton template, and add the TFCE-corrected tstat2 image to your display:

  1. 'File' → 'Add from file' → mean_FA.nii.gz
  2. 'File' → 'Add from file' → mean_FA_skeleton.nii.gz (change the colour map from greyscale to green)
  3. 'File' → 'Add from file' → tbss_tfce_corrp_tstat2.nii.gz (change the colour map from greyscale to red-yellow, and set 'Max' to 1 and 'Min' to 0.95 or 0.99)

Please note that the TFCE-corrected p-value images actually store 1 - p for convenience of display, so thresholding at 0.95 shows clusters significant at corrected p < 0.05, and 0.99 shows clusters significant at corrected p < 0.01.
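In other words, the display threshold for a desired corrected p-value is simply 1 - p:

```shell
# Convert corrected p-value cut-offs into 'Min' display thresholds for
# the tbss_tfce_corrp_* images (which store 1 - p).
for p in 0.05 0.01; do
    awk -v p="$p" 'BEGIN { printf "p < %g  ->  threshold at %.2f\n", p, 1 - p }'
done
```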

You should see the same results as below:

Interpreting the results

Are the results as expected? Why/why not?

Reviewing the tstat1 image

Next review the image tbss_tfce_corrp_tstat1.nii.gz.

Further information on TBSS

More information on TBSS can be found in the 'TBSS' section of the FSL Wiki: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/tbss

"},{"location":"workshop3/workshop3-intro/","title":"Workshop 3 - Basic diffusion MRI analysis","text":"

Welcome to the third workshop of the MRICN course! Prior lectures in the module introduced you to the basics of diffusion MRI and its applications, including data acquisition, the theory behind diffusion tensor imaging, and the use of tractography to study structural connectivity. The aim of the next two workshops is to introduce you to some of the core FSL tools used for diffusion MRI analysis.

Specifically, we will explore different elements of FMRIB's Diffusion Toolbox (FDT) to walk you through the basic steps of diffusion MRI analysis. We will also cover the use of the Brain Extraction Tool (BET).

By the end of the two workshops, you should understand the principles of correcting for distortions in diffusion MRI data, how to run and explore the results of a diffusion tensor fit, and how to run a whole-brain group analysis and probabilistic tractography.

Overview of Workshop 3

Topics for this workshop include:

We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site Visit). We will not go into detail as to why and how specific sequence parameters and default settings have been chosen. Some values should be clear to you from the lectures or from the assigned readings on Canvas; please check there or, if you are still unclear, feel free to ask.

Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.

A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 03 workshop materials.

"},{"location":"workshop4/probabilistic-tractography/","title":"Probabilistic Tractography","text":"

In the first part of the workshop, we will look again at BET, FSL's Brain Extraction Tool.

Brain extraction is a necessary pre-processing step which allows us to remove non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for the registration of functional MRI or diffusion scans to MNI space. BET can also be used to create binary brain masks (e.g., brain masks needed to run diffusion tensor fitting, DTIfit).

"},{"location":"workshop4/probabilistic-tractography/#skull-stripping-our-data-using-bet","title":"Skull-stripping our data using BET","text":"

In this workshop we will first look at a very simple example of removing non-brain tissues from diffusion and T1 scans (\u201cskull-stripping\u201d) in preparation for the registration of diffusion data to MNI space.

Log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours).

In your session, open a new terminal window and navigate to the diffusionMRI data in your MRICN folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI [where xxx = your ADF username]

In case you missed the previous workshop

You were instructed to copy the diffusionMRI data in the previous workshop. If you have not completed last week's workshop, you either need to find details on how to copy the data in the 'Workshop 3: Basic diffusion MRI analysis' materials or work with someone who has completed the previous workshop.

Then load FSL and FSLeyes:

module load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\n

We will now look at how to "skull-strip" the T1 image (i.e., remove the skull and non-brain areas); this step is needed for the registration step in both fMRI and diffusion MRI analysis pipelines.

We will do this using BET on the command line. The basic command-line version of BET is:

bet <input> <output> [options]

In this workshop we will look at a simple brain extraction i.e., performed without changing any default options.

To do this, navigate inside the p01 folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01

Then in your terminal type:

bet T1.nii.gz T1_brain

Once BET has completed (should only take a few seconds at most), open FSLeyes (with & to background it). Then in FSLeyes:

You will likely find that in this case the default brain extraction was good. The reason for such a good extraction with the default options is the small FOV and data from a young, healthy adult. This is not always the case, e.g. when we have a large FOV or data from older participants.

More brain extraction to come? You BET!

In the next workshop (Workshop 5) we will explore different BET options and how to troubleshoot brain extraction.

"},{"location":"workshop4/probabilistic-tractography/#preparing-our-data-with-bedpostx","title":"Preparing our data with BEDPOSTX","text":"

BEDPOSTX is an FSL tool used for a step in the diffusion MRI analysis pipeline, which prepares the data for probabilistic tractography. BEDPOSTX (Bayesian Estimation of Diffusion Parameters Obtained using Sampling Techniques, X = modelling Crossing Fibres) estimates fibre orientation in each voxel within the brain. BEDPOSTX employs Markov Chain Monte Carlo (MCMC) sampling to reconstruct distributions of diffusion parameters at each voxel.

We will not run it during this workshop as it takes a long time. The data have been processed for you, and you copied them at the start of the previous workshop.

To run it, you would open the FSL GUI, click on 'FDT diffusion' and select 'BEDPOSTX' from the drop-down menu. The input directory must contain the distortion-corrected diffusion file (data.nii.gz), the binary brain mask (nodif_brain_mask.nii.gz) and the two text files with b-values (bvals) and gradient orientations (bvecs).

Because the data used in this workshop were acquired with a single b-value, we need to specify the single-shell model.

After the workshop, in your own time, you could run it using the provided data (see Tractography Exercises section at the end of workshop notes).

BEDPOSTX outputs a directory at the same level as the input directory called [inputdir].bedpostX (e.g. data.bedpostX). It contains various files (including mean fibre orientation and diffusion parameter distributions) needed to run probabilistic tractography.

As we will look at tractography in different spaces, we also need the output from registration. The concept of different image spaces has been introduced in Workshop 2. The registration step can be run from the FDT diffusion toolbox after BEDPOSTX has been run. Typically, registration will be run between three spaces:

  1. Diffusion space (nodif_brain image)
  2. Structural space (T1 image for the same participant)
  3. Standard space (the MNI152 template)

This step has again been run for you. To run it, you would open the FSL GUI, click on 'FDT diffusion' and select 'Registration' from the drop-down menu. The main structural image would be your "skull-stripped" T1 (T1_brain) and the non-betted structural image would be T1. You also need to select data.bedpostX as the 'BEDPOSTX directory'.

After the workshop, you can try running it in your own time (see Tractography Exercises section at the end of workshop notes).

Registration output directory

The outputs from registration needed for probabilistic tractography are stored in the xfms subdirectory.

"},{"location":"workshop4/probabilistic-tractography/#probabilistic-tractography-using-probtrackx","title":"Probabilistic tractography using PROBTRACKX","text":"

PROBTRACKX (probabilistic tracking with crossing fibres) is an FSL tool used for probabilistic tractography. To run it, open the FSL GUI, click on 'FDT diffusion' and select 'PROBTRACKX' from the drop-down menu (it should default to it).

PROBTRACKX can be used to run tractography either in diffusion or non-diffusion space (e.g., standard or structural). If running it in non-diffusion space you will need to provide a reference image. You can also run tractography from a single seed (voxel), single mask (ROI) or from multiple masks which can be specified in either diffusion or non-diffusion space.

We will look at some examples of different ways of running tractography.

First close any processes still running and open a new terminal. Next navigate inside where all the files to run tractography have been prepared for you:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01

As you may recall, there are different versions of FSL available on BlueBEAR. These correspond to different FSL software releases and have been compiled in different ways; the different versions are suitable for different purposes, i.e. used for different MRI data analyses.

To run BEDPOSTX and PROBTRACKX, you need to use a specific version of FSL (FSL 6.0.7.6), which you can load by typing in your terminal:

module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n

Once you have loaded FSL using these commands, open the FDT toolbox from either the FSL GUI or directly typing in your terminal:

Fdt &

We will start with tractography from a single voxel in diffusion space. Specifically, we will run it from a voxel with coordinates [47, 37, 29] located within the forceps major of the corpus callosum, a white matter fibre bundle which connects the occipital lobes.

Running tractography on another voxel

Later, you can use the FA map (dti_FA inside the p01/data folder) loaded to FSLeyes to check the location of the selected voxel, choose another voxel within a different white matter pathway, and run the tractography again.

You should have the FDT Toolbox window open as below:

From here do the following:

  1. Select data.bedpostX as the 'BEDPOSTX directory'
  2. Enter voxel coordinates [47, 37, 29] (X, Y, Z)
  3. Enter the output file name 'corpus' - this will be the name of the directory that contains the output files
  4. Press 'Go' (you will see activity in the terminal; once the 'Done!' window appears, you are ready to view the results. Click 'OK' before proceeding)

After the tractography has finished, check the contents of the corpus subdirectory containing the tractography output files. It should contain:

We will explore the results later. First, you will learn how to run tractography in the standard (MNI) space.

Close FDT toolbox and then open it again from the terminal to make sure you don\u2019t have anything running in the background.

We will now run tractography using a combination of masks (ROIs) in standard space to reconstruct the tracts connecting the right motor thalamus (the portion of the thalamus involved in motor function) with the right primary motor cortex. The ROI masks have been prepared for you and put inside the masks subdirectory ~/diffusionMRI/tractography/masks. The ROIs were created using FSL's atlas tools (you learnt how to do this in a previous workshop) and are in standard/MNI space; thus we will run the tractography in MNI (standard) space and not in diffusion space.

This is the most typical design of tractography studies.

In the FDT Toolbox window - before you select your input in the 'Data' tab - go to the 'Options' tab (as below) and reduce the number of samples to 500 under 'Options'. You would normally run 5000 (default) but reducing this number will speed up processing and is useful for exploratory analyses.

Now going back to the 'Data' tab (as below) do the following:

  1. Select data.bedpostX as 'BEDPOSTX directory'
  2. In 'Seed Space', change 'Single voxel' to 'Single mask'
  3. As 'Seed Image' choose Thalamus_motor_RH.nii.gz from the masks subdirectory
  4. Tick both 'Seed space is not diffusion' and 'nonlinear'
  5. You have to use the warp fields between standard space and diffusion space created during registration, which are inside the data.bedpostX/xfms directory. Select standard2diff_warp as the 'Seed to diff transform' and diff2standard_warp as the 'diff to Seed transform'.
  6. As a waypoint mask, choose cortex_M1_right.nii.gz from the masks subdirectory to isolate only those tracts that reach M1 from the motor thalamus. Use this mask also as a termination mask to avoid tracking into other parts of the brain.
  7. Enter output file name MotorThalamusM1
  8. Press Go!

Specifying masks

Without the waypoint and termination masks, you would also get other tracts passing through the motor thalamus, including random offshoots with low probability (noise). This is expected for probabilistic tractography: random sampling without constraining the direction can produce spurious offshoots into nearby tracts, giving low-probability noise.

It will take significantly longer this time to run the tractography in standard space. However, once it has finished, you will see the window 'Done!/OK'. Before proceeding, click 'OK'.

A new subdirectory will be created with the chosen output name MotorThalamusM1. Check the contents of this subdirectory. It contains slightly different files compared to the previous tractography output. The main output, the streamline density map, is called fdt_paths.nii.gz. There is also a file called waytotal that contains the total number of valid streamlines.

We will now explore the results from both tractography runs. First close FDT and your terminal as we need FSLeyes, which cannot be loaded together with the current version of FSL.

Next navigate inside where all the tractography results have been generated and load/open FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

We will start with our results from tractography in seed space. In FSLeyes, do the following:

  1. Load the FA map (~/diffusionMRI/tractography/p01/data/dti_FA.nii.gz) and tractography output file (~/corpus/corpus_47_37_29.nii.gz)
  2. Change the colour of tractography output to\u00a0'Red-Yellow'
  3. Change the 'Min' display threshold to 50 to remove voxels with a low probability of being in the tract. The displayed values denote the number of streamlines running through each voxel; with the default settings, 5000 samples are generated per seed, so 50 corresponds to a 1% probability. This means that the voxels hidden when 'Min' is set to 50 have a probability of being part of the tract of less than 1%.
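The streamline-count-to-probability conversion described in step 3 is simply the count divided by the number of samples (a sketch; note that if you ran with 500 samples instead of the default 5000, the same count of 50 would correspond to 10%):

```shell
# Probability that a voxel is part of the tract, given the number of
# streamlines through it and the number of samples drawn from the seed.
awk -v count=50 -v samples=5000 \
    'BEGIN { printf "%.1f%% probability\n", 100 * count / samples }'
# prints: 1.0% probability
```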

Your window should look like this:

Once you have finished reviewing the results of tractography in seed space, close the results ('Overlay \u2192 Remove all').

We will now explore the results from our tractography run in MNI space, but to do so we need a standard template. Assuming you have closed all previous images:

  1. Load in the MNI template (~/diffusionMRI/tractography/MNI152T1_brain.nii.gz) and the tractography output file (/MotorThalamusM1/fdt_paths.nii.gz)
  2. Change the colour of tractography output to\u00a0'Red-Yellow'
  3. You might want to add/load the ROI masks ('motor thalamus' and 'M1')
  4. Adjust the min and max display thresholds to explore the reconstructed tract; as before, setting 'Min' to 50 removes voxels with a low probability of being in the tract. There is no gold standard for thresholding tractography outputs: the appropriate threshold depends on the study design, the parameter set-up, and any further analysis.

Tractography exercises

In your own time, you should try the exercises below to consolidate your tractography skills. If you have any problems completing or any further questions, you can ask for help during one of the upcoming workshops.

Help and further information

As always, more information on diffusion analyses in FSL can be found in the 'diffusion' section of the FSL Wiki and in this practical course run by FMRIB (the creators of FSL).

"},{"location":"workshop4/workshop4-intro/","title":"Workshop 4 - Advanced diffusion MRI analysis","text":"

Welcome to the fourth workshop of the MRICN course!

In the previous workshop we started exploring different elements of FMRIB's Diffusion Toolbox (FDT). This week we will continue with the different applications of the FDT toolbox and the use of the Brain Extraction Tool (BET).

Overview of Workshop 4

Topics for this workshop include:

We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the assigned readings on Canvas; please check there, or if you are still unclear, feel free to ask.

Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.

In this workshop we will follow basic steps in the diffusion MRI analysis pipeline, specifically with running tractography. The instructions here are specific to tools available in FSL. Other neuroimaging software packages can be used to perform similar analyses.

Example of Diffusion MRI analysis pipeline

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 04 workshop materials.

"},{"location":"workshop5/first-level-analysis/","title":"Running the first-level fMRI analysis","text":"

We are now ready to proceed with running our fMRI analysis. We will start with the first dataset (first participant /p01), and our first step will be to skull-strip the data using BET. By now you should be able not only to run BET but also to troubleshoot a poor BET result, i.e., use different methods to run BET.

The p01 T1 scan was acquired with a large field-of-view (FOV) (you can check this using FSLeyes; it is generally good practice to explore the data before starting any analysis, especially if you were not the person who acquired it). Therefore, we will apply an appropriate method using BET, as per the example we explored earlier. This is likely to be the right method for all datasets in the /recon folder, but please check.

Open a terminal and use the commands below to skull-strip the T1:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01\nmodule load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a \nimmv T1 T1neck \nrobustfov -i T1neck -r T1 \nbet T1.nii.gz T1_brain -R \n

Remember that:

It is very important that after running BET you examine, using FSLeyes, the quality of the brain extraction performed on each and every T1.

A poor brain extraction will affect the registration of the functional data into MNI space giving a poorer quality of registered image. This in turn will mean that the higher-level analyses (where functional data are combined in MNI space) will be less than optimal. It will then be harder to detect small BOLD signal changes in the group.

Re-doing inaccurate BETs

Whenever the BET process is unsatisfactory you will need to go back and redo the individual BET extraction by hand, by tweaking the \u201cFractional intensity threshold\u201d, the advanced option for the centre coordinates, and/or the \u201cThreshold gradient\u201d.

You should still be inside the /p01 folder; please rename the fMRI scan by typing:

immv fs005a001 fmri1

"},{"location":"workshop5/first-level-analysis/#setting-up-and-running-the-first-level-fmri-analysis-using-feat","title":"Setting up and running the first-level fMRI analysis using FEAT","text":"

We are now ready to proceed with our fMRI data analysis. To do that we will need a different version of FSL installed on BlueBEAR. Close your terminal and again navigate inside the p01 folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01

Now load FSL using the commands below:

module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n

Finally, open FEAT (from the FSL GUI or by typing Feat & in a terminal window).

On the menus, make sure 'First-level analysis' and 'Full analysis' are selected. Now work through the tabs, setting and changing the values for each parameter as described below. Try to understand how these settings relate to the description of the experiment as provided at the start.

Misc Tab

Accept all the defaults.

Data Tab

Input file

The input file is the 4D fMRI data (the functional data for participant 1 should be called something like fmri1.nii.gz if you renamed it as above). Select this using the 'Select 4D data' button. Note that once you have selected the input, 'Total volumes' should jump from 0 to the number of volumes acquired.

Total volumes troubleshooting

If \u201cTotal volumes\u201d is still set to 0, or jumps to 1, you have done something wrong. If you get this, stop and fix the error at this point. DO NOT CARRY ON. If \u201cTotal volumes\u201d is still set to 0, that means you have not yet selected any data. Try again. If \u201cTotal volumes\u201d is set to 1, that means you have most likely selected the T1 image, not the fMRI data. Try again, but selecting the correct file.

Check carefully at this point that the total number of volumes is correct (93 volumes were collected on participants 1-2, 94 volumes on participants 3-15).
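As a sanity check you can also verify the volume count from the terminal, either before or after loading the data into FEAT, using FSL's fslnvols tool (which prints the number of volumes in a 4D file). The helper function below is purely illustrative — its name is ours, and the hard-coded counts come from the note above:

```shell
# Expected volume counts for this dataset: participants 1-2 were
# acquired with 93 volumes, participants 3-15 with 94.
expected_volumes() {   # usage: expected_volumes p07
  case "$1" in
    p01|p02)          echo 93 ;;
    p0[3-9]|p1[0-5])  echo 94 ;;
    *) echo "unknown participant: $1" >&2; return 1 ;;
  esac
}

expected_volumes p02   # 93
expected_volumes p07   # 94

# Compare with what FEAT reports, or check directly (requires FSL loaded):
# fslnvols fmri1.nii.gz
```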

Output directory

Enter a directory name in the output directory field. This needs to be something systematic that you can use for all the participants and which is comprehensible; it needs to make sense to you when you look at it again a year or more from now. It is important to use full path names, not shortened or partial ones, and not to put any spaces in the filenames you use. Otherwise, some programs may crash with errors that do not seem to make much sense.

For example, use an output directory name like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1

where:

Note that when FEAT is eventually run this will automatically create a new directory called /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat for you containing the output of this particular analysis. If the directory structure does not exist, FEAT will try and make it. You do not need to make it yourself in advance.

Repetition Time (TR)

For this experiment make sure that the TR is set to 2.0s. If FEAT can read the TR from the header information it will try to set it automatically. If not, you will need to set it manually.

High pass filter cutoff

Set 'High pass filter cutoff' to 60 sec, i.e. 50% greater than the OFF+ON block length (20 s + 20 s = 40 s; 1.5 x 40 s = 60 s).

Pre-stats

Set the following:

Stats

Set the following:

Select the 'Full model setup' option (indicated by the blue arrow above); then, on the 'EVs' tab:

On the Contrasts Tab:

Check the plot of the design that will be generated and then click on the image to dismiss it.

Post-stats

Change the 'Thresholding' pull down option to be of type 'Uncorrected' and leave the P threshold value at p<0.05.

Thresholding and processing time

Note that this is not the thresholding you will want at the final (third) stage of processing (where you will probably want 'Cluster thresholding'), but for convenience in this workshop it will speed up the processing per run.

Registration

Set the following:

The model should now be set up with all the correct details and be ready to be analyzed.

Hit the GO button!

Running FSL on BlueBEAR

FSL jobs are now submitted in an automated way to a back-end high performance computing cluster on BlueBEAR for execution. Processing time for this analysis will vary but will probably be about 5 mins per run.

"},{"location":"workshop5/first-level-analysis/#monitoring-and-viewing-the-results","title":"Monitoring and viewing the results","text":"

FEAT has a built-in progress watcher, the 'FEAT Report', which you can open in a web browser.

To do that, you need to navigate inside the p01_s1.feat folder from the BlueBEAR Portal as below and from there select the report.html file, and either open it in a new tab or in a new window.

Watch the webpage for progress. Refresh the page to update and click the links (the tabs near the top of the page) to see the results when available (the 'STILL RUNNING' message will disappear when the analysis has finished).

Example FEAT Reports for processes that are still running, and which have completed.

After it has completed, first look at the webpage, click on the various links and try to understand what each part means.

Now let's use FSLeyes to look at the output in more detail. To do that you will need to open a separate terminal and load FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

Open the p01_s1.feat folder and select the filtered_func_data (this is the fMRI data after it has been preprocessed by motion correction etc).

Put FSLeyes into movie mode and see if you can identify areas that change in activity.

Now, add the thresh_zstat1 image and try to identify the time course of the stimulation in some of the most highly activated voxels. You should remember how to complete the above tasks from previous workshops. You can also use the \u201ccamera\u201d icon to take a snapshot of the results.

Let's have a look and see the effects that other parameters have on the data. To do this, do the following steps:

Note that each time you rerun FEAT, it creates a new folder with a '+' sign in the name. So you will have folders rather messily named p01_s1.feat, p01_s1+.feat, p01_s1++.feat, and so on. This is rather wasteful of your precious quota space, so you should delete unnecessary ones after looking at them.

For example, if you wanted to remove all files and directories that end with '+' for participant 1:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/ \nrm -rf *+\n

You might also want to rename the previous output directory to something more meaningful, to make it obvious which parameter was changed, e.g. p01_s1_motion_off.feat.

For participant 2, you will need to repeat the main steps above:

To rerun a FEAT analysis, rather than re-entering all the model details:

Now change the input 4D file, the output directory name, and the registration details (the BET'ed reoriented T1 for participant 2), and hit 'Go'.

Design files

You can also save the design files (design.fsf) using the 'Save' button on the FEAT GUI. You can then edit this in a text editor, which is useful when running large group studies. You can also run FEAT from the command line, by giving it a design file to use e.g., feat my_saved_design.fsf. We will take a look at modifying design.fsf files directly in the Functional Connectivity workshop.
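As a sketch of how such an edit can be automated, the participant ID embedded in the design file's paths can be swapped with sed, and the edited copy passed to feat. The function name below is ours, and the feat_files line merely mimics the Tcl-style entries found in a saved design.fsf — check the actual contents of your own file first:

```shell
# Swap the participant ID in a line (or whole file) of FEAT design settings.
swap_participant() {   # usage: swap_participant "<line>" <new-id>
  printf '%s\n' "$1" | sed "s/p01/$2/g"
}

line='set feat_files(1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01/fmri1"'
swap_participant "$line" p02
# prints the same line with p01 replaced by p02

# Applied to a whole saved design file, then run (requires FSL for feat):
# sed 's/p01/p02/g' my_saved_design.fsf > design_p02.fsf
# feat design_p02.fsf
```

This only works safely if every occurrence of "p01" in the file really is the participant ID, which is why full, systematic path names matter.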

Running a first-level analysis on the remaining participants

In your own time, you should analyse the remaining participants as above.

Remember:

There are therefore 29 separate analyses that need to be done.

Analyze each of these 29 fMRI runs independently and put the output of each one into a separate, clearly labelled directory as suggested above.

Try and get all these done before the next fMRI workshop in week 10 on higher level fMRI analysis as you will need this processed data for that workshop. You have two weeks to complete this task.

Scripting your analysis

It would be laborious to re-enter and re-run 29 separate FEAT analyses by hand; a much quicker way is to script the analyses using bash. If you would like, try scripting your analyses! We will learn more about bash scripting in the next workshop.

As always, help and further information are also available in the relevant section of the FSL Wiki.

"},{"location":"workshop5/preprocessing/","title":"Troubleshooting brain extraction with BET","text":"

In the first part of the workshop, you will learn how to properly skull-strip T1 scans using FSL's Brain Extraction Tool (BET), including troubleshooting techniques for problematic cases, as well as how to organize neuroimaging data files through proper naming conventions.

Background and set-up

The data we will be using were collected from 15 participants scanned with the same experimental protocol on the Philips 3T scanner (our old scanner).

The stimulus protocol was a visual checkerboard reversing at 2 Hz (i.e., 500 ms between each reversal), presented in alternating blocks (20 s active \u201con\u201d checkerboard, 20 s grey screen \u201coff\u201d), starting and finishing with \u201coff\u201d and including 4 blocks of \u201con\u201d (i.e., off, on, off, on, off, on, off, on, off), giving 180 s in total.

A few extra seconds of \u201coff\u201d (6-8s) were later added at the very end of the run to match the number of volumes acquired by the scan protocol.

Normally in any experiment it is very important to keep all the protocol parameters fixed when acquiring the neuroimaging data. However, in this case we can see different parameters being used which reflect slightly different \u201cbest choices\u201d made by different operators over the yearly demonstration sessions:

Sequence order

Note that sometimes the T1 was the first scan acquired after the planning scan, sometimes it was the very last scan acquired.

Now that we know what the data is, let's start our analyses. Log in to the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do this by now from previous workshops.

Open a new terminal window and navigate to your MRICN project folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx (where xxx = your ADF username)

Please check that you are in the correct directory by typing pwd. This should return:

/rds/projects/c/chechlmy-chbh-mricn/xxx (where xxx = your login username)

You now need to create a copy of the reconstructed fMRI data to be analysed during the workshop but in your own MRICN folder. To do this, in your terminal type:

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/recon/ .

Be patient, as this might take a few minutes to copy over. In the meantime, we will revisit BET and learn how to troubleshoot the often problematic process of 'skull-stripping'.

"},{"location":"workshop5/preprocessing/#skull-stripping-t1-scans-using-bet-on-the-command-line","title":"Skull-stripping T1 scans using BET on the command-line","text":"

We will now look at how to \u201cskull-strip\u201d the T1 image (remove the skull and non-brain areas), as this step is needed as part of the registration stage in the fMRI analysis pipeline. We will do this using FSL's BET on the command line. As you should know from previous workshops, the basic command-line version of BET is:

bet <input> <output> [options]

where:

We will first explore the different options and how to troubleshoot brain extraction.

If the fMRI data has finished copying over, you can use the same terminal which you previously opened. If not, keep that terminal open and instead open a new terminal, navigating inside your MRICN project folder (i.e., /rds/projects/c/chechlmy-chbh-mricn/xxx).

Next you need to copy the data for this part of the workshop. As there is only 1 file, it will not take a long time.

Type:

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/BET/ .

And then load FSL and FSLeyes by typing:

module load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\n

After this, navigate inside the copied BET folder and type:

bet T1.nii.gz T1_brain1

Open FSLeyes (fsleyes &), and when this is open, load up the T1 image, and add the T1_brain1 image. Change the colour for the T1_brain1 to Red.

This will likely show that the default brain extraction was not very good and included non-brain matter. It may also have cut into the brain, so some of the cortex is missing. The reason behind the poor brain extraction is the large field-of-view (FOV), which results in the head plus a large amount of neck being present.

There are different ways to fix a poor BET output, i.e., problematic \u201cskull-stripping\u201d.

First of all, you can use the -R option.

This option is used to run BET in an iterative fashion which allows it to better determine the centre of the brain itself.

In your terminal type:

bet T1.nii.gz T1_brain2 -R

Running BET recursively from the BET GUI

Instead of using the bet -R command from the terminal, you can also use the BET GUI. To run it this way, you would need to select the processing option \u201cRobust brain centre estimation (iterates bet2 several times)\u201d from the pull down menu.

You will find that running BET with the -R option takes longer than before because of the extra iterations. Reload the newly extracted brain (T1_brain2) into FSLeyes and check that the extraction now looks improved.

In the case of T1 images with a large FOV, you can first crop the image (to remove a portion of the neck) and run BET again. To do that you need to use the robustfov command before applying BET. But first rename the original image.

Type in your terminal:

immv T1 T1neck \nrobustfov -i T1neck -r T1 \nbet T1.nii.gz T1_brain3 -R \n

Reload the newly extracted brain (T1_brain3) into FSLeyes and compare it to T1_brain1 to check that the extraction looks improved. Also compare the cropped T1 image to the original one with the large FOV (T1neck).

Another option is to keep the large FOV and set the initial centre by hand via the -c option on the command line. To do that, you first need to examine the T1 scan in FSLeyes to get a rough estimate (in voxels) of where the centre of the brain is.
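A sketch of the -c form follows; the voxel coordinates below are made-up placeholders, so substitute the ones you read off the FSLeyes location panel. The command is only echoed here as a dry run, so you can inspect it before running the real call with FSL loaded:

```shell
# Hypothetical centre-of-brain voxel coordinates read from FSLeyes.
cx=95; cy=130; cz=85
echo "bet T1.nii.gz T1_brain4 -c ${cx} ${cy} ${cz}"
# bet T1.nii.gz T1_brain4 -c "${cx}" "${cy}" "${cz}"   # uncomment to run
```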

There is another BET option which can improve \u201cskull-stripping\u201d: the fractional intensity threshold, which by default is set to 0.5. You can change it to any value between 0 and 1. Smaller values give larger brain outline estimates (and vice versa), so you can make it smaller if you think that too much brain tissue has been removed.

To use it, you would need to use the -f option (e.g., bet T1.nii.gz T1_brain -f 0.3).

Changing the fractional intensity

In your own time (after the workshop) you can check the effect of changing the fractional intensity threshold to 0.1 and 0.9 (however make sure you name the outputs accordingly, so you know which one is which).
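One way to organise that comparison is a small loop that encodes the -f value in each output name (the naming scheme here is just a suggestion). The bet calls are echoed first as a dry run; uncomment the real call once FSL is loaded:

```shell
# Try several fractional intensity thresholds, one named output per value.
for f in 0.1 0.3 0.5 0.9; do
  out="T1_brain_f${f/./}"   # 0.1 -> T1_brain_f01, 0.9 -> T1_brain_f09
  echo "bet T1.nii.gz ${out} -f ${f}"
  # bet T1.nii.gz "${out}" -f "${f}"   # uncomment to run
done
```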

It is very important that after running BET you always examine (using FSLeyes) the quality of the brain extraction process performed on each and every T1.

The strategy you need may differ between participants in the same study, and you might need to try different options. The general recommendation is to combine cropping (if needed) with the -R option. However, this may not work for all T1 scans; some types of T1 scan work better with one strategy than another. It is therefore good to always try a range of options.

Now you should be able to \u201cskull-strip\u201d T1 scans as needed for fMRI analyses!

"},{"location":"workshop5/preprocessing/#exploring-the-data-and-renaming-the-mri-scan-files","title":"Exploring the data and renaming the MRI scan files","text":"

By now you should have a copy of the reconstructed fMRI data in your own folder. As described above, the /recon version of the directory contains the MRI data from 15 participants acquired over several years from various site visits.

The datasets have been reconstructed into the NIFTI format. The T1 images in each directory are named T1.nii.gz. The first (planning) scan sequences (localisers) have been removed in each directory as these will not be needed for any subsequent analysis we are doing.

Navigate inside the recon folder and list the contents of these directories (using the ls command) to make sure they actually contain imaging files. Note that all the imaging data here should be in NIFTI format.

You should see the set of participant directories labelled p01, p02 etc., all the way up to the final directory, p15.

The directory structure should look like this:

~/chechlmy-chbh-mricn/xxx/recon/\n                              \u251c\u2500\u2500 p01/\n                              \u251c\u2500\u2500 p02/\n                              \u251c\u2500\u2500 p03/\n                              \u251c\u2500\u2500 p04/\n                              \u251c\u2500\u2500 p05/\n                              \u251c\u2500\u2500 ...\n                              \u251c\u2500\u2500 p13/\n                              \u251c\u2500\u2500 p14/\n                              \u2514\u2500\u2500 p15/\n

Verifying the data structure

Please verify that you have this directory structure before proceeding!

Explore what\u2019s inside each participant folder. Please note that each participant folder only contains reconstructed data. It\u2019s a good idea to store raw and reconstructed data separately. At this point you should have access to reconstructed participants p01 to p15. The reconstructed data should be in folders named ~/chechlmy-chbh-mricn/xxx/recon/p01 etc.

However, apart from the T1 images that have been already renamed for you, the other reconstructed files in this directory will have unusual names, created automatically by the dcm2nii conversion program.

You can see this by typing into your terminal:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p03 \nls\n

Which should list:

fs004a001.nii.gz\nfs005a001.nii.gz\nT1.nii.gz\n

It is poor practice to keep these names, as they do not reflect the actual experiment and will likely be a source of confusion later on. We should therefore rename the files to something meaningful. For this participant (p03) the first fMRI scan is file 1 (fs004a001.nii.gz) and the second fMRI scan is file 2 (fs005a001.nii.gz). Rename the files as follows (to do this you need to be inside folder p03):

immv fs004a001 fmri1 \nimmv fs005a001 fmri2\n

Renaming files

The immv command is a special FSL command that works just like the standard Linux mv command except that it automatically takes care of the filename extensions. It saves you from having to write out mv fs004a001.nii.gz fmri1.nii.gz, which would be the standard Linux command to rename the file.

You can of course name these files to anything you want. In principle, you could call the fMRI scan run1 or fmri_run1 or epi1 or whatever. The important thing is that you need to be extremely consistent in the naming of files for the different participants.

For this workshop we will use the naming convention above and call the files fmri1.nii.gz and fmri2.nii.gz.

As the experimenter you would normally be familiar with the order of acquisition of the different sequences and therefore the order of the resulting files created, including which one is the T1 image. You would write these down in your research log book whilst acquiring the MRI data. But sometimes, as here, if data is given to you later it may not be clear which particular file is the T1 image.

There are several ways to figure this out:

  1. The very first file will always (unless it has been deleted before it got to you) be a planning scan (localizer). This can be ignored. In general, the T1 image is very likely to be either the second file or the very last file.
  2. If you look at the list of file sizes (using ls -al) you should be able to see that the T1 image is smaller than most typical EPI fMRI images. Also, if there is more than one fMRI sequence (as here, from p03 onwards) you will see that several files have the same file size, and the odd one out is the T1.
  3. If you load the images into FSLeyes and look at them individually it should be very obvious which image is the T1. Remember the T1 image is a single volume, in high spatial resolution. It will also likely have a much larger field of view (showing all of the skull and part of the spine). The fMRI images will consist of many volumes (click through several volumes to check), be of lower spatial resolution (it will look coarser) and have a more limited field of view.

If you have access to the NIFTI format files (.nii.gz as we have here) then you can use one of the FSL command line tools (in a terminal window) called fslinfo to examine the protocol information on the file. This will show you the number of volumes in the acquisition (remember this is 1 volume for a T1 image) as well as other information about the number of voxels and the voxel size.
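The distinction can in fact be reduced to the volume count alone. A toy sketch (the helper function and its name are ours), with the real FSL loop left commented out; fslnvols, a companion FSL tool, prints just the number of volumes in a file:

```shell
# A T1 is a single volume; an fMRI run has many.
classify_by_volumes() {   # usage: classify_by_volumes <nvols>
  if [ "$1" -eq 1 ]; then
    echo "structural (T1)"
  else
    echo "functional (fMRI)"
  fi
}

classify_by_volumes 1    # structural (T1)
classify_by_volumes 93   # functional (fMRI)

# In practice, loop over the files in a participant folder (requires FSL):
# for f in *.nii.gz; do echo "${f}: $(fslnvols "$f") volumes"; done
```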

Together this information is sufficient to work out which file is the T1 and which are the fMRI sequence(s).

For example if you type the following in your terminal:

cd ..\ncd p08 \nfslinfo fs005a001.nii.gz\n

You should see something like the image below:

Before proceeding to the next section on running a first-level fMRI analysis, close your terminal.

"},{"location":"workshop5/workshop5-intro/","title":"Workshop 5 - First-level fMRI analysis","text":"

Welcome to the fifth workshop of the MRICN course!

The module lectures provide a basic introduction to fMRI concepts and the theory behind fMRI analysis, including the physiological basis of the BOLD response, fMRI paradigm design, pre-processing and single subject model-based analysis.

In this workshop you will learn how to analyse fMRI data for individual subjects (i.e., at the first level). This includes running all pre-processing stages and the first level fMRI analysis itself. The aim of this workshop is to introduce you to some of the core FSL tools used in the analysis of fMRI data and to gain practical experience with analyzing real fMRI data.

Specifically, we will explore FEAT (FMRI Expert Analysis Tool, part of FSL) to walk you through the basic steps in first-level fMRI analysis. We will also revisit the use of the Brain Extraction Tool (BET), and learn how to troubleshoot problematic \u201cskull-stripping\u201d for certain cases.

Overview of Workshop 5

Topics for this workshop include:

We will not go into details as to why and how specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the resource list readings; please check there, or if you are still unclear, feel free to ask. We will explore some general examples. Note that for your own projects you are very likely to want to change some of these settings/parameters depending on your study aims and design.

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 05 workshop materials.

"},{"location":"workshop6/running-containers/","title":"Scripting analyses, submitting jobs on the cluster, and running containers","text":""},{"location":"workshop6/running-containers/#introduction-to-scripting","title":"Introduction to scripting","text":"

Script files are text files that contain Linux command-line instructions, equivalent to typing a series of commands in the terminal or doing the equivalent in a software GUI (e.g., the FSL GUI). By scripting, it is possible to automate most FSL processing, including both diffusion MRI and fMRI analysis. In the previous workshop you learned how to set up a first-level fMRI model for a single experimental run for one participant. Subsequently, in your own time, you were asked to repeat that process for the other participants (15 participants in total), some with 2 or 3 experimental runs.

The notes below provide a basic introduction to Linux (bash) scripting as well as some guidelines and examples on how to automate the first-level analysis as you might want to do when completing the Data Analysis assignment.

To do this, we have provided some basic info and script examples. You can use the examples when analysing the remaining participants' data (the task you were given to complete at the end of the previous workshop) if you have not done it already. If you have already done so, you can either repeat that process by scripting, or apply it to your assessment. But you can also complete all these tasks without scripting.

All the example scripts shown below are in the folder:

/rds/projects/c/chechlmy-chbh-mricn/module_data/scripts\n

To start please copy the entire folder into your module directory. Please note that to run some of the scripts as per examples below, you will need to load FSL, something you should know how to do.

A script can be very simple, containing just commands that you already know how to use, with each command put on a separate line. To create a script for automating FSL analysis, the most widely used language is bash (shell). To write a bash script you need a plain text editor (e.g., vim, nano). If you are not familiar with using a text editor in Linux terminal, there is a simple way of creating and/or editing scripts using the BlueBEAR portal.

You can start a new script by clicking on \u201cNew File\u201d and naming it for example \u201cmy_script.sh\u201d and next clicking on \u201cEdit\u201d to start typing commands you want to use. You can also use \u201cEdit\u201d to edit existing scripts.

The shebang

The first line of every bash script should be #!/bin/bash. This is the 'shebang' or 'hashbang' line. It tells the system which interpreter should be used to execute the script.

Suppose we want to create a very simple script that repeatedly uses one of the FSL command-line tools. Let's say that we want to find out the volume of each of the T1 brains in our experiment. The FSL command that will tell us this is fslstats, together with the -V option, which lists the number and volume of non-empty voxels.

To create this script, you would type the text below as in the provided brainvols.sh script example. To view it, select this script and click on 'Edit'. Alternatively, you can start a new file and copy the commands as shown below. (In the actual script you would need to replace xxx with the name of your own directory).

#!/bin/bash\n\ncd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon\nfslstats p01/T1_brain -V\nfslstats p02/T1_brain -V\nfslstats p03/T1_brain -V\nfslstats p04/T1_brain -V\nfslstats p05/T1_brain -V\nfslstats p06/T1_brain -V\nfslstats p07/T1_brain -V\nfslstats p08/T1_brain -V\nfslstats p09/T1_brain -V\nfslstats p10/T1_brain -V\nfslstats p11/T1_brain -V\nfslstats p12/T1_brain -V\nfslstats p13/T1_brain -V\nfslstats p14/T1_brain -V\nfslstats p15/T1_brain -V\n

Whether you are editing or creating a new script, you need to save it. After saving, exit the editor.

Next you need to make the script executable (as below) and remember the script will run in the current directory (pwd). You also need to make the script executable if you copied a script from someone else.

To make your script executable type in your terminal: chmod a+x brainvols.sh

Running the script without permissions

If you try to run the script without making it executable, you will get a permission error.
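To see this for yourself without touching the course files, the make-executable-then-run cycle can be tried on a throwaway script (a sketch; the file is a hypothetical temporary file created with mktemp, not one of the course scripts):

```shell
# A new file is not executable by default; running it gives a
# permission error until chmod a+x is applied.
tmp=$(mktemp)
printf '#!/bin/bash\necho "hello"\n' > "$tmp"
"$tmp" 2>/dev/null || echo "permission denied, as expected"
chmod a+x "$tmp"
"$tmp"         # now runs and prints: hello
rm -f "$tmp"
```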

To run the script, type in your terminal: ./brainvols.sh

You can now tell which participant has the biggest brain.

The previous script hopefully worked. But it is not very elegant, and it is not much of an improvement over typing the commands one at a time in a terminal window. However, the bash scripting language that we are using provides an extra layer of simple program commands that we can use in combination with the FSL commands we want to run. In this way we can make our scripts more efficient: rather than repeating the same command for every participant we can loop over the participant list, and we can print a label before each result so the output is easier to read.

Bash has a for/do/done construct to do the former and an echo command to do the latter. So, let's use these to create an improved script with a loop. This is illustrated in the example brainvols2.sh:

#!/bin/bash\n\ncd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon\nfor p in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15\ndo\n    echo -n \"Participant ${p}: \"\n    fslstats ${p}/T1_brain -V\ndone\n
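As an aside, bash brace expansion can generate the zero-padded participant list so it does not have to be typed out by hand (a sketch, not part of the provided course scripts):

```shell
# Brace expansion {01..15} produces zero-padded numbers in bash,
# so the participant list can be generated rather than typed.
for p in p{01..15}
do
    echo "Participant ${p}"
done
```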

Both examples above assume that you have already run BET (brain extraction) on T1 scans. But of course, you could also automate the process of brain extraction and complete both tasks, i.e., running bet and calculate volume, using a single script. This is illustrated in the example bet_brainvols.sh:

#!/bin/bash\n\n# navigate to the folder containing the T1 scans\ncd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon\n\n# a for loop over participant data files\nfor participant_num in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15\n\n# do the following ...\ndo\n    # show the participant number \n    printf \"Participant ${participant_num}: \\n\"\n\n    # delete the old extracted brain image (we can see this\n    # happen in real time in the file explorer)\n    rm ${participant_num}/T1_brain.nii.gz\n\n    # extract the brain from the T1 image\n    bet ${participant_num}/T1.nii.gz ${participant_num}/T1_brain\n\n    # list the number of non-empty brain voxels\n    fslstats ${participant_num}/T1_brain -V\n# end the loop\ndone\n\n# ============================================================\n# END OF SCRIPT\n

Some of the most powerful scripting comes when manipulating FEAT model files. When you create a design for a first level fMRI analysis in the FEAT GUI and press the 'Go' button, FEAT writes out the model analysis file into the output directory. The name for this saved file is design.fsf. Once you have created one of these files, you can load it back into FEAT and modify only the parts that are different between the different analyses and then resave it e.g., change parameters or change it for another participant (see workshop materials covering first level of fMRI analysis).

Alternatively, since design.fsf is a text file it can also be opened (and edited) in a text editor. Because the experiment \u2013 and therefore the model design \u2013 is almost the same for all participants, there is very little difference in the design.fsf files between the level one analyses for different participants. In fact, if following the directory structure naming convention suggested in the workshop, the only thing that changes for a particular run is the identifier of the participant.

So, if we copy the design file for p01's first scan (i.e. the file feat/1/p01_s1.feat/design.fsf), open it up in a text editor, search and replace every instance of p01 with p02, and then save it, we should have the model file for p02's first scan. The only differences should be in the participant and scan identifiers:

In general, the model files will differ only by the participant identifiers (p01-p15) and the identifiers used for the particular scan number (s1, s2 and s3 for output directories, and fmri1, fmri2 and fmri3 in the input EPI file names).
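One quick way to convince yourself of this is to diff two design files. The sketch below uses two tiny made-up .fsf-style fragments rather than real FEAT output:

```shell
# Two design files for the same experiment should differ only in the
# participant identifier; diff makes this easy to check.
a=$(mktemp); b=$(mktemp)
printf 'set feat_files(1) "fmri1"\nset fmri(outputdir) "p01_s1"\n' > "$a"
printf 'set feat_files(1) "fmri1"\nset fmri(outputdir) "p02_s1"\n' > "$b"
diff "$a" "$b" || true   # reports only the outputdir line
rm -f "$a" "$b"
```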

The special cases are the scans for the first two participants, which were only 93 volumes long, whereas all of the remaining scans are 94 volumes long. Given this information and the model file for participant 1, scan 1, we can now create a script that will generate all of the other model files.

Firstly, let's create a new directory (models) where you will keep your model files. Navigate to your folder (/rds/projects/c/chechlmy-chbh-mricn/xxx/) and type:

mkdir models

Now copy the script create_alldesignmodels.sh into that folder. The script contains the following code:

#!/bin/bash\n\n# Copy over the saved model file for p01 scan 1\ncp /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat/design.fsf p01_s1.fsf\n\n# Create model file for p02 scan1\ncp p01_s1.fsf p02_s1.fsf\nperl -i -p -e 's/p01/p02/' p02_s1.fsf\n\n# Create model files for p03-p15 scan 1\nfor p in p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15\ndo\n    cp p01_s1.fsf ${p}_s1.fsf\n    perl -i -p -e \"s/p01/${p}/\" ${p}_s1.fsf\n    perl -i -p -e 's/93/94/' ${p}_s1.fsf\ndone\n\n# Create model files for p03-p15 scan 2\nfor p in p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15\ndo\n    cp ${p}_s1.fsf ${p}_s2.fsf\n    perl -i -p -e 's/_s1/_s2/' ${p}_s2.fsf\n    perl -i -p -e 's/fmri1/fmri2/' ${p}_s2.fsf\ndone\n\n# Create model files for p05 scan 3\ncp p05_s1.fsf p05_s3.fsf\nperl -i -p -e 's/_s1/_s3/' p05_s3.fsf\nperl -i -p -e 's/fmri1/fmri3/' p05_s3.fsf\n
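The workhorse here is perl's in-place substitution: perl -i -p -e 's/OLD/NEW/' FILE rewrites FILE, applying the substitution to every line. The sketch below demonstrates the same pattern on a throwaway file containing a single made-up .fsf-style line:

```shell
# Replace the participant identifier in place, as the script above
# does for each copied design file.
tmp=$(mktemp)
echo 'set fmri(outputdir) "feat/1/p01_s1"' > "$tmp"
perl -i -p -e 's/p01/p02/' "$tmp"
cat "$tmp"     # -> set fmri(outputdir) "feat/1/p02_s1"
rm -f "$tmp"
```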

Edit the script (hint: replace the xxx), save the file, make it executable, and then run it.

If it has worked you should now have a directory full of model files. Each of them can be run from the command line with a command such as feat p01_s1.fsf, or with a script (you should be able to create such a script using the earlier examples).
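Such a loop might look like the sketch below, which simulates the idea with temporary files and echo standing in for the real feat command (feat itself is not run here):

```shell
# Loop over every .fsf model file in a directory and hand each one
# to FEAT -- simulated with echo for illustration.
dir=$(mktemp -d)
touch "$dir/p01_s1.fsf" "$dir/p02_s1.fsf"
for f in "$dir"/*.fsf
do
    echo "feat $(basename "$f")"   # in practice: feat "$f"
done
rm -rf "$dir"
```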

FSL's scripting tutorial

You can find more information and other examples on FSL's scripting tutorial webpage.

"},{"location":"workshop6/running-containers/#submitting-jobs-to-the-cluster","title":"Submitting jobs to the cluster","text":"

The first part of this workshop introduced you to running bash scripts using the terminal in the BlueBEAR GUI. However, in addition to running bash scripts in this way, you can also create scripts and run analysis jobs directly on the cluster with Slurm (BlueBEAR's high performance computing (HPC) scheduling system).

What is Slurm?

Understanding how Slurm works is beyond the scope of this course, and is not strictly necessary, but you can find out more by reading the official Slurm documentation.

In previous workshops we used the BlueBEAR Portal to launch the BlueBEAR GUI Linux desktop and, from there, the built-in terminal. As mentioned in workshop 1, you can also use the BlueBEAR Portal to jump directly to a BlueBEAR terminal, access one of the available login nodes and, from there, run analysis jobs.

While the BEAR Portal provides convenient web-based access to a range of BlueBEAR services, you don't have to go via the portal; you can instead use the command line to access BlueBEAR through one of the multiple login nodes, available at the address bluebear.bham.ac.uk.

Exactly how you do this will depend on the operating system your computer uses; you can find detailed information about accessing BlueBEAR using the command line from this link.

The process of submitting and running jobs on the cluster is exactly the same whether you use the BlueBEAR terminal via the 'Clusters' tab on the BlueBEAR portal or the command line. To run a job with Slurm (the BlueBEAR HPC scheduling system), you first need to prepare a job script and then submit it using the sbatch command.

In the first part of this workshop you learned how to create bash scripts to automate FSL analyses; to turn these scripts into job scripts for Slurm, you need to add a few additional lines. This is illustrated in the example below: bet_brainvols_job.sh

#!/bin/bash\n\n#SBATCH --qos=bbdefault\n#SBATCH --time=60\n#SBATCH --ntasks=5\n\nmodule purge; module load bluebear\nmodule load FSL/6.0.5.1-foss-2021a\n\nset -e\n\n# navigate to the folder containing the T1 scans\ncd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon\n\n# a for loop over participant data files\nfor participant_num in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15\ndo\n    # do the following...\n\n    # show the participant number\n    printf \"Participant ${participant_num}: \\n\"\n\n    # delete the old extracted brain image (we can see this\n    # happen in real time in the file explorer)\n    rm ${participant_num}/T1_brain.nii.gz\n\n    # extract the brain from the T1 image\n    bet ${participant_num}/T1.nii.gz ${participant_num}/T1_brain\n\n    # list the number of non-empty brain voxels\n    fslstats ${participant_num}/T1_brain -V\n\n# end the loop\ndone\n\n# ============================================================\n# END OF SCRIPT\n

This is a modified version of the bet_brainvols.sh script. If you compare the two, you will notice that after the #!/bin/bash line and before the start of the script loop, a few new lines have been added. These define the BlueBEAR resources required to complete the analysis job.

These are explained below: the #SBATCH --qos line specifies the quality of service (here, the default bbdefault), --time specifies the maximum wall-clock time for the job in minutes, and --ntasks specifies the number of tasks (cores) to allocate. The module lines reset the environment and load BlueBEAR's defaults plus the required version of FSL, and set -e stops the script at the first error.

Setting up resources is not always straightforward and will often require some trial and error: if you do not request sufficient resources and time, your job might fail, and if you request too many resources, it might be rejected or be put in a long queue until the required resources become available.

You can find detailed guidelines on specifying the required resources in the BlueBEAR documentation.

The script above can be run on the cluster from the BlueBEAR terminal or the command line. In either case, you use the sbatch command, which submits your job to the BlueBEAR scheduling system based on the requested resources. Once submitted, it will run on the first available node(s) providing the resources you requested in your script.

For example, to submit your BET job as in the example script above, in the BlueBEAR terminal you would type:

sbatch bet_brainvols_job.sh

The system will return a job number, for example:

Submitted batch job XXXXXX

You need this number to monitor or cancel your job.
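If you want to capture the job number in a script, sbatch also offers a --parsable flag that prints just the ID; alternatively, the default message can be parsed. The sketch below simulates this with a hard-coded message, since no job is actually submitted:

```shell
# The default sbatch message is "Submitted batch job <ID>"; strip
# everything up to the last space to keep just the ID.
msg="Submitted batch job 123456"
jobid=${msg##* }
echo "$jobid"      # -> 123456
```

The captured ID can then be passed straight to squeue or scancel.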

To monitor your job, you can use the squeue command by typing in the terminal:

squeue -j XXXXXX

This is a command for viewing the status of your jobs. It will display information including the job\u2019s ID and name, the user that submitted the job, time elapsed and the number of nodes being used.

To cancel a queued or running job, you can use the scancel command by typing in the terminal:

scancel XXXXXX

"},{"location":"workshop6/running-containers/#containers","title":"Containers","text":"

In previous workshops we have been using different pre-installed versions of FSL through different modules available on BEAR apps. Sometimes, however, you might need a different (older or newer) version of FSL or a differently compiled FSL. While you can request an up-to-date version of FSL - following a new release, the software is added to BEAR apps (although it might take a while) - you cannot request a change to how FSL is compiled on BEAR apps, as this would affect other BlueBEAR users or might not even be possible due to the BlueBEAR setup.

Instead, you can install FSL within a controlled container and use this containerised version instead of what's available on BEAR apps.

BlueBEAR supports containerisation using Apptainer. Each BlueBEAR node has Apptainer installed, which means that the apptainer command is available without needing to first load a module. Apptainer can download images from any openly available container repository, for example Docker Hub or Neurodesk. Such sites provide information on the available software, software versions and how to download a specific container.

Downloading containers

Please do not try to download any containers in this workshop!

In the folder scripts, which you copied at the start of this workshop, you will find the subdirectory containers with two FSL containers (two different versions of FSL) downloaded from Neurodesk:

/rds/projects/c/chechlmy-chbh-mricn/module_data/scripts/containers

The simplest way to use a container is to open a shell inside it and then run specific FSL tools, e.g., bet.

For example, you would type in your terminal:

apptainer shell fsl_6.0.7.4_20231005.sing\nbet T1.nii.gz T1_brain.nii.gz\n

You could also use a container in your job script to replace the BEAR apps version of FSL with the FSL container. To do that, you would prefix each FSL command in your script with:

apptainer exec [name of the container]

Below is a very simple example of such a script, example_job_fslcontainer.sh which you can find inside the subdirectory containers:

#!/bin/bash\n#SBATCH --qos=bbdefault\n#SBATCH --time=30\n#SBATCH --ntasks=5\n\nmodule purge; module load bluebear\n\nset -e\n\napptainer exec fsl_6.0.7.4_20231005.sing bet T1.nii.gz T1_brain.nii.gz\n
"},{"location":"workshop6/workshop6-intro/","title":"Workshop 6 - Scripts, containers and running analyses on the academic computing cluster","text":"

Welcome to the sixth workshop of the MRICN course!

Prior workshops introduced you to running MRI analysis with various FSL tools by either using the FSL GUI or typing a simple command in the terminal. In this workshop we will look at how to automate FSL analyses by creating scripts. Subsequently, we will explore how to run FSL scripts more efficiently by submitting jobs to the cluster. The final part of this workshop will introduce how to use FSL containers rather than the pre-installed versions of FSL available as modules on BEAR apps.

Overview of Workshop 6

Topics for this workshop include:

More information

The BEAR Technical Docs provides guidance on submitting jobs to the cluster.

A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 06 workshop materials.

"},{"location":"workshop7/advanced-fmri-tools/","title":"Advanced fMRI analysis tools","text":"

The materials in this worksheet follow on from the previous two FSL-specific fMRI workshops: in the first workshop you ran first-level fMRI analyses, whilst in the second workshop you combined the data across scans within each participant (second-level analysis) and across the group (third-level analysis) in a couple of different ways.

The information included in this page covers:

As in the other materials, we will not discuss in detail why you might choose certain parameters. The aim is to familiarise you with some of the available analysis tools. This worksheet will take you, step by step, through these analyses using the FSL tools. You are encouraged to read the pop-up help throughout (hold your mouse over the FSL GUI buttons and menus) and refer to your lecture notes or resource list readings. You can also find more information on the FSL website.

If you have correctly followed the instructions from the previous workshops, you should by now have 29 first-level FEAT directories for the analysis of each scan of each participant:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/

e.g. /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat

(1 feat analysis directory for participant 1, 1 for participant 2, 3 for participant 5, and 2 for everyone else).

At the second-level you should have 13 simple within-subjects fixed-effects directories:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/

e.g. /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p03.gfeat

(one each for participants 3-15)

You should also have a directory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat (for the all-participants-all-runs second level model)

Finally, you should have 2 third-level group analysis folders:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis1.gfeat

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis2.gfeat

(containing the third level group analyses corresponding to the two different ways of combining data at third level)

"},{"location":"workshop7/advanced-fmri-tools/#testing-the-effects-of-different-thresholds","title":"Testing the effects of different thresholds","text":"

Try as many of the suggestions below as you have time for: try 1 or 2 at the start of the session and return to these later if you have time, or outside the workshop. (You can load up an existing model by running FEAT and using the 'Load' button. The model file is called design.fsf and can be found in the .feat or .gfeat directory of an existing analysis folder.)

  1. Ordinary Least Squares (OLS) vs FLAME: Repeat the third level group analyses from FSL Workshop 5, but on the 'Stats' tab select 'Mixed Effects: Simple OLS'.

  2. Different correction for multiple comparisons: Repeat the third level group analyses from FSL Workshop 5, but on the 'Post-Stats' tab for 'Thresholding', use the pull down menu to select 'Voxel correction'.

  3. Different thresholds and correction for multiple comparisons: Repeat the third level group analyses from FSL Workshop 5, but on the 'Post-Stats' tab use 'Cluster Thresholding' but choose a different z-threshold.

Examining the results

Look at the results from each method of correction. Use both the webpage output and FSLeyes to look at your data. Find the regions of significant activation. Try looking at a time series (see the 'Data Visualisation' workshop notes for help).

"},{"location":"workshop7/advanced-fmri-tools/#t-test-vs-f-test","title":"T-test vs F-test","text":"

Start up the FEAT GUI by opening a new terminal window and typing fsl &, then click on the 'Feat' button [or type Feat & in the terminal window to open the FEAT GUI directly].

Load up one of the third level analyses run in the last workshop (e.g., 'GroupAnalysis1' or 'GroupAnalysis2').

Now follow the instructions below:

Data Tab

Stats Tab

Differences between t-test and F-test images

Once it has run inspect the resulting output in a browser. How does the rendered stats image for the F-test (the zfstat image) differ from the t-stat image? Why are they different?

"},{"location":"workshop7/advanced-fmri-tools/#extracting-information-from-regions-of-interest-rois-using-featquery","title":"Extracting information from Regions of Interest (ROIs) using FEATquery","text":"

FEATquery is an FSL tool which allows you to explore FEAT results by extracting information from regions of interest, defined either by specific (MNI) coordinates or by a mask.

In the examples below we will get basic stats from two pre-prepared Regions of Interest (ROIs) using the first level models that you have run. You can also make your own ROIs using FSLeyes (you should remember how to create ROI masks from previous workshops).

To start FEATquery, you need to load FSL (see previous workshops), and either - in a terminal - type Featquery & or on the FSL GUI click the button on the bottom right labelled 'Misc' and select the menu option 'Featquery'.

In any case, when FEATquery is open, follow the instructions below:

Input FEAT directories

Stats images of interest

Once you have done that the GUI interface will update and you will see a list of possible statistics.

Input ROI selection

For the 'Mask image' entry, select one of the prepared masks.

Either:

/rds/projects/c/chechlmy-chbh-mricn/module_data/masks/V1.nii.gz

or

/rds/projects/c/chechlmy-chbh-mricn/module_data/masks/Parietal.nii.gz

Output options

When ready, click the 'Go' button!

Examining the FEATquery output

Inspect the results by opening report.html inside the 'V1' folder. Do they make sense?

"},{"location":"workshop7/higher-level-analysis/","title":"Running the higher-level fMRI analysis","text":"

If you have correctly followed the instructions from the previous workshop, you should now have 29 FEAT directories arising from each fMRI scan of each participant, e.g.,

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat  \u2190 Participant 1 scan 1\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p02_s1.feat  \u2190 Participant 2 scan 1\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p03_s1.feat  \u2190 Participant 3 scan 1\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p03_s2.feat  \u2190 Participant 3 scan 2\n(\u2026)\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p15_s2.feat  \u2190 Participant 15 scan 2\n

(where xxx = your particular ADF login username).

For participants 1 and 2 you should have only one FEAT directory. For participants 3-4 and 6-15 you should have 2 FEAT directories. For participant 5 you should have 3 FEAT directories. You should therefore have 29 complete first level FEAT directories.

If you haven't done so already, please check that the output of each of these first-level analyses looks ok, either through the FEAT Report or through FSLeyes. If you would like to use the FEAT Report, select the report (called report.html) from within each FEAT directory of your analysis, e.g.:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p03_s1.feat/report.html

and either open it in a new tab or in a new window.

Selecting a participant's first-level FEAT output (left) and examining the FEAT Report (right).

Check all of the reports and click on each of the sub-pages in turn:

  1. Check that the analysis ran ok by checking the 'Log' page.
  2. Check that the 'Registration' looks ok.
  3. Check how much they moved on the 'Pre-Stats' page.
  4. Have a quick look at the 'Post-Stats' page.

Motion correction

Participant 5 was scanned three times. In one of these scans they moved suddenly. Use the motion correction results to decide which scan this was. We will ignore this scan for the rest of this workshop. If you have not completed this stage of the analysis, you should do this now before continuing on. Refer to the worksheet from previous workshops.

"},{"location":"workshop7/higher-level-analysis/#second-level-analysis-averaging-across-runs-within-participants","title":"Second-level analysis - averaging across runs within participants","text":"

Most of our participants did the experiment twice \u2013 a repeated measurement. How do we model this data? There are different ways we can do this.

The simplest way is to combine the data within participant before generating group statistics across the entire group of participants. This corresponds with what you might do if you are analysing data as you go along. Here, we will average their data over the two fMRI runs so that we can benefit from the extra power. Participant 5 did the experiment 3 times. For the moment, for this participant, we will choose only the two scans where they moved less for further analysis.

Choose one of the participants that did the experiment twice (not participant 1 or 2), such as participant 3. Open a terminal and load FSL:

module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n

Then navigate to the folder where your first-level directories are located and open FEAT:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1\nFeat &\n

At the top left of the GUI you will see a pull down menu labelled 'First-level analysis'. Click here to pull down this menu and choose 'Higher-level analysis'. The options available will change.

Now fill out the tabs as below:

Stats

It is necessary to select this now in order to reduce the number of inputs on the 'Data' tab to 2 (the default for all other higher-level model types is a minimum of 3 inputs). Note that choosing 'Fixed effects' ignores cross-scan variance, which is fine to do here because these are scans from the same person at the same time.

Data

Once you have selected the FEAT directories for input, the GUI will update to show what COPE files (contrasts) are available for analysis.

The naming scheme here (as with your raw data directories, reconstruction directories and your first level analysis directories) needs to be clear and logical.

It is therefore sensible to use a directory structure like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2 (directory for all FEAT 2nd level analyses)

For example, you can then put the second-level analysis for participant 3 in the subdirectory below, and so on:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p03

Stats (again)

Click the 'Done' button. This will produce a schematic of your design. Check it makes sense and close its window.

Post-Stats

As with the first-level analysis, from the 'Thresholding' pull-down menu select 'Uncorrected' and leave the P-threshold value at p<0.05.

Thresholding and processing time

Note that this is not the thresholding you will want at the final (third) stage of processing (where you will probably want cluster thresholding), but at the first- and second-level stages it speeds up the processing per person.

Click the 'Go' button. Repeat for the other participants.

In the web browser look at the results in the FEAT report (i.e., by opening the report.html). Note that the output folder is called p03.gfeat. You will have to click on the link marked 'Results' and then on the link labelled 'Lower-level contrast 1 (vision)' on that page and then on the 'Post-stats' link.

Comparing second-level and first-level results

Are the results better than for just one scan from p03?

"},{"location":"workshop7/higher-level-analysis/#third-level-analysis-combining-data-across-participants-from-the-second-level","title":"Third-level analysis - combining data across participants from the second level","text":"

In FSL, the procedure for setting up an analysis across participants is very similar to averaging within a participant. The main difference is that we specifically need to set up the analysis to model the inter-subject variance. This allows us to generalise beyond our specific group of participants and to interpret the results as being typical of the wider population.

In this demonstration experiment, 12 participants did the scan twice, 1 was scanned three times, and 2 did the scan only once. (Note that it should be rather obvious that this is not an ideal design for a real experiment). In our case, we have averaged within participants and now we will combine these second level analyses with the first level analyses from those participants who were only scanned once.

Close FEAT if you still have it open. Then open it again by typing Feat &.

Don't close the terminal if you don't have to!

Please note that if you close the terminal here you will first need to load FSL again and navigate back to your folder!

At the top left of the GUI select the pull down menu labelled 'First-level analysis' and choose 'Higher-level analysis'.

Now complete each of the tabs as described below:

Data

We have 13 participants who did the experiment at least twice (who we combined via a second-level analysis) and 2 who did the experiment only once (who we only analysed at first-level). Therefore, for this next (third level) analysis we will need to combine over first and second level analyses.

In the dialogue that appears you need to add in the path to the .feat directory for each person. For the first-level analyses this is something like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p02_s1.feat\n

For the participants where you did a second level analysis this will be:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p03.gfeat/cope1.feat\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p04.gfeat/cope1.feat\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p05.gfeat/cope1.feat\n...\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p15.gfeat/cope1.feat\n

Subdirectory location

Note that the actual feat subdirectories of interest for the second-level analyses are hidden inside the .gfeat directories.

You should now enter a name for the output analysis directory. Use the 'Output' directory entry to choose a directory to put the results in. As with your second-level analyses, the naming scheme here needs to be clear and logical.

It is therefore sensible to use a structure like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3 (directory for all FEAT 3rd level analyses)

For example, you can then put this current 3rd level output in the subdirectory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis1

Stats

Post-Stats

Accept the defaults.

Now click the 'Go' button!

When the group analysis has finished, check through the output (using the FEAT Report) and try to work out what each page means. For example, the 'Registration Summary' page will highlight areas where the registrations are not aligned between scans/participants. A few missing voxels are ok, but any more than that is a problem as you won't get results from areas where there are missing data.

"},{"location":"workshop7/higher-level-analysis/#second-level-analysis-the-all-in-one-method","title":"Second-level analysis - the 'all-in-one' method","text":"

A more complicated modelling approach at the second-level is to use one single second-level model (instead of separate models per participant) which incorporates all of the information available about participants and runs.

This corresponds with what you might do if you are analysing all the data after it has been collected, all in one go. Depending on the design of the particular experiment, this has the potential to be an improved approach as it allows a better estimate of both the between-run and the between-subject variance.

Close FEAT if you still have it open. Then open it again by clicking on the FEAT button in the FSL GUI (or type Feat & in the terminal window to directly open the FEAT GUI).

Now complete the tabs following the instructions below:

Data

(Note that there are only 28 inputs here, as we are going to use all of the first-level feat directories as inputs except for the worst of the three runs that participant 5 did).

As always you should now enter a name for the output analysis directory. Use the 'Output' directory entry to choose a directory to put the results in.

As this is a second-level model it should go under the directory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2 (directory for all FEAT 2nd level analyses)

and should be meaningfully named. For example, you could call it:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all

Stats

Check boxes in FSL

In older versions of FSL, after selecting an option you would see a yellow checkbox; in newer versions, such as the one we are using, the checkbox is yellow to start with, and after selecting an option you will see a tick \u2714\ufe0f inside the yellow checkbox.

Check that the resulting design schematic makes sense, and that you understand what it is showing, then close its window.

Post-Stats

Wait for the analysis to complete and then look at the results. Note that the output folder is called level2all.gfeat if you named it as above. You will have to click on the link marked 'Results' and then on the link labelled 'Lower-level contrast 1 (vision)' on that page and then on the 'Post-stats' link. This will then show you a contrast (rendered stats image) for each of the participants.

Comparing the second-level results

Are the results from this bigger model better than the simple fixed effects model for the same participant? For example, with participant p09?

"},{"location":"workshop7/higher-level-analysis/#third-level-analysis-combining-participant-data-from-the-all-in-one-second-level","title":"Third-level analysis - combining participant data from the 'all-in-one' second-level","text":"

We can now estimate the mean group effect by combining across participants from the better second-level analysis we have just calculated above.

Data

In the second-level analysis we just performed we combined data over both participants and runs, effectively collapsing across runs, and the output analyses were then the summary data for each of the 15 participants. Therefore, for this next (third-level) analysis we will again need to combine over the 15 participants.

To do this, follow the steps below:

If you have used the correct naming convention above, this will be:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope1.nii.gz \n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope2.nii.gz \n...\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope15.nii.gz\n
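Clicking through 15 cope paths in the GUI is tedious and error-prone; if you ever need the same list on the command line, bash brace expansion can generate it for you (a sketch, with xxx standing in for your ADF username):

```shell
# Expand cope1..cope15 into an array of 15 paths (no filesystem access needed)
copes=( /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope{1..15}.nii.gz )
printf '%s\n' "${copes[@]}"
echo "generated ${#copes[@]} paths"
```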

As before, you should now enter a name for the output analysis directory. As this is a third level model it should go under the directory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3 (for all FEAT 3rd level analyses)

and should be meaningfully named. For example, you can then put this current 3rd level output in the subdirectory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis2

Stats

Post-Stats

Now click the 'Go' button!

Comparing the third-level results

Check through the output when the group analysis has finished. Is the result better than the simple third level analysis above?

If you followed the instructions in the workshop materials, you should be able to replicate the results shown above in the report.html file inside the respective third-level analysis folders ('GroupAnalysis1' and 'GroupAnalysis2').

As always, help and further information is also available on the relevant section of the FSL Wiki.

"},{"location":"workshop7/workshop7-intro/","title":"Workshop 7 - Higher-level fMRI analysis","text":"

Welcome to the seventh workshop of the MRICN course!

Prior lectures introduced you to the basic concepts and theory behind higher-level fMRI analysis, including multi-session analysis and the general linear model (GLM). In this workshop you will be learning practical skills in how to run higher level fMRI analysis using FSL tools.

This workshop follows on from the workshop on first-level fMRI analysis. In that workshop you analysed the first level data for 2 participants and at the end of the workshop you were asked to analyse the rest of the scans in the data set. Participants 1-2 had one fMRI experiment run each, participants 3-4 and 6-15 had 2 runs each and participant 5 had 3 runs, so there are a total of 29 runs from the 15 participants.
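A quick arithmetic check on that total:

```shell
# 2 participants x 1 run + 12 participants x 2 runs + 1 participant x 3 runs
echo $(( 2*1 + 12*2 + 1*3 ))   # prints 29
```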

We will now combine the fMRI data across runs and participants in our second and third-level analyses.

Overview of Workshop 7

Topics for this workshop include:

As in the other workshops, we will not discuss in detail why you might choose certain parameters. The aim of this workshop is to familiarise you with some of the available analysis tools. You are encouraged to read the pop-up help throughout (hold your mouse over FSL GUI buttons and menus when setting up your FEAT design) and to refer to your lecture notes or resource list readings.

More information

As always, you can also find more information on running higher-level fMRI analyses on the FSL website.

A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 07 workshop materials.

"},{"location":"workshop8/functional-connectivity/","title":"Functional connectivity analysis of resting-state fMRI data using FSL","text":"

This workshop is based upon the excellent FSL fMRI Resting State Seed-based Connectivity tutorial, which has been adapted to run on the BEAR systems at the University of Birmingham, with some additional content covering Neurosynth.

We will run a group-level functional connectivity analysis on resting-state fMRI data of three participants, specifically examining the functional connectivity of the posterior cingulate cortex (PCC), a region of the default mode network (DMN) that is commonly found to be active in resting-state data.

To do this, we will:

"},{"location":"workshop8/functional-connectivity/#preparing-the-data","title":"Preparing the data","text":"

Navigate to your shared directory within the MRICN folder and copy the data over:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx\ncp -r /rds/projects/c/chechlmy-chbh-mricn/aamir_test/SBC .\ncd SBC\nls\n

You should now see the following:

sub1 sub2 sub3\n

Each of the folders contains a single resting-state scan, called sub1.nii.gz, sub2.nii.gz and sub3.nii.gz respectively.

We will now create our seed region for the PCC. To do this, first load FSL and FSLeyes in the terminal by running:

module load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\n

Check that we are in the correct directory (blah/your_username/SBC):

pwd\n

and create a new directory called seed:

mkdir seed\n

Now when you run ls you should see:

seed sub1 sub2 sub3\n

Let's open FSLeyes:

fsleyes &\n
Creating the PCC mask in FSLeyes

We need to open the standard MNI template brain, select the PCC and make a mask.

Here are the following steps:

  1. Navigate to the top menu and click on File \u279c Add standard and select MNI152_T1_2mm_brain.nii.gz.
  2. When the image is open, click on Settings \u279c Ortho View 1 \u279c Atlases. An atlas panel then opens on the bottom section.
  3. Select 'Atlas information' (if it hasn't already loaded).
  4. Ensure Harvard-Oxford Cortical Structural Atlas is selected.
  5. Go into 'Atlas search' and type cing in the search box. Check the Cingulate Gyrus, posterior division (lower right) so that it is overlaid on the standard brain. (The full name may be obscured, but you can always check which region you have loaded by looking at the panel on the bottom right).

At this point, your window should look something like this:

To save the seed, click the save symbol which is the first of three icons on the bottom left of the window.

The window that opens should be in your SBC project directory. Go into the seed folder and save your seed as PCC.

Extracting the time-series

We now need to binarise the seed and extract its mean time series. To do this, leaving FSLeyes open, go into your terminal (you may have to press Enter if some text about dc.DrawText appears) and type:

cd seed\nfslmaths PCC -thr 0.1 -bin PCC_bin\n

In FSLeyes, now click File ➜ Add from file, and select PCC_bin to compare PCC.nii.gz (before binarisation) and PCC_bin.nii.gz (after binarisation). You should note that the signal values are all 1.0 for the binarised PCC.
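What fslmaths is doing here: -thr 0.1 zeroes every voxel below 0.1, and -bin then sets every remaining non-zero voxel to 1. The same logic on a few toy intensity values, just to illustrate (fslmaths applies this voxel-wise across the whole image):

```shell
# Toy per-value version of "-thr 0.1 -bin": values below 0.1 -> 0, the rest -> 1
out=$(printf '%s\n' 0.05 0.3 0.0 0.8 | awk '{ v = ($1 >= 0.1) ? 1 : 0; print v }')
echo "$out"   # prints 0 1 0 1 on separate lines
```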

You can now close FSLeyes.

For each subject, you want to extract the average time series from the region defined by the PCC mask. To calculate this value for sub1, do the following:

cd ../sub1\nfslmeants -i sub1 -o sub1_PCC.txt -m ../seed/PCC_bin\n

This will generate a file within the sub1 folder called sub1_PCC.txt.

We can have a look at the contents by running cat sub1_PCC.txt. The terminal will print out a list of numbers with the last five being:

20014.25528\n20014.919\n20010.17317\n20030.02886\n20066.05141\n

This is the mean level of 'activity' for the PCC at each time-point.
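One useful sanity check is that the file has one value per volume, so wc -l tells you the number of time points. Below is a self-contained demo using a fake five-line file; on BlueBEAR you would point the same commands at the real sub1_PCC.txt instead.

```shell
# Fake stand-in for sub1_PCC.txt so the demo runs anywhere
printf '%s\n' 20014.25528 20014.919 20010.17317 20030.02886 20066.05141 > demo_PCC.txt

n=$(wc -l < demo_PCC.txt)   # one line per fMRI volume
echo "demo file has ${n} time points"
tail -n 5 demo_PCC.txt      # the last five mean PCC values
```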

Now let's repeat this for the other two subjects.

cd ../sub2\nfslmeants -i sub2 -o sub2_PCC.txt -m ../seed/PCC_bin\ncd ../sub3\nfslmeants -i sub3 -o sub3_PCC.txt -m ../seed/PCC_bin\n
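The three per-subject calls follow one pattern, so they can be collapsed into a loop. This is a sketch to run from the SBC directory; the commands are echoed so you can eyeball them first, and on BlueBEAR you would run the command itself instead of echoing it.

```shell
# Build the fslmeants command for each subject; relative paths assume
# you are in the SBC directory (so sub1/sub1 refers to sub1/sub1.nii.gz, etc.)
for sub in sub1 sub2 sub3; do
    cmd="fslmeants -i ${sub}/${sub} -o ${sub}/${sub}_PCC.txt -m seed/PCC_bin"
    echo "$cmd"   # on BlueBEAR, replace this line with:  $cmd
done
```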

Now if you go back to the SBC directory and list all of the files within the subject folders:

cd ..\nls -R\n

You should see the following:

This is all we need to run the subject and group-level analyses using FEAT.

"},{"location":"workshop8/functional-connectivity/#running-the-feat-analyses","title":"Running the FEAT analyses","text":""},{"location":"workshop8/functional-connectivity/#single-subject-analysis","title":"Single-subject analysisExamining the FEAT outputScripting the other two subjects","text":"

Close your terminal, open another one, move to your SBC folder, load FSL and open FEAT:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\nFeat &\n

We will run the first-level analysis for sub1. Set-up the following settings in the respective tabs:

Data

Number of inputs:

Output directory:

This is what your data tab should look like (with the input data opened for show).

Pre-stats

The data has already been pre-processed, so just set 'Motion correction' to 'None' and uncheck BET. Your pre-stats should look like this:

Registration

Nothing needs to be changed here.

Stats

Click on 'Full Model Setup' and do the following:

  1. Keep the 'Number of original EVs' as 1.
  2. Type PCC for the 'EV' name.
  3. Select 'Custom (1 entry per volume)' for the 'Basic' shape. Click into the sub1 folder and select sub1_PCC.txt. This is the mean time series of the PCC for sub1 and is the statistical regressor in our GLM. This differs from analyses of task-based data, which will usually have an events.tsv file with the onset times for each regressor of interest.
  4. Select 'None' for 'Convolution', and uncheck both 'Add temporal derivative' and 'Apply temporal filtering'.

What are we doing specifically?

The first-level analysis will subsequently identify brain voxels that show a significant correlation with the seed (PCC) time series data.

Your window should look like this:

In the same General Linear Model window, click the 'Contrast & F-tests' tab, type PCC in the title, and click 'Done'.

A blue and red design matrix will then be displayed. You can close it.

Post-stats

Nothing needs to be changed here.

You are ready to run the first-level analysis. Click 'Go' to run. On BEAR, this should only take a few minutes.

To actually examine the output, go to the BEAR Portal and at the menu bar select Files \u279c /rds/projects/c/chechlmy-chbh-mricn/

Then go into SBC/sub1.feat, select report.html and click 'View' (top left of the window). Navigate to the 'Post-stats' tab and examine the outputs. It should look like this:

We can now run the second and third subjects. As we only have three subjects, we could manually run the other two by just changing three things:

  1. The fMRI data path
  2. The output directory
  3. The sub_PCC.txt path

Whilst it would probably be quicker to do it manually in this case, that approach is not practical in other instances (e.g., more subjects, or subjects with different numbers of scans). So instead we will script the first-level FEAT analyses for the other two subjects.

The importance of scripting

Scripting analyses may seem challenging at first, but it is an essential skill of modern neuroimaging research. It enables you to automate repetitive processing steps, dramatically reduces the chance of human error, and ensures your research is reproducible.

To do this, go back into your terminal; you don't need to open a new terminal or close FEAT.

The setup for each analysis is saved as a specific file, the design.fsf file within the FEAT output directory. We can see this by opening the design.fsf file for sub1:

pwd # make sure you are in your SBC directory e.g., blah/xxx/SBC\ncd sub1.feat\ncat design.fsf\n

FEAT acts as a large 'function' with its many variables corresponding to the options that we choose when setting up in the GUI. We just need to change three of these (the three mentioned above). In the design.fsf file this corresponds to:

set fmri(outputdir) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1\"\nset feat_files(1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/\"\nset fmri(custom1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt\"\n
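Rather than scrolling through the whole design.fsf, grep can pull out just those three lines. Below is a self-contained demo on a fake four-line design file; on BlueBEAR you would run the same grep on the real sub1.feat/design.fsf.

```shell
# Fake design.fsf fragment standing in for the real file
cat > demo_design.fsf <<'EOF'
set fmri(outputdir) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1"
set fmri(smooth) 5
set feat_files(1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/"
set fmri(custom1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt"
EOF

# Show only the three subject-specific settings
grep -E 'outputdir|feat_files|custom1' demo_design.fsf
```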

To run the script, please copy the run_feat.sh script into your own SBC directory:

cd ..\npwd # make sure you are in your SBC directory\ncp /rds/projects/c/chechlmy-chbh-mricn/axs2210/SBC/run_feat.sh .\n

Viewing the script

If you would like, you can have a look at the script yourself by typing cat run_feat.sh

The first line #!/bin/bash is always needed to run bash scripts. The rest of the code just replaces the 3 things we wanted to change for the defined subjects, sub2 and sub3.
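For the curious, a minimal script in that spirit might look like the sketch below. This is a hypothetical reconstruction, not the actual run_feat.sh: it fakes a one-line design.fsf so it can run anywhere, and on BlueBEAR you would use the real sub1.feat/design.fsf and uncomment the feat call.

```shell
#!/bin/bash
# Hypothetical sketch of what run_feat.sh might do: for each remaining
# subject, swap every "sub1" in sub1's design.fsf for the new subject
# label, then launch FEAT on the edited design file.

# Fake one-line design.fsf so the sketch is self-contained
mkdir -p sub1.feat
echo 'set fmri(outputdir) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1"' > sub1.feat/design.fsf

for sub in sub2 sub3; do
    sed "s/sub1/${sub}/g" sub1.feat/design.fsf > "design_${sub}.fsf"
    # feat "design_${sub}.fsf"   # uncomment on BlueBEAR to actually run FEAT
done

cat design_sub2.fsf
```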

Run the script (from your SBC directory) by typing bash run_feat.sh. (It will ask you for your University account name; this is your ADF username (axs2210 for me).)

The script should take about 5-10 minutes to run on BEAR.

After it has finished running, have a look at the report.html file for both directories; they should look like this:

sub2

sub3

"},{"location":"workshop8/functional-connectivity/#group-level-analysis","title":"Group-level analysisExamining the output","text":"

Ok, so now that we have our FEAT directories for all three subjects, we can run the group-level analysis. Close FEAT and open a new instance by running Feat & in your SBC directory.

Here are instructions on how to setup the group-level FEAT:

Data

  1. Change 'First-level analysis' to 'Higher-level analysis'
  2. Keep the default option for 'Inputs are lower-level FEAT directories'.
  3. Keep the 'Number of inputs' as 3.
  4. Click the 'Select FEAT directories'. Click the yellow folder on the right to select the FEAT folder that you had generated from each first-level analysis.

Your window should look like this (before closing the 'Input' window):

\u00a0\u00a0\u00a0\u00a05. Keep 'Use lower-level COPEs' ticked.

\u00a0\u00a0\u00a0\u00a06. In 'Output directory' stay in your current directory (SBC), and in the bottom bar, type in PCC_group at the end of the file path.

Don't worry about it being empty, FSL will fill out the file path for us.

If you click the folder again, it should look similar to this (with your ADF username instead of axs2210):

Stats

  1. Leave 'Mixed effects: FLAME 1' as the default and click 'Full model setup'.
  2. In the 'General Linear Model' window, name the model 'PCC' and make sure the 'EVs' are all 1s.

The interface should look like this:

After that, click 'Done' and close the GLM design matrix that pops up (you don't need to change anything in the 'Contrasts and F-tests' tab).

Post-stats

  1. Change the Z-threshold from 3.1 to 2.3.

Lowering our statistical threshold

Why do you think we are lowering this to 2.3 in our analysis instead of keeping it at 3.1? Because we only have three subjects, we want to be relatively lenient with our threshold; otherwise we might not see any activation at all! For group-level analyses with more subjects, we would be stricter.

Click 'Go' to run!

This should only take about 2-3 minutes.

While this is running, you can load the report.html through the file browser as you did for the individual subjects.

Click on the 'Results' tab, and then on 'Lower-level contrast 1 (PCC)'. When the analysis has finished, your results should look like this:

These are voxels demonstrating significant functional connectivity with the PCC at a group-level (Z > 2.3).

So, we have just run our group-level analysis. Let's have a closer look at the output data.

Close FEAT and your terminal, open a new terminal, go to your SBC directory and open FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

In FSLeyes, open up the standard brain (Navigate to the top menu and click on 'File \u279c Add standard' and select MNI152_T1_2mm_brain.nii.gz).

Then add in our contrast image (File \u279c Add from file, and then go into the PCC_group.gfeat and then into cope1.feat and open the file thresh_zstat1.nii.gz).

When opened, change the colour to 'Red-Yellow' and the 'Minimum' up to 2.3 (The max should be around 3.12). If you set the voxel location to [42, 39, 52] your screen should look like this:

This is the map that we saw in the report.html file. In fact, we can double-check this by changing the voxel co-ordinates to [45, 38, 46].

Our thresholded image in fsleyes

The FEAT output

Our image matches the one on the far right below:

"},{"location":"workshop8/functional-connectivity/#bonus-identifying-regions-of-interest-with-atlases-and-neurosynth","title":"Bonus: Identifying regions of interest with atlases and Neurosynth","text":"

So we know which voxels demonstrate significant correlation with the PCC, but what region(s) of the brain are they located in?

Let's go through two ways in which we can work this out.

Firstly, as you have already done in the course, we can simply overlay an atlas on the image and see which regions the activated voxels fall within.

To do this:

  1. Navigate to the top menu and click on 'Settings \u279c Ortho View 1 \u279c Atlases'.
  2. Then at the bottom middle of the window, select the 'Harvard-Oxford Cortical Structural Atlas' and on the window directly next to it on the right, click 'Show/Hide'.
  3. The atlas should have loaded up but is blocking the voxels. Change the 'Opacity' to about a quarter.

By having a look at the 'Location' window (bottom left) we can now see that significant voxels of activity are mainly found in the:

Right superior lateral occipital cortex

Posterior cingulate cortex (PCC) / precuneus

Alternatively, we can use Neurosynth, a website where you can get the resting-state functional connectivity of any voxel location or brain region. It does this by performing a meta-analysis across brain imaging studies that report results associated with your voxel or region of interest.

About Neurosynth

While Neurosynth has been superseded by Neurosynth Compose, we will use the original Neurosynth in this tutorial.

If you click the following link, you will see regions demonstrating significant connectivity with the posterior cingulate.

If you type [46, -70, 32] as co-ordinates in Neurosynth, and then into the MNI co-ordinates section in FSLeyes (not into the voxel location, because Neurosynth works in MNI space), you can see that in both cases the right superior lateral occipital cortex is activated.

Image orientation

Note that the orientations of left and right are different between Neurosynth and FSLeyes!

Neurosynth

FSLeyes

This is a great result given that we only have three subjects!

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"MRI on BEAR","text":"

MRI on BEAR is a collection of educational resources created by members of the Centre for Human Brain Health (CHBH), University of Birmingham, to provide a basic introduction to fundamentals in magnetic resonance imaging (MRI) data analysis, using the computational resources available to the University of Birmingham research community.

"},{"location":"#about-this-website","title":"About this website","text":"

This website contains workshop materials created for the MSc module 'Magnetic Resonance Imaging in Cognitive Neuroscience' (MRICN) and its earlier version - Fundamentals in Brain Imaging taught by Dr Peter C. Hansen - at the School of Psychology, University of Birmingham. It is a ten-week course consisting of lectures and workshops introducing the main techniques of functional and structural brain mapping using MRI with a strong emphasis on - but not limited to - functional MRI (fMRI). Topics include the physics of MRI, experimental design for neuroimaging experiments and the analysis of fMRI, and other types of MRI data. This website includes only the workshop materials, which provide a basic training in analysis of brain imaging data and data visualization.

Learning objectives

At the end of the course you will be able to:

For externals not on the course

Whilst we have made these resources publicly available for anyone to use, please BEAR in mind that the course has been specifically designed to run on the computing resources at the University of Birmingham.

"},{"location":"#teaching-staff","title":"Teaching Staff","text":"Dr Magdalena Chechlacz

Role: Course Lead

Magdalena Chechlacz is an Assistant Professor in Cognition and Ageing at the School of Psychology, University of Birmingham. She initially trained and carried out a doctorate in Cellular and Molecular Biology in 2002, and after working as a biologist at the University of California, San Diego, decided on a career change to a more human-oriented science and neuroimaging. She completed a second doctorate in psychology at the University of Birmingham under the supervision of Glyn Humphreys in 2012, and from 2013 to 2016, held a British Academy Postdoctoral Fellowship and EPA Cephalosporin Junior Research Fellowship at Linacre College, University of Oxford. In 2016, Dr Chechlacz returned to the School of Psychology, University of Birmingham as a Bridge Fellow.

Aamir Sohail

Role: Teaching Assistant

Aamir Sohail is an MRC Advanced Interdisciplinary Methods (AIM) DTP PhD student based at the Centre for Human Brain Health (CHBH), University of Birmingham, where he is supervised by Lei Zhang and Patricia Lockwood. He completed a BSc in Biomedical Science at Imperial College London, followed by an MSc in Brain Imaging at the University of Nottingham. He then worked as a Junior Research Fellow at the Centre for Integrative Neuroscience and Neurodynamics (CINN), University of Reading. Outside of research, he is also passionate about facilitating inclusivity and diversity in academia, as well as promoting open and reproducible science.

"},{"location":"#course-overview","title":"Course Overview","text":"

Getting Started: BEAR Portal, BEAR Storage, BlueBEAR
Workshop 1: BlueBEAR GUI, Linux commands
Workshop 2: MRI data formats, FSLeyes, MRI atlases
Workshop 3: DTIfit, TBSS, BET
Workshop 4: Probabilistic tractography, BEDPOSTX, PROBTRACKX
Workshop 5: FEAT, First-level fMRI analysis
Workshop 6: Bash scripting, Submitting jobs, Containers
Workshop 7: Higher-level fMRI analysis, FEATquery
Workshop 8: Resting-state fMRI, Functional connectivity, Neurosynth

Accessing additional course materials

If you are a CHBH member and would like access to additional course materials (lecture recordings etc.), please contact one of the teaching staff members listed above.

"},{"location":"#license","title":"License","text":"

MRI on BEAR is hosted on GitHub. All content in this book (i.e., any files and content in the docs/ folder) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please see the LICENSE file in the GitHub repository for more details.

"},{"location":"contributors/","title":"Contributors","text":"

Many thanks to our contributors for creating and maintaining these resources!

Andrew Quinn 🚧 🖋 · Aamir Sohail 🚧 🖋 · James Carpenter 🖋 · Magda Chechlacz 🖋

Acknowledgements

Thank you to Charnjit Sidhu for their help and support in developing these materials!

"},{"location":"resources/","title":"Additional Resources","text":"

For those wanting to develop their learning beyond the scope of the module, here is a (non-exhaustive) list of links and pages for neuroscientists, covering skills related to working with neuroimaging data, both conceptual and practical.

Contributing to the list

Feel free to suggest additional resources to the list by opening a thread on the GitHub page!

FSL Wiki

Most relevant to the course is the FSL Wiki, the comprehensive guide for FSL by the Wellcome Centre for Integrative Neuroimaging at the University of Oxford.

"},{"location":"resources/#existing-lists-of-resources","title":"Existing lists of resources","text":"

Here are some current 'meta-lists' which already cover a lot of resources themselves:

"},{"location":"resources/#neuroimaging","title":"Neuroimaging","text":"Conceptual understanding

Struggling to grasp the fundamentals of MRI/fMRI? Want to quickly refresh your mind on the physiological basis of the BOLD signal? Well, these resources are for you!

Analysis of fMRI data "},{"location":"resources/#programming","title":"Programming","text":"Unix/Linux "},{"location":"resources/#textbooks","title":"Textbooks","text":""},{"location":"setting-up/","title":"Accessing BlueBEAR and the BEAR Portal","text":"

Before you start with any workshop materials, you will need to familiarise yourself with the CHBH\u2019s primary computational resource, BlueBEAR. The following pages are aimed at helping you get started.

To put these workshop materials into practical use, you will be expected to understand what BlueBEAR is and what it is used for, and to make sure you have access.

Student Responsibility

If you are an MSc student taking the MRICN module, please note that while help will be available during all in-person workshops, it is your responsibility to make sure that you have access to the BEAR Portal and that you are familiar with the information provided in the pre-workshop materials, should you have any problems using it. Failing to gain an understanding of BlueBEAR and the BEAR Portal will prevent you from participating in the practical sessions and completing the module's main assessment (data analysis).

"},{"location":"setting-up/#what-are-bear-and-bluebear","title":"What are BEAR and BlueBEAR?Signing in to the BEAR Portal","text":"

BEAR stands for Birmingham Environment for Academic Research and is a collection of services provided specifically for researchers at the University of Birmingham. BEAR services are used by researchers at the Centre for Human Brain Health (CHBH) for various types of neuroimaging data analysis.

BEAR services and basic resources - such as the ones we will be using for the purpose of the MRICN module - are freely available to the University of Birmingham research community. Extra resources that may be needed for some research projects can be purchased, e.g. access to dedicated nodes and extra storage. This is something your PI or MSc/PhD project supervisor might be using and can give you access to.

BlueBEAR refers to the Linux High Performance Computing (HPC) environment which:

  1. Enables researchers to run jobs simultaneously on many servers (thus providing fast and efficient processing capacity for data analysis)
  2. Gives easy access to multiple apps, software libraries (e.g., software we will be using in this module to analyse MRI data), as well as various software development tools

As computing resources on BlueBEAR rely on Linux, in Workshop 1 you will learn some basic commands, which you will need to be familiar with to participate in subsequent practical sessions and to complete the module's main assessment (data analysis assessment). More Linux commands and the basic principles of scripting will be introduced in subsequent workshops.

There are two steps to gaining access to BlueBEAR:

Gaining access to BEAR Projects

Only a member of academic staff, e.g. your project supervisor or module lead, can apply for a BEAR project. As a student, you cannot apply for a BEAR project. If you are registered as a student on the MRICN module, you should have already been added as a member of the project chechlmy-chbh-mricn. If not, please contact one of the teaching staff.

Even if you are already a member of a BEAR project giving you BlueBEAR access, you will still need to activate your BEAR Linux account via the self-service route or the service desk form. Information on how to do this, with step-by-step instructions, is available on the University Advanced Research Computing page; see the following link.

Please follow the steps above to make sure you have a BEAR Linux account before starting the Workshop 1 materials. To do this you will need to be on campus or using the University Remote Access Service (VPN).

After you have activated your BEAR Linux account, you can now sign-in to the BEAR Portal.

BEAR Portal access requirements

Remember that the BEAR Portal is only available on campus or using the VPN!

If your log in is successful, you will be directed to the main BEAR Portal page as below. This means that you have successfully launched the BEAR Portal.

If you get to this page, you are ready for Workshop 1. For now, you can log out. If you have any problems logging on to BEAR Portal, please email chbh-help@contacts.bham.ac.uk for help and advice.

"},{"location":"setting-up/#bear-storage","title":"BEAR Storage","text":"

The storage associated with each BEAR project is called the BEAR Research Data Store (RDS). Each BEAR project gets 3TB of storage space for free, but researchers (e.g., your MSc project supervisor) can pay for additional storage if needed. The RDS should be used for all data, job scripts and output on BlueBEAR.

If you are registered as a student on the MRICN module, all the data and resources you will need to participate in the MRICN workshops and to complete the module\u2019s main assessment have been added to the MRICN module RDS, and you have been given access to the folder /rds/projects/c/chechlmy-chbh-mricn. When working on your MSc project using BEAR services, your supervisor will direct you to the relevant RDS project.

External access to data

If you are not registered on the module and would like access to the data, please contact one of the teaching staff members.

"},{"location":"setting-up/#finding-additional-information","title":"Finding additional information","text":"

There is extensive BEAR technical documentation provided by the University of Birmingham BEAR team (see links below). While for the purposes of this module you are not expected to be familiar with all the information provided there, you might find it useful if you want to know more about the computing resources available to researchers at CHBH via BEAR services, especially if you will be using BlueBEAR for other purposes (e.g., for your MSc project).

You can find out more about BEAR, BlueBEAR and RDS on the dedicated BEAR webpages:

"},{"location":"workshop1/intro-to-bluebear/","title":"Introduction to the BlueBEAR Portal","text":"

At this point you should know how to log in and access the main BEAR Portal page.

Please navigate to https://portal.bear.bham.ac.uk, log in and launch the BEAR Portal; you should get to the page as below.

The BlueBEAR Portal is a web-based interface enabling access to various BEAR services and BEAR apps including:

The BlueBEAR Portal is essentially a user-friendly alternative to using the command-line interface, i.e. your computer's terminal.

To view all files and data you have access to on BlueBEAR, click on 'Files' as illustrated above. You will see your home directory (your BEAR Linux home directory), and all RDS projects you are a member of.

You should be able to see /rds/projects/c/chechlmy-chbh-mricn (MRICN module\u2019s RDS project). By selecting the 'Home Directory' or any 'RDS project' you will open a second browser tab, displaying the content. In the example below, you see the content of one of Magda's projects.

Inside the module's RDS project, you will find that you have a folder labelled xxx, where xxx is your University of Birmingham ADF username. If you navigate to that folder, /rds/projects/c/chechlmy-chbh-mricn/xxx, you will be able to perform various file operations from there. However, for now, please do not move, download, or delete any files.

Data confidentiality

Please also note that the MRI data you will be given to work with should be used on BlueBEAR only and not downloaded to your personal desktop or laptop!

"},{"location":"workshop1/intro-to-bluebear/#launching-the-bluebear-gui","title":"Launching the BlueBEAR GUI","text":"

The BlueBEAR Portal options in the menu bar, 'Jobs', 'Clusters' and 'My Interactive Sessions', can be used to submit and edit jobs to run on the BlueBEAR cluster and to get information about your currently running jobs and interactive sessions. Some of these processes can also be executed using the Code Server Editor (VS Code), accessible via Interactive Apps. We won't explore these options in detail now, but some will be introduced later when needed.

For example, from the 'Clusters' option you can jump directly onto the BlueBEAR terminal and, using this built-in terminal, submit data analysis jobs and/or employ your own containerised version of neuroimaging software rather than the software already available on BlueBEAR. We will cover containers, scripting and submitting jobs in Workshop 6. For now, just click on this option and see what happens; you can subsequently exit/close the terminal page.

Finally, from the BlueBEAR Portal menu bar you can select 'Interactive Apps' and from there access various GUI applications you wish to use, including JupyterLab, RStudio, MATLAB and most importantly the BlueBEAR GUI, which we will be using to analyse MRI data in the subsequent workshops.

Please select 'BlueBEAR GUI'. This will bring up a page for you to specify options for the job that starts the BlueBEAR GUI. You can leave most of these options at their defaults, but please change 'Number of Hours' to 2 (our workshops last 2 hours; for some other analysis tasks you might need more time) and make sure that the selected 'BEAR Project' is chechlmy-chbh-mricn. Next click on 'Launch'.

It will take a few minutes for the job to start. Once it's ready, you'll see an option to connect to the BlueBEAR GUI. Click on 'Launch BlueBEAR GUI'.

Once you have launched the BlueBEAR GUI, you are now in a Linux environment, on a Linux Desktop. The following section will guide you on navigating and using this environment effectively.

Re-launching the BlueBEAR GUI

In the main window of the BlueBEAR Portal you will be able to see that you have an interactive session running (the information above will remain there). This is important: if you close the Linux Desktop by mistake, you can click on 'Launch BlueBEAR GUI' again to reopen it.

"},{"location":"workshop1/intro-to-linux/","title":"Introduction to Linux","text":"

Linux is a computer Operating System (OS) similar to Microsoft Windows or macOS. Linux is very widely used in the academic world, especially in the sciences, and it is the OS we use on BlueBEAR. It is derived from one of the oldest, most stable and most widely used OS platforms around, Unix. Many versions of Linux are freely available to download and install, including CentOS (Community ENTerprise Operating System) and Debian, which you might be familiar with. You can also install Linux alongside Microsoft Windows in a dual-boot environment on your laptop or desktop computer.

Linux and neuroimaging

Linux is particularly suited for clustering computers together and for efficient batch processing of data. All major neuroimaging software runs on Linux. This includes FSL, SPM, AFNI, and many others. Linux, or some version of Unix, is used in almost all leading neuroimaging centres. Both MATLAB and Python also run well in a Linux environment.

If you work in neuroimaging, it is to your advantage to become familiar with Linux. The more familiar you are, the more productive you will become. For some of you, this might be a challenge. The environment will present a new learning experience, one that will take time and effort to learn. But in the end, you should hopefully realize that the benefits of learning to work in this new computer environment are indeed worth the effort.

Linux is not like the Windows or macOS environments. It is best used by typing commands into a Terminal client and by writing small batch command programs; frequently you may not even need to use the mouse. The Linux environment may take some getting used to, but it will become more familiar throughout the course as we use it to navigate our file system and to script our analyses. For now, we will simply explore the Linux terminal and some simple commands.

"},{"location":"workshop1/intro-to-linux/#using-the-linux-terminal","title":"Using the Linux Terminal","text":"

The BlueBEAR GUI lets you load various applications through the Linux environment and a built-in Terminal client. Once you have launched the BlueBEAR GUI, you will see a new window, and from there you can open the Terminal client. There are different ways to open a Terminal in the BlueBEAR GUI window, as illustrated below.

Either by selecting from the drop-down menu:

Or by selecting the folder at the bottom of the screen:

In either case you will load the terminal:

Once you have started the terminal, you will be able to load the required applications (e.g., to start the FSL GUI). FSL (FMRIB Software Library) is a neuroimaging software package we will be using in our workshops for MRI data analysis.

When using the BlueBEAR GUI Linux desktop, you can work in four separate spaces/windows simultaneously. For example, if you are planning on using multiple apps, rather than opening multiple terminals and apps in the same space, you can move to another space by clicking on the 'workspace manager' in the Linux desktop window.

Linux is fundamentally a command line-based operating system, so although you can use the GUI interface with many applications, it is essential you get used to issuing commands through the Terminal interface to improve your work efficiency.

Make sure you have an open Terminal as per the instructions above. Note that a Terminal is a text-based interface, so generally the mouse is not much use. You need to get used to taking your hand off the mouse and letting go of it. Move it away, out of reach. You can then get used to using both hands to type into a Terminal client.

[chechlmy@bear-pg0210u07a ~]$ as shown above in the Terminal Client is known as the system prompt. The prompt usually identifies the user and the system hostname. You can type commands at the system prompt (press the Enter key after each command to make it run). The system then returns output based on your command to the same Terminal.

Try typing ls in the Terminal.

This command tells Linux to print a list of the current directory contents. We will return to basic Linux commands later; you should learn them in order to use BlueBEAR for neuroimaging data analysis.

Why bother with Linux?

You may wonder why you should invest the time to learn the names of the various commands needed to copy files, change directories and to do general things such as run image analysis programs via the command line. This may seem rather clunky. However, the commands you learn to run on the command line in a terminal can alternatively be written in a text file. This text file can then be converted to a batch script that can be run on data sets using the BlueBEAR cluster, potentially looping over hundreds or thousands of different analyses, taking many days to run. This is vastly more efficient and far less error prone than using equivalent graphical tools to do the same thing, one at a time.
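As a taste of what this looks like, here is a minimal sketch of such a batch loop, using hypothetical subject names (sub01 to sub03); the same command you would otherwise type by hand runs once per subject:

```bash
#!/bin/bash
# Hypothetical subject list; in practice this might be hundreds of directories.
for subj in sub01 sub02 sub03; do
    echo "analysing ${subj}"   # stand-in for a real analysis command
done
```

Saved in a text file (e.g. a hypothetical run_analysis.sh), a loop like this can be submitted to the BlueBEAR cluster instead of being run interactively.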

When you open a new terminal window it opens in a particular directory. By default, this will be your home directory:

/rds/homes/x/xxx

or the Desktop folder in your home directory:

/rds/homes/x/xxx/Desktop (where x is the first letter of your last name and xxx is your University of Birmingham ADF username).

On BlueBEAR files are stored in directories (folders) and arranged into a tree hierarchy.

Examples of directories on BlueBEAR include:

Directory separators on Linux and Windows

/ (forward slash) is the Linux directory separator. Note that this is different from Windows (where the backward slash \\ is the directory separator).

The current directory is always called . (i.e. a single dot).

The directory above the current directory is always called .. (i.e. dot dot).

Your home directory can always be accessed using the shortcut ~ (the tilde symbol). Note that this is the same as /rds/homes/x/xxx.

You need to remember this to use and understand basic Linux Commands.
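To see how these shortcuts behave, here is a short, safe demonstration using a hypothetical scratch directory under /tmp (so nothing in your home or project space is touched):

```bash
mkdir -p /tmp/demo/sub   # hypothetical scratch directory for the demo
cd /tmp/demo/sub
pwd                      # /tmp/demo/sub
cd ..                    # '..' is the directory above
pwd                      # /tmp/demo
cd .                     # '.' is the current directory, so nothing changes
pwd                      # /tmp/demo
cd ~                     # '~' jumps to your home directory
```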

"},{"location":"workshop1/intro-to-linux/#basic-linux-commands","title":"Basic Linux Commands","text":"

pwd (Print Working Directory)

In a Terminal type pwd followed by the return (enter) key to find out the name of the directory where you are. You are always in a directory and can (usually) move to directories above you or below to subdirectories.

For example if you type pwd in your terminal you will see: /rds/homes/x/xxx (e.g., /rds/homes/c/chechlmy)

cd (Change Directory)

In a Terminal window, type cd followed by the name of a directory to gain access to it. Keep in mind that you are always in a directory and normally are allowed access to any directories hierarchically above or below.

Type in your terminal the examples below:

cd /rds/projects

cd /rds/homes/

cd .. (to change to the directory above using .. shortcut)

To find out where you are now, type pwd:

(answer: /rds)

If the directory is not located above or below the current directory, then it is often less confusing to write out the complete path instead. Try this in your terminal:

cd /rds/homes/x/xxx/Desktop (where x is the first letter of your last name and xxx is your ADF username)

Changing directories with full paths

Note that it does not matter which directory you are in when you execute this command; the directory will always be changed to the full path you specified.

Remember that the tilde symbol ~ is a shortcut for your home directory. Try this:

```bash
cd /rds/projects
cd ~
pwd
```

You should be now back in your home directory.

ls (List Files)

The ls command (lowercase L, S) allows you to see a summary list of the files and directories located in the current directory. Try this:

```bash
cd /rds/projects/c
ls
```

(you should now see a long list of various BEAR RDS projects)

Before moving to the next section, please close your terminal by clicking on \u201cx\u201d in the top right of the Terminal.

cp (Copy files/directories)

The cp command copies files and/or directories FROM a source TO a destination. It creates the destination file if it doesn't exist. In some cases, you might need to specify the complete path to the file location.

Here are some examples (please do not type them, they are only examples):

| Command | Function |
| --- | --- |
| `cp myfile yourfile` | Basic file copy (in current directory) |
| `cp -r data data_copy` | Copy a directory (`-r` is needed to include its contents) |
| `cp -r ~fred/data .` | Recursively copy fred's data directory to the current directory |
| `cp ~fred/fredsfile myfile` | Copy a file from fred's directory and rename it |
| `cp ~fred/* .` | Copy all files from fred's directory to the current directory |
| `cp ~fred/test* .` | Copy all files that begin with test, e.g. test, test1.txt |

In subsequent workshops we will practice using the cp command. For now, look at the examples above to understand its usage. There are also some exercises below to check your understanding.
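If you would like to try the pattern safely, the following sketch creates a throwaway file and directory under /tmp (hypothetical names, nothing to do with the module data) and copies them:

```bash
mkdir -p /tmp/cpdemo/data            # hypothetical scratch area
echo "hello" > /tmp/cpdemo/myfile
cd /tmp/cpdemo
cp myfile yourfile                   # basic file copy
cp -r data data_copy                 # -r is needed to copy a directory
ls                                   # data  data_copy  myfile  yourfile
```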

mv, rmdir and mkdir (Moving, removing and making files/directories)

The mv command will move files FROM a source TO a destination. It works like copy, except the file is actually moved. If applied to a single file, this effectively changes the name of the file. (Note there is no separate renaming command in Linux). The command also works on directories.

Here are some examples (again please do not type these in):

| Command | Function |
| --- | --- |
| `mv myfile yourfile` | Renames the file |
| `mv ~/data/somefile somefile` | Moves the file |
| `mv ~/data/somefile yourfile` | Moves and renames |
| `mv ~/data/* .` | Moves multiple files |

There are also the mkdir (make directory) and rmdir (remove empty directory) commands.

You can try these two commands. Open a new Terminal and type:

```bash
mkdir testdir
ls
```

In your home directory you will see now a new directory testdir. Now type:

```bash
rmdir testdir
ls
```

You should notice that the testdir has been removed from your home directory.

To remove a file you can use the rm command. Note that once files are deleted at the command line prompt in a terminal window, unlike in Microsoft Windows, you cannot get files back from the wastebin.

e.g. rm junk.txt (this is just an example, do not type it in your terminal)
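Putting mv and rm together, here is a safe worked example in a throwaway directory (hypothetical file names, fine to run yourself):

```bash
mkdir -p /tmp/mvdemo && cd /tmp/mvdemo
echo "some text" > junk.txt
mv junk.txt notes.txt    # mv on a single file renames it
ls                       # notes.txt
rm notes.txt             # deleted for good -- no wastebin to recover from
```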

Clearing your terminal

Often, after running many commands, your terminal will be full of output and difficult to read. To clear the terminal screen, type clear. This is especially helpful when you have been typing lots of commands and need a clean terminal to help you focus.

Linux commands in general

Note that most commands in Linux have a similar syntax: command name [modifiers/options] input output

The syntax of the command is very important. There must be spaces in between the different parts of the command. You need to specify input and output. The modifiers (in brackets) are optional and may or may not be needed depending on what you want to achieve.

For example, take the following command:

cp -r /rds/projects/f/fred/data ~/tmp (This is an example, do not type this)

In the above example, -r is an option meaning 'recursive', often used with cp and other commands; here it copies a directory, including all its contents, into another directory.

"},{"location":"workshop1/intro-to-linux/#opening-fsl-on-the-bluebear-gui","title":"Opening FSL on the BlueBEAR GUI","text":"

FSL (FMRIB Software Library) is a software library containing multiple tools for the processing, statistical analysis, and visualisation of magnetic resonance imaging (MRI) data. Subsequent workshops will cover the use of some FSL tools for structural, functional and diffusion MRI data. This workshop only covers how to start the FSL app on the BlueBEAR GUI Linux desktop, and some practical aspects of using FSL, specifically running it in the terminal either in the foreground or in the background.

There are several different versions of FSL software available on BlueBEAR. You can search which versions of FSL are available on BlueBEAR as well as all other available software using the following link: https://bear-apps.bham.ac.uk

From there you will also find information on how to load different software. Below is an example of loading one of the available versions of FSL.

To open FSL in terminal, you first need to load the FSL module. To do this, you need to type in the Terminal a specific command.

First, either close the Terminal you have been previously using and open a new one, or simply clean it. Next, type:

module load FSL/6.0.5.1-foss-2021a

You will see various processes running in the terminal. Once these have stopped and you see the system prompt again, type:

fsl

This fsl command will initiate the FSL GUI as shown below.

Now try typing ls in the same terminal window and pressing return.

Notice how nothing appears to happen (your keystrokes are shown as being typed in but no actual event seems to be actioned). Indeed, nothing you type is being processed and the commands are being ignored. That is because the fsl command is running in the foreground in the terminal window. Because of that it is blocking other commands from being run in the same terminal.

Now close the FSL GUI by clicking 'Exit'. Notice that control has been returned to the Terminal and that commands you type are acted on again. Try typing ls again; it should now work in the Terminal.

Now restart FSL by typing fsl & (note the ampersand). Notice that your new commands are still being processed: the fsl command is now running in the background, allowing you to run other commands in parallel from the same Terminal. Typing & after any command makes it run in the background and keeps the Terminal free for you to use.

Sometimes you may forget to type & after a command and the Terminal becomes blocked again. If this happens, press CTRL-Z (CTRL key and z key) in the Terminal to suspend the foreground process.

You should get a message like "[1]+ Stopped fsl". You will notice that the FSL GUI is now unresponsive (try clicking on some of the buttons). The fsl process has been suspended.

Now type bg (for background) and press Enter. You should find the FSL GUI is responsive again and input to the terminal works once more. If you clicked the 'Exit' button while the FSL GUI was unresponsive, FSL might close now.

Running and killing commands in the terminal

If, for some reason, you want to resume a suspended command in the foreground, type fg (for foreground) rather than bg (for background). If you want to kill (rather than suspend) a command that is running in the foreground, press CTRL-c (CTRL key and c key).
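These job-control commands work for any program, not just fsl. A safe way to experiment is with sleep, which simply waits for the given number of seconds:

```bash
sleep 300 &     # '&' starts the command in the background
jobs            # lists your jobs, e.g. "[1]+ Running  sleep 300 &"
kill %1         # kill background job number 1
```

fg %1 would instead bring job 1 back to the foreground, where CTRL-Z suspends it and CTRL-c kills it.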

Linux: some final useful tips

TIP 1:

When typing a command, or the name of a directory or file, you never need to type everything out: the terminal will auto-complete the command or file name if you press the TAB key as you go along. Try using the TAB key when typing commands or the complete path to a specific directory.

TIP 2:

If you need help understanding what the options are, or how to use a command, try adding --help to the end of your command. For example, for a better understanding of the du options, type:

du --help [enter]

TIP 3:

There are many useful online lists of these various commands, for example: www.ss64.com/bash

Exercise: Basic Linux commands

Please complete the following exercises; you should hopefully know which Linux commands to use!

If unsure, check your results with someone else or ask for help!

The correct commands are provided below. (click to reveal)

Linux Commands Exercise (Answers)
  1. clear

  2. cd ~ or cd /rds/homes/x/xxx

  3. pwd

  4. mkdir test

  5. mv test test1, then mkdir test2

  6. cp -r test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/ or mv test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/

  7. rm -r test1 test2, then ls

Workshop 1: Further Reading and Reference Material

Here are some additional resources that introduce users to Linux:

"},{"location":"workshop1/workshop1-intro/","title":"Workshop 1 - Introduction to BlueBEAR and Linux","text":"

Welcome to the first workshop of the MRICN course!

In this workshop we will introduce you to the environment where we will run all of our analyses: the BlueBEAR Portal. Whilst you can run neuroimaging analyses on your own device, the specialist software and high-performance computing resources needed make a specifically curated environment necessary. At the CHBH we use BlueBEAR, which satisfies both of these needs. You will also be introduced to the Linux operating system (OS), the OS supported on the BlueBEAR Portal through Ubuntu. Linux is similar to other operating systems such as macOS and Windows, and can similarly be navigated using terminal commands. Learning how to do so is key when working with data on Linux and, as you will see in future workshops, is particularly useful when creating and running scripts for more complex data analyses.

Overview of Workshop 1

Topics for this workshop include:

Pre-requisites for the workshop

Please ensure that you have completed the 'Setting Up' section of this course, as you will require access to the BEAR Portal for this workshop.

"},{"location":"workshop2/mri-data-formats/","title":"Working with MRI Data - Files and Formats","text":"MRI Image Fundamentals

When you acquire an MRI image of the brain, in most cases it is either a 3D image, i.e., a volume acquired at a single timepoint (e.g., T1-weighted, FLAIR scans), or a 4D multi-volume image acquired as a timeseries (e.g., fMRI scans). Each 3D volume consists of multiple 2D slices, which are individual images.

The volume is made up of 3D voxels, with a typical size between 0.25 and 4 mm, but not necessarily the same in all three directions. For example, you can have a voxel size of [1mm x 1mm x 1mm] or [0.5mm x 0.5mm x 2mm]. The voxel size determines the image resolution.

The final important feature of an MRI image is the field of view (FOV): the matrix of voxels, computed as the voxel size multiplied by the number of voxels along each dimension. It describes the coverage of the brain in your MRI image. The FOV is sometimes given for the entire 3D volume or for an individual 2D slice, and sometimes it is defined based on slice thickness and the number of acquired slices.
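To make the arithmetic concrete, take a hypothetical acquisition with 0.5 x 0.5 x 2 mm voxels, a 384 x 384 in-plane matrix and 60 slices; each FOV dimension is simply voxel size times voxel count:

```bash
# FOV per axis = voxel size (mm) * number of voxels along that axis
awk 'BEGIN { printf "FOV: %g x %g x %g mm\n", 0.5*384, 0.5*384, 2*60 }'
# prints: FOV: 192 x 192 x 120 mm
```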

Image and standard space

When you acquire MRI images of the brain, you will find that these images differ in terms of head position, image resolution and FOV, depending on the sequence and data type (e.g., T1 anatomical, diffusion MRI, fMRI). We often use the term "image space" to describe these differences, i.e., structural (T1), diffusion or functional space.

In addition, we use the term "standard space" to denote the standard dimensions and coordinates of a template brain, used when reporting the results of group analyses. Our brains differ in size and shape, and thus for the purposes of our analyses (both single-subject and group-level) we need a common standard space. The most common brain template is the MNI152 brain (an average of 152 healthy brains).

The process of alignment between different image spaces is called registration or normalization, and its purpose is to make sure that voxel and anatomical locations correspond to the same parts of the brain for each image type and/or participant.

"},{"location":"workshop2/mri-data-formats/#mri-data-formats","title":"MRI Data Formats","text":"

MRI scanners collect MRI data in an internal format that is unique to the scanner manufacturer, e.g., Philips, Siemens or GE. The manufacturer then allows you to export the data into a more usable intermediate format. We often refer to this intermediate format as raw data as it is not directly usable and needs to be converted before being accessible to most neuroimaging software packages.

The most common format used by the various scanner manufacturers is the DICOM format. DICOM images corresponding to a single scan (e.g., a T1-weighted scan) might be one large file or multiple files (one per volume or one per acquired slice). This depends on the scanner and the data server used to retrieve/export the data. There are other formats, e.g., PAR/REC, that are specific to Philips scanners. The raw data needs to be converted into a format that the analysis packages can use.

Retrieving MRI data at the CHBH

At CHBH we have a Siemens 3T PRISMA scanner. When you acquire MRI scans at CHBH, data is pushed directly to a data server in the DICOM format. This should be automatic for all research scans. In addition, for most scans, this data is also directly converted to NIfTI format. So, at the CHBH you will likely retrieve MRI data from the scanner in NIfTI format.

NIfTI (Neuroimaging Informatics Technology Initiative) is the most widely used format for MRI data, accessible by the majority of neuroimaging software packages, e.g., FSL or SPM. Another, older format that is still sometimes used is Analyze (with each image consisting of two files, .img and .hdr).

NIfTI files have either the extension .nii or .nii.gz (a compressed .nii file); there is only one NIfTI image file per scan. DICOM files usually have the suffix .dcm, although these files might be additionally compressed with gzip as .dcm.gz files.
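Because .nii.gz is simply a gzip-compressed .nii, the standard gzip tools convert between the two. A sketch using a hypothetical dummy file (not real scan data):

```bash
echo "dummy" > scan.nii   # stand-in for a real NIfTI file
gzip scan.nii             # produces scan.nii.gz (and removes scan.nii)
gunzip scan.nii.gz        # back to scan.nii
```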

"},{"location":"workshop2/mri-data-formats/#working-with-mri-data","title":"Working with MRI Data","text":"

We will now convert some DICOM images to NIfTI ourselves, using data collected at the CHBH.

Servers do not always provide MRI data as NIfTIs

While at the CHBH you can download MRI data in NIfTI format, this might not be the case at other neuroimaging centres, so you should learn how to do the conversion yourself.

The data is located in /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH.

First, log in into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). Open a new terminal window and navigate to your MRICN project folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx (where xxx is your ADF username)

Next copy the data from CHBH scanning sessions:

```bash
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH .
pwd
```

After typing pwd, the terminal should show /rds/projects/c/chechlmy-chbh-mricn/xxx (i.e., you should be inside your MRICN project folder).

Then type:

```bash
cd CHBH
ls
```

You should see data from 3 scanning sessions. Note that there are two files per scan session. One is labelled XXX_dicom.zip and contains the DICOM files of all data from the scan session. The other is labelled XXX_nifti.zip and contains NIfTI versions of the same data, converted from DICOM.

In general, both the DICOM and NIfTI data should always be copied from the server and saved by the researcher after each scan session. The DICOM file is needed in case there are problems with the automatic conversion to NIfTI. However, most of the time the only file you will need to work with is the XXX_nifti.zip file containing the NIfTI versions of the data.

We will now unpack some of the data to explore the data structure. In your terminal, type:

```bash
unzip 20191008#C4E7_nifti.zip
cd 20191008#C4E7_nifti
ls
```

You should see six files listed as below, corresponding to 3 scans (two fMRI scans and one structural scan):

```
2.5mm_2000_fMRI_v1_6.json
2.5mm_2000_fMRI_v1_6.nii.gz
2.5mm_2000_fMRI_v1_7.json
2.5mm_2000_fMRI_v1_7.nii.gz
T1_vol_v1_5.json
T1_vol_v1_5.nii.gz
```

JSON files

You may have noticed that for each scan file (NIfTI file, .nii.gz) there is also an autogenerated .json file. This is an information file (in an open standard format) that contains important metadata for our data analysis. For example, the 2.5mm_2000_fMRI_v1_6.json file contains slice timing information, i.e., the exact point in time during the 2 s TR (repetition time) at which each slice was acquired, which can be used later in fMRI pre-processing. We will come back to this later in the course.
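Sidecar JSON files are plain text, so you can peek at them straight from the terminal. A sketch with a hypothetical, minimal sidecar (the field names follow the BIDS convention written by dcm2niix):

```bash
# Create a toy sidecar file for illustration only
cat > demo.json << 'EOF'
{"RepetitionTime": 2.0, "SliceTiming": [0.0, 0.5, 1.0, 1.5]}
EOF
grep -o '"RepetitionTime": [0-9.]*' demo.json
# prints: "RepetitionTime": 2.0
```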

For now, let's look at another dataset. In your terminal type:

```bash
cd ..
unzip 20221206#C547_nifti.zip
cd 20221206#C547_nifti
ls
```

You should now see a list of 10 files, corresponding to 3 scans (two diffusion MRI scans and one structural scan). For each diffusion scan, in addition to the .nii.gz and .json files, there are two additional files, .bval and .bvec that contain important information about gradient strength and gradient directions (as mentioned in the MRI physics lecture). These two files are also needed for later analysis (of diffusion MRI data).

We will now look at a method for converting data from the DICOM format to NIfTI.

```bash
cd ..
unzip 20191008#C4E7_dicom.zip
cd 20191008#C4E7_dicom
ls
```

You should see a list of 7 sub-directories. Each top level DICOM directory contains sub-directories with each individual scan sequence. The structure of DICOM directories can vary depending on how it is stored/exported on different systems. The 7 sub-directories here contain data for four localizer scans/planning scans, two fMRI scans and one structural scan. Each sub-directory also contains several .dcm files.

There are several software packages that can convert DICOM to NIfTI, but dcm2niix is the most widely used. It is available as standalone software or as part of MRIcroGL, a popular brain visualization tool similar to FSLeyes. dcm2niix is available on BlueBEAR, but you need to load it first using the terminal.

To do this, in the terminal type:

module load bear-apps/2022b

Wait for the apps to load and then type:

module load dcm2niix/1.0.20230411-GCCcore-12.2.0

To convert the .dcm files in one of the sub-directories to NIfTI using dcm2niix from terminal, type:

dcm2niix T1_vol_v1_5

If you now check the T1_vol_v1_5 sub-directory, you should find there a single .nii file and a .json file.

Converting more MRI data

Now try converting to NIfTI the .dcm files from the scanning session 20221206#C547, which has 3 DICOM sub-directories: the two diffusion scans diff_AP and diff_PA and one structural scan MPRAGE.

To do this, you will first need to change the current directory, unzip, change the directory again and then run the dcm2niix command as above.

If you have done it correctly, you will find .nii and .json files generated in the structural sub-directory, and in the diffusion sub-directories you will also find .bval and .bvec files.

Now that we have our MRI data in the correct format, we will take a look at the brain images themselves using FSLeyes.

"},{"location":"workshop2/visualizing-mri-data/","title":"MRI data visualization with FSLeyes","text":"

FSL (FMRIB Software Library) is a comprehensive neuroimaging software library for the analysis of structural and functional MRI data. FSL is widely used, freely available, runs on both Linux and Mac OS as well as on Windows via a Virtual Machine.

FSLeyes is the FSL viewer for 3D and 4D data. FSLeyes is available on BlueBEAR, but you need to load it first. You can just load FSLeyes as a standalone software, but as it is often used with other FSL tools, you often want to load both (FSL and FSLeyes).

In this session we will only be loading FSLeyes by itself, and not with FSL.

FSL Wiki

Remember that the FSL Wiki is an important source for all things FSL!

"},{"location":"workshop2/visualizing-mri-data/#getting-started-with-fsleyes","title":"Getting started with FSLeyes","text":"

Assuming that you have started directly from the previous page, first close your previous terminal (to close dcm2niix). Then open a new terminal and navigate to the correct folder by typing:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH

To open FSLeyes, type:

module load FSL/6.0.5.1-foss-2021a-fslpython

There are different versions of FSL on BlueBEAR; however, this is the one you need in order to use FSL together with FSLeyes.

Wait for FSL to load and then type:

module load FSLeyes/1.3.3-foss-2021a

Again, wait for FSLeyes to load (it may take a few minutes). After this, to open FSLeyes, type in your terminal:

fsleyes &

The importance of '&'

Why do we type fsleyes & instead of fsleyes?

You should then see the setup below, which is the default FSLeyes viewer without an image loaded.

You can now load/open an image to view. Click 'File' → 'Add from file' and then select the file in your directory, e.g., /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii.

You can also type directly in the terminal:

fsleyes file.nii.gz

where you replace file.nii.gz with the name of the actual file you want to open.

However, you will need to include the full path to the file if your terminal is not already in the same directory, e.g. fsleyes /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii

You should now see a T1 scan loaded in ortho view with three canvases corresponding to the sagittal, coronal, and axial planes.

Please now explore the various settings in the ortho view panel:

Also notice the abbreviations on the three canvases:

FSL comes with a collection of NIfTI standard templates, which are used for image registration and normalisation (part of MRI data analysis). You can also load these templates in FSLeyes.

To load a template, click 'File' → 'Add Standard' and, for example, select the file named MNI152_T1_2mm.nii.gz. If you still have the T1.nii image open, first close it (by selecting 'Overlay' → 'Remove') and then load the template.

The image below depicts the various tools that you can use on FSLeyes, give them a go!

We will now look at fMRI data. First close the previous image ('Overlay' → 'Remove') and then load the fMRI image. To do this, click 'File' → 'Add from file' and then select the file /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/2.5mm_2000_fMRI.nii.gz.

Your window should now look like this:

Remember that this fMRI data file is a 4D image: a set of 90-odd volumes representing a timeseries. To cycle through the volumes, use the up/down buttons, or type a volume number in the 'Volume' box to step through several volumes.

Now try playing the 4D file in 'Movie' mode by clicking this button. You should see some slight head movement over time. Click the button again to stop the movie.

As the fMRI data is 4D, this means that every voxel in the 3D-brain has a timecourse associated with it. Let's now have a look at this.

Keeping the same dataset open (2.5mm_2000_fMRI.nii.gz), in the FSLeyes menu select 'View' → 'Time series'.

FSLeyes should now look like the picture below.

What exactly are we looking at?

The functional image displayed here is the data straight from the scanner, i.e., raw, un-preprocessed data that has not been analyzed. In later workshops we will learn how to view analyzed data e.g., display statistical maps etc.

You should see a timeseries shown at the bottom of the screen corresponding to the voxel that is selected in the main viewer. Move the mouse to select other voxels to investigate how variable the timecourse is.

Within the timeseries window, hit the '+' button to show the 'Plot List' characteristics for this timeseries.

Compare the timeseries in different parts of the brain, just outside the brain (skull and scalp), and in the airspace outside the skull. You should observe that these have very different mean intensities.

The timeseries of multiple different voxels can be compared using the '+' button. Hit '+' and then select a new voxel. Characteristics of the timeseries such as plotting colour can also be changed using the buttons on the lower left of the interface.

"},{"location":"workshop2/visualizing-mri-data/#atlas-tools","title":"Atlas tools","text":"

FSL comes not only with a collection of\u00a0NIFTI standard templates but also with several built-in atlases, both probabilistic and histological (anatomical), comprising cortical, sub-cortical, and white matter parcellations. You can explore the full list of included atlases here.

We will now have a look at some of these atlases.

Firstly, close all open files in FSLeyes (or close FSLeyes altogether and start it up again in your terminal by running fsleyes &).

In the FSLeyes menu, select 'File' \u2192 'Add Standard' and then choose the file called MNI152_T1_2mm.nii.gz (this is a template brain in MNI space).

The MNI152 atlas

Remember that the MNI152 atlas is a standard brain template, created by averaging 152 MRI scans of healthy adults, that is widely used as a reference space in neuroimaging research.

Now select from the menu 'Settings' \u2192 'Ortho View 1' and tick the box for 'Atlases' at the bottom.

You should now see the 'Atlases' panel open as shown below.

The 'Atlases' panel is organized into three sections:

The 'Atlas information' tab provides information about the current display location, relative to one or more atlases selected in this tab. We will soon see how to use this information.

The 'Atlas search' tab can be used to search for specific regions by browsing through the atlases. We will later look at how to use this tab to create region-of-interest (ROI) masks.

The 'Atlas management' tab can be used to add or delete atlases. This is an advanced feature, and we will not be using it during our workshops.

We will now have a look at how to work with FSL atlases. First we need to choose some atlases to reference. In the 'Atlases' \u2192 'Atlas Information' window (bottom of screen in middle panel) make sure the following are ticked:

Now let's select a point in the standard brain. Move the cursor to the voxel position: [x=56, y=61, z=27] or enter the voxel location in the 'Location' window (2nd column).

MNI Co-ordinate Equivalent

Note that the equivalent MNI coordinates (shown in the 1st column/Location window) are [-22,-4,-18].

It may not be immediately obvious what part of the brain you are looking at. Look at the 'Atlases' window. The report should say something like:

Harvard-Oxford Cortical Structural Atlas \nHarvard-Oxford Subcortical Structural Atlas \n98% Left Amygdala\n

Checking the brain region with other atlases

What do the Juelich Histological Atlas & Talairach Daemon Labels report?

The Harvard-Oxford and Juelich are both probabilistic atlases. They report the percentage likelihood that the area named matches the point where the cursor is.

The Talairach Atlas is a simpler labelling atlas. It is based on a single brain (of a 60-year-old French woman) and is an example of a deterministic atlas: it reports the name of the nearest label to the cursor coordinates.

From the previous reports, particularly the Harvard-Oxford Subcortical Atlas and the Juelich Atlas, it should be obvious that we are most likely in the left amygdala.
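The logic of a probabilistic atlas lookup can be sketched in a few lines of Python. This is purely illustrative: the region names and probabilities below are made-up stand-ins, not the real atlas values.

```python
# Conceptual sketch of a probabilistic atlas lookup at one voxel.
# The stored probabilities below are illustrative only.

def report_labels(voxel_probs, threshold=0):
    """Return (probability, label) pairs at a voxel, highest first."""
    return sorted(
        ((p, label) for label, p in voxel_probs.items() if p > threshold),
        reverse=True,
    )

# Hypothetical probabilities stored at one voxel:
voxel = {"Left Amygdala": 98, "Left Hippocampus": 2}
for prob, label in report_labels(voxel):
    print(f"{prob}% {label}")
```

A deterministic atlas, by contrast, would store a single label per voxel and report only the nearest one.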

Now click the '(Show/Hide)' link after the Left Amygdala result (as shown below):

This shows the (max) volume that the probabilistic Harvard-Oxford Subcortical Atlas has encoded for the Left Amygdala. The cursor is right in the middle of this volume.

In the 'Overlay list', click and select the top amygdala overlay. You will note that the min/max ranges are set to 0 and 100. If they are not, change them to 0 and 100. These reflect the % likelihood of the labelling being correct.

If you increase the min value from 0% to 50%, then you will see the size of the probability volume for the left amygdala will decrease.

It now shows only the volume where there is a 50% or greater probability that this label is correct.

Click the (Show/Hide) link after the Left Amygdala; the amygdala overlay will disappear.

Exercise: Coordinate Localization

Have a go at localizing exactly what the appropriate label is for these coordinates:

If unsure check your results with someone else, or ask for help!

Before continuing

Make sure all overlays are closed (but keep the MNI152_T1_2mm.nii.gz open) before moving to the next section.

"},{"location":"workshop2/visualizing-mri-data/#using-atlas-tools-to-find-a-brain-structure","title":"Using atlas tools to find a brain structure","text":"

It is often helpful to locate where a specific structure is in the brain and to visually assess its size and extent.

Let's suppose we want to visualize where Heschl's Gyrus is. In the bottom 'Atlases' window, click on the second tab ('Atlas search').

In the Search box, start typing the word 'Heschl\u2026'. You should find that the system quickly locates an entry for Heschl's Gyrus in the Harvard-Oxford Cortical Atlas. Click on it to select.

Now if you tick the box immediately below, next to Heschl's Gyrus, an overlay will be added to the 'Overlay' list at the bottom (see below). Heschl's Gyrus should now be visible in the main image viewer.

Now click on the '+' button next to the tick box. This will centre the viewing coordinates to be in the middle of the atlas volume (see below).

Exercise: Atlas visualization

Now try the following exercises for yourself:

You can change the colour of the overlays by selecting the option below:

Other options also exist to help you navigate the brain and recognize the different brain structures and their relative positions.

Make sure you have first closed/removed all previous overlays. Now, select the 'Atlas search' tab in the 'Atlases' window again. This time, in the left panel listing the different atlases, tick only one of the atlases, such as the Harvard-Oxford Cortical Structural Atlas, and make sure all others are unticked.

Now you should see all of the areas covered by the Harvard-Oxford cortical atlas shown on the standard brain. You can click around with the cursor; the labels for the different areas can be seen in the bottom-right panel.

In addition to atlases covering various grey matter structures, there are also two white matter atlases: the JHU ICBM-DTI-81 white-matter labels atlas and the JHU white-matter tractography atlas. If you tick (select) these atlases as per the previous instructions (hint: use the 'Atlas search' tab), you will see a list of all included white matter tracts (pathways) as shown below:

"},{"location":"workshop2/visualizing-mri-data/#using-atlas-tools-to-create-a-region-of-interest-mask","title":"Using atlas tools to create a region-of-interest mask","text":"

You can use the atlas tools in FSLeyes not only to locate specific brain structures but also to create masks for ROI (region-of-interest) analysis. We will now create two ROI masks (one grey matter and one white matter) using FSL tools and the built-in atlases.

To start, please close 'FSLeyes' entirely, either by clicking 'x' in the right corner of the FSLeyes window or by selecting 'FSLeyes' \u2192 'Close'. Then close your current terminal and open a new terminal window.

Then do the following:

Here are the commands to do this:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/\nmkdir ROImasks\ncd ROImasks\nmodule load FSL/6.0.5.1-foss-2021a-fslpython \nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes & \n

Wait for FSLeyes to load, then:

You should now see the MFG overlay in the overlay list (as below) and have an MFG.nii.gz file in the ROImasks directory. You can check this by typing ls in the terminal.

We will now create a white matter mask. Here are the steps:

You should now see the FM overlay in the overlay list (as below) and also have a FM.nii.gz file in the ROImasks directory.

You now have two \u201cprobabilistic ROI masks\u201d. To use these masks in various analyses, you first need to binarize them.

Why binarize?

Why do you think we need to binarize the mask first? There are several reasons, but primarily it creates clear boundaries between regions which simplifies our statistical analysis and reduces computation.

To do this, first close FSLeyes. Make sure that you are in the ROImasks directory and check that you have the two masks: if you type pwd in the terminal, you should get the output /rds/projects/c/chechlmy-chbh-mricn/xxx/ROImasks (where xxx = your ADF username), and when you type ls, you should see FM.nii.gz and MFG.nii.gz.

To binarize the masks, you can use one of the FSL tools for image manipulation, fslmaths. The basic structure of an fslmaths command is:

fslmaths <input image> [modifiers/options] <output>

Type in your terminal:

fslmaths FM.nii.gz -bin FM_binary\nfslmaths MFG.nii.gz -bin MFG_binary\n

This simply takes your ROI mask, binarizes it and saves the binarized mask with the _binary name.
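What `-bin` does at each voxel can be sketched in plain Python (a conceptual illustration with made-up voxel values, not what fslmaths does internally):

```python
# Conceptual sketch of `fslmaths -bin`: each voxel with a positive
# value becomes 1, everything else becomes 0.

def binarize(voxels):
    return [1 if v > 0 else 0 for v in voxels]

# A few voxels from a probabilistic mask (% likelihood values):
prob_mask = [0, 12, 55, 98, 0, 3]
print(binarize(prob_mask))  # [0, 1, 1, 1, 0, 1]
```

Note that even very low-probability voxels (e.g. 3%) survive plain binarization, which is why thresholding is often applied first.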

You should now have 4 files in the ROImasks directory.

Now open FSLeyes and examine one of the binary masks you just created. First load a template (Click 'File' \u2192 'Add Standard' \u2192 'MNI152_T1_2mm') and add the binary mask (e.g., Click 'File' \u2192 'Add from file' \u2192 'FM_binary.nii.gz').

You can see the difference between the probabilistic and binarized ROI masks below:

Probabilistic ROI mask

Binary ROI mask

To use ROI masks in your analysis, you might also need to threshold them, i.e., to restrict the probability range of the volume. We previously did this manually for the amygdala (e.g., from 0-100% to 50-100%). The choice of threshold may depend on the type of analysis and the type of ROI mask you need. The instructions below explain how to threshold and binarize your ROI image in a single step using fslmaths.

Open your terminal and make sure that you are in the ROImasks directory (pwd). To both threshold and binarize the MFG mask, type:

fslmaths MFG.nii.gz -thr 25 -bin MFGthr_binary

(the -thr option zeroes everything in the image below a specified number, in this case 25, corresponding to 25% probability)
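The effect of chaining `-thr` and `-bin` can be sketched as follows (a conceptual illustration with made-up voxel values):

```python
# Conceptual sketch of `fslmaths -thr 25 -bin`: voxels below the
# threshold are zeroed first, then the result is binarized.

def thr_bin(voxels, thr):
    thresholded = [v if v >= thr else 0 for v in voxels]
    return [1 if v > 0 else 0 for v in thresholded]

prob_mask = [0, 12, 55, 98, 3, 25]
print(thr_bin(prob_mask, 25))       # [0, 0, 1, 1, 0, 1]
print(sum(thr_bin(prob_mask, 25)))  # 3 voxels survive, vs 5 with -bin alone
```
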

Now let's compare the thresholded and unthresholded MFG binarized masks.

You can see the difference in size between the two below:

Binarized MFG mask

Binarized and thresholded MFG mask

Exercise: Atlases and masks

Have a go at the following exercises:

If unsure, check your results with someone else or ask for help!

Workshop 2: Further Reading and Reference Material

FSLeyes is not the only MRI visualization tool available. Here are some others:

More details of what is available on BEAR at the CHBH can be found at the BEAR Technical Docs website.

"},{"location":"workshop2/workshop2-intro/","title":"Workshop 2 - MRI data formats, data visualization and atlas tools","text":"

Welcome to the second workshop of the MRICN course! Prior lectures introduced you to the basics of the physics and technology behind MRI data acquisition. In this workshop we will explore MRI image fundamentals, MRI data formats, data visualization and atlas tools.

Overview of Workshop 2

Topics for this workshop include:

You will need this information before you can analyse data, regardless of whether you are using structural or functional MRI data.

For the purposes of this module we will be using BlueBEAR. You should remember from Workshop 1 how to access the BlueBEAR Portal and use the BlueBEAR GUI.

You have already been given access to the RDS project, rds/projects/c/chechlmy-chbh-mricn. Inside the module\u2019s RDS project, you will find that you have a folder labelled xxx (xxx = University of Birmingham ADF username).

If you navigate to that folder (rds/projects/c/chechlmy-chbh-mricn/xxx), you will be able to perform the various file operations from there during workshops.

"},{"location":"workshop3/diffusion-intro/","title":"Diffusion MRI basics - visualization and preprocessing","text":"

In this workshop and the next, we will follow some basic steps in the diffusion MRI analysis pipeline below. The instructions here are specific to the tools available in FSL; however, other neuroimaging software packages can be used to perform similar analyses. You might also recall from lectures that models other than the diffusion tensor, and methods other than probabilistic tractography, are also often used.

FSL diffusion MRI analysis pipeline

First, if you have not already, log in into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do it from the previous workshops. Open a new terminal window and navigate to your MRICN project folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx [where XXX=your ADF username]

Please check your directory by typing pwd. This should return: /rds/projects/c/chechlmy-chbh-mricn/xxx.

Where has all my data gone?

Before this workshop, any old directories and files from previous workshops were removed (you will not need them for subsequent workshops, and storing unnecessary data would result in exceeding the allocated quota). Your xxx directory should therefore be empty.

Next you need to copy over the data for this workshop.

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/diffusionMRI/ . (make sure you do not omit the spaces or the final .)

This might take a while, but once it has completed, change into that downloaded directory:

cd diffusionMRI (inside your xxx directory you should now have the folder diffusionMRI)

Type ls. You should now see three subdirectories/folders (DTIfit, TBSS and tractography). Change into the DTIfit folder:

cd DTIfit

"},{"location":"workshop3/diffusion-intro/#viewing-diffusion-data-using-fsleyes","title":"Viewing diffusion data using FSLeyes","text":"

We will first look at what diffusion images look like and explore text files which contain information about gradient strength and gradient directions.

In your terminal type ls. This should return:

p01/\np02/\n

So, the folder DTIfit contains data from two participants contained within the p01 and p02 folders.

Inside each folder (p01 and p02) you will find a T1 scan, uncorrected diffusion data (blip_up.nii.gz, blip_down.nii.gz) acquired with two opposing PE-directions (AP/blip_up and PA/blip_down) and corresponding bvals (e.g., blip_up.bval) and bvecs (e.g., blip_up.bvec) files.

The number of entries in bvals and bvecs files equals the number of volumes in the diffusion data files.

Finally, inside p01 and p02 there is also subdirectory data with distortion-corrected diffusion images.

We will start with viewing the uncorrected data. Please navigate inside the p01 folder, open FSLeyes and then load one of the uncorrected diffusion images:

cd p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

The image you have loaded is 4D and consists of 64 volumes acquired with different diffusion encoding directions. Some of the volumes are non-diffusion images (b-value = 0), while most are diffusion weighted images. The first volume, which you can see after loading the file, is a non-diffusion weighted image as demonstrated below.

Viewing separate volumes

You can view the separate volumes by changing the number in the Volume box or playing movie mode. Note that the volume count starts from 0. You should also note that there are significant differences in the image intensity between different volumes.

Now go back to volume 0 and - if needed - stop movie mode. In the non-diffusion weighted image, the ventricles containing CSF are bright and the rest of the image is relatively dark. Now change the volume number to 2, which is a diffusion weighted image (with a b-value of approximately 1500).

The intensity of this volume is different. To see anything, please change max. intensity to 400. Now the ventricles are dark and you can see some contrast between different voxels.

Let's view the content of the bvals and bvecs files by using the cat command. In your terminal type:

cat blip_down.bval

The first number is 0. This confirms that the first volume (volume 0) is indeed a non-diffusion-weighted image, and that the third volume (volume 2) is a diffusion-weighted volume with b=1500. Based on the content of this bval file, you should be able to tell how many diffusion-weighted volumes were acquired and how many were acquired without any diffusion weighting (b0 volumes).
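This counting can be sketched in a few lines of Python. The bval row below is illustrative, not the contents of the real blip_down.bval file:

```python
# Conceptual sketch: counting b0 and diffusion-weighted volumes from a
# bvals file, which is one space-separated row of b-values.
bval_line = "0 1500 1500 0 1500 1500 1500 0 1500"  # illustrative values
bvals = [float(b) for b in bval_line.split()]

n_b0 = sum(1 for b in bvals if b == 0)   # non-diffusion-weighted volumes
n_dw = len(bvals) - n_b0                 # diffusion-weighted volumes
print(f"{len(bvals)} volumes: {n_b0} b0, {n_dw} diffusion-weighted")
```

The number of entries should always equal the number of volumes in the 4D diffusion image.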

Comparing diffusion-weighted volumes

Please compare this with the file you loaded into FSLeyes.

Now type:

cat blip_down.bvec

You should now see 3 separate rows of numbers representing the diffusion encoding directions (3x1 vector for each acquired volume; x,y,z directions) and that for volume 2 the diffusion encoding is represented by the vector [0.578, 0.671, 0.464].
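Each diffusion encoding direction is a unit vector. For the direction quoted above we can verify this with a quick sketch in Python:

```python
# Each column of the bvecs file (x, y, z across the three rows) should
# have length ~1. Check this for the volume-2 direction quoted above.
import math

v = (0.578, 0.671, 0.464)
norm = math.sqrt(sum(c * c for c in v))
print(round(norm, 3))  # close to 1.0
```
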

Distortion correction

As explained in the lectures, diffusion imaging suffers from various distortions (susceptibility-, eddy-current- and movement-induced distortions). These need to be corrected before further analysis. The most noticeable geometric distortions are susceptibility-induced distortions caused by field inhomogeneities, and so we will have a closer look at these.

All types of distortion need correction during the pre-processing steps of diffusion imaging analysis. FSL includes two tools for distortion correction, topup and eddy. Processing with these two tools is time- and compute-intensive, so we will not run the distortion correction steps in the workshop but will instead explore some of the principles behind them.

Instead, you are given distortion-corrected data with which to conduct the further analysis: diffusion tensor fitting and probabilistic tractography.

First close the current image in FSLeyes ('Overlay' \u2192 'Remove') and load both uncorrected images (blip_up.nii.gz, blip_down.nii.gz) acquired with two opposing PE-directions (PE=phase encoding).

Compare the first volumes in each file. To do that you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).

The circled area indicates the differences in susceptibility-induced distortions between the two images acquired with two opposing PE-directions.

Now change the max. intensity to 400 and compare the third volumes in each file. Again, the circled area indicates the differences in distortions between the two images acquired with the two opposing PE-directions.

Finally, we will look at distortion corrected data. First close the current image ('Overlay' \u2192 'Remove').

Now in FSLeyes load data.nii.gz (the distortion-corrected diffusion image located inside the data subdirectory) and have a look at one of the non-diffusion-weighted and one of the diffusion-weighted volumes.

Comparing corrected to uncorrected diffusion-weighted volumes

Can you tell the difference in the corrected compared to the uncorrected diffusion images?

Further examining the difference between uncorrected and corrected diffusion data

In your own time (outside of this workshop as part of independent study), load both the corrected and uncorrected data for p01 and compare using the 'Volume' box or 'Movie' mode. Also explore the data in p02 folder using the instructions above.

"},{"location":"workshop3/diffusion-intro/#creating-a-binary-mask-using-fsls-brain-extraction-tool","title":"Creating a binary mask using FSL's Brain Extraction Tool","text":"

In the next part of the workshop, we will look at FSL's Brain Extraction Tool (BET).

Brain extraction is a necessary pre-processing step which removes non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for registration of functional MRI or diffusion scans to MNI space. BET can also be used to create binary brain masks (e.g., the brain masks needed to run diffusion tensor fitting, DTIfit).

In this workshop we will look only at creating a binary brain mask as required for DTIfit. In subsequent workshops we will look at using BET to remove non-brain tissue from diffusion and T1 scans (\u201cskull-stripping\u201d) in preparation for registration.

First close FSLeyes and to make sure you do not have any processes running in the background, close your current terminal.

Open a new terminal window, navigate to the p02 subdirectory, and load FSL and FSLeyes again:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p02\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a \n

Now check the content of the p02 subdirectory by typing ls. You should get the response bvals, bvecs and data.nii.gz.

From the data.nii.gz (distortion corrected diffusion 4D image) we will extract a single volume without diffusion weighting (e.g. the first volume). You can extract it using one of FSL's utility commands, fslroi.

What is fslroi used for?

fslroi is used to extract a region of interest (ROI) or a subset of the data from a larger 3D or 4D image file.

In the terminal, type:

fslroi data.nii.gz nodif 0 1

where data.nii.gz is the input image, nodif is the output basename, 0 is the index of the first volume to extract (counting from 0) and 1 is the number of volumes to extract.

You should have a new file nodif.nii.gz (type ls to confirm) and can now create a binary brain mask using BET.
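What fslroi does along the time dimension can be sketched with the 4D image mocked as a plain Python list (the volume names are illustrative stand-ins, not real data):

```python
# Conceptual sketch of `fslroi data.nii.gz nodif 0 1` along the time
# dimension: keep <tsize> volumes starting at index <tmin>.

def roi_volumes(img4d, tmin, tsize):
    return img4d[tmin:tmin + tsize]

img4d = ["vol0_b0", "vol1_b1500", "vol2_b1500"]  # stand-ins for 3D volumes
print(roi_volumes(img4d, 0, 1))  # ['vol0_b0'], saved as nodif.nii.gz
```
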

To do this, first open BET in terminal. You can open the BET GUI directly in a terminal window by typing:

bet &

Or by running FSL in a terminal window and accessing BET from the FSL GUI. To do it this way, type:

fsl &

and then open the 'BET brain extraction tool' by clicking on it in the GUI.

In either case, once BET is opened, click on advanced options and make sure the first two outputs are selected ('brain extracted image' and 'binary brain mask') as below. Select as the 'Input' image the previously created nodif.nii.gz and change 'Fractional Intensity Threshold' to 0.4. Then click the 'Go' button.

Completing BET in the terminal

After running BET you may need to hit return to get a visible prompt back after seeing \u201cFinished\u201d in the terminal!

You will see 'Finished' in the terminal when you are ready to inspect the results. Close BET, open FSLeyes and load three files (nodif.nii.gz, nodif_brain.nii.gz and nodif_brain_mask). Compare the files. To do this you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).

The nodif_brain_mask is a single binarized image with ones inside the brain and zeroes outside the brain. You need this image both for DTIfit and tractography.
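The way such a binary mask is used can be sketched as a voxelwise multiplication (the intensity values here are illustrative):

```python
# Conceptual sketch: applying a binary brain mask is a voxelwise
# multiplication, so voxels outside the brain (mask == 0) are zeroed.

def apply_mask(image, mask):
    return [v * m for v, m in zip(image, mask)]

image = [410, 950, 870, 120]   # intensities inside and outside the brain
mask = [0, 1, 1, 0]            # 1 inside the brain, 0 outside
print(apply_mask(image, mask))  # [0, 950, 870, 0]
```
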

Comparing between BET and normal images

Can you tell the difference between nodif.nii.gz and nodif_brain.nii.gz? It might be easier to compare these images if you change the max intensity to 1500 and the nodif_brain colour to green.

"},{"location":"workshop3/diffusion-mri-analysis/","title":"Diffusion tensor fitting and Tract-Based Spatial Statistics","text":""},{"location":"workshop3/diffusion-mri-analysis/#diffusion-tensor-fitting-dtifit","title":"Diffusion tensor fitting (DTIfit)","text":"

The next thing we will do is to look at how to run and examine the results of diffusion tensor fitting.

First close FSLeyes, and to make sure you do not have any processes running in the background, close the current terminal.

Open a new terminal window, navigate to the p01 subdirectory, load FSL and FSLeyes again, and finally open FSL (with & to background it):

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsl & \n

To run the diffusion tensor fit, you need 4 files as specified below:

  1. Distortion corrected diffusion data: data.nii.gz
  2. Binary brain mask: nodif_brain_mask.nii.gz
  3. Gradient directions: bvecs (text file with gradient directions)
  4. b-values: bvals (text file with list of b-values)

All these files are included inside the data subdirectory p01/data. You learned earlier how to create a binary brain mask using BET; now we will run DTIfit.

In the FSL GUI, first click on 'FDT diffusion', and in the FDT window, select 'DTIFIT Reconstruct diffusion tensors'. Now choose as 'Input directory' the data subdirectory located inside p01 and click 'Go'.

You should see something happening in the terminal and once you see 'Done!' you are ready to view the results.

Click 'OK' when the message appears.

Different ways of running DTIfit

Instead of running DTIfit by choosing the 'Input' directory, you can also run it by specifying the input files manually. If you tried this now, the file fields would be auto-filled, but otherwise you would need to provide the inputs as below.

Running DTIfit in your own time

Please do NOT run it now, but instead try it in your own time with data in the p02 folder.

Finally, you can also run DTIfit directly from the terminal. To do this, you would need to type dtifit in the terminal and choose the dtifit compulsory arguments:

| Argument | Description |
| --- | --- |
| -k, --data | DTI data file |
| -o, --out | Output basename |
| -m, --mask | BET binary mask file |
| -r, --bvecs | b-vectors file |
| -b, --bvals | b-values file |

To run DTIfit from the terminal, you would need to navigate inside the subdirectory/folder with all the data and type the full dtifit command, specifying compulsory arguments as below:

dtifit --data=data --mask=nodif_brain_mask --bvecs=bvecs --bvals=bvals --out=dti

This command only works when run from inside the folder where all the data are located; otherwise you will need to specify the full path to the data. This form is useful if you want to write a script; we will look at this in later workshops.

Running DTIfit from the terminal in your own time

Again, please do NOT run it now but try it in your own time with data in the p02 folder.

The results of running DTIfit are several output files as specified below. We will look closer at the highlighted files in bold.

All of these files should be located in the data subdirectory, i.e. within /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01/data/.

| Output File | Description |
| --- | --- |
| dti_V1 (V2, V3) | 1st, 2nd, 3rd eigenvectors |
| dti_L1 (L2, L3) | 1st, 2nd, 3rd eigenvalues |
| dti_FA | Fractional Anisotropy map |
| dti_MD | Mean Diffusivity map |
| dti_MO | Mode of anisotropy (linear versus planar) |
| dti_S0 | Raw T2 signal with no diffusion weighting |

To view the results, first close the FSL GUI, open FSLeyes and load the FA map ('File' \u2192 'Add from file' \u2192 dti_FA).

Next add the principal eigenvector map (dti_V1) to your display ('File' \u2192 'Add from file' \u2192 dti_V1).

FSLeyes will open the image dti_V1 as a 3-direction vector image (RGB) with diffusion direction coded by colour. To display the standard PDD colour coded orientation map (as below), you need to modulate the colour intensity with the FA map so that the anisotropic voxels appear bright.

In the display panel, click on 'Settings' (the cog icon) and set 'Modulate by' to dti_FA.

Finally, compare the FA and MD maps (dti_FA and dti_MD). To do this, load the FA map and add the MD map. In contrast to the FA map, the MD map appears relatively uniform across grey and white matter, with the highest intensities in the CSF-filled ventricles, indicating higher diffusivity there; the same ventricles appear dark in the FA map.
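The tensor eigenvalues explain the contrast between the two maps. Here is a sketch using the standard FA and MD formulas; the eigenvalues below are illustrative values, not fitted data:

```python
# MD is the mean of the three tensor eigenvalues; FA measures how
# unequal they are (0 = isotropic, ~1 = highly anisotropic).
import math

def md(l1, l2, l3):
    return (l1 + l2 + l3) / 3

def fa(l1, l2, l3):
    m = md(l1, l2, l3)
    num = (l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den)

# Illustrative eigenvalues (units of 10^-3 mm^2/s):
wm = (1.6, 0.35, 0.3)   # white matter: anisotropic -> high FA
csf = (3.0, 2.9, 2.9)   # CSF: nearly isotropic -> FA ~ 0 but highest MD
print(round(fa(*wm), 2), round(md(*wm), 2))
print(round(fa(*csf), 2), round(md(*csf), 2))
```

This is why the ventricles are bright on the MD map but dark on the FA map.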

Differences between the FA and MD maps

Why are there such differences?

"},{"location":"workshop3/diffusion-mri-analysis/#tract-based-spatial-statistics-tbss","title":"Tract-Based Spatial Statistics (TBSS)Tract-Based Spatial Statistics analysis pipeline","text":"

In the next part of the workshop, we will look at running TBSS, Tract-Based Spatial Statistics.

TBSS is used for a whole brain \u201cvoxelwise\u201d cross-subject analysis of diffusion-derived measures, usually FA (fractional anisotropy).

We will look at an example TBSS analysis of a small dataset consisting of FA maps from ten younger (y1-y10) and five older (o1-o5) participants. Specifically, you will learn how to run the second stage of TBSS analysis, \u201cvoxelwise\u201d statistics, and how to display the results using FSLeyes. The statistical analysis that you will run aims to examine where on the tract skeleton the younger versus older participants (two groups) have significantly different FA values.

Before that, let's briefly recap TBSS as it was covered in the lecture.

The steps for Tract-Based Spatial Statistics are:

  1. Fitting the diffusion tensor (DTIfit)
  2. Alignment of all study participants\u2019 FA maps to standard space using non-linear registration
  3. Merging all participants\u2019 nonlinearly aligned FA maps into a single 4D image file and creating the mean FA image
  4. FA \u201cskeletonization\u201d (the mean FA skeleton representing the centres of major tracts specific to all participants is created)
  5. Each participant\u2019s aligned FA map is then projected back onto the skeleton prior to statistical analysis
  6. Hypothesis testing (voxelwise statistics)

To save time, some of the pre-processing stages including generating FA maps (tensor fitting), preparing data for analysis, registration of FA maps and skeletonization have been run for you and all outputs are included in the data folder you have copied at the start of this workshop.

You will only run the TBSS statistical analysis to explore group differences in FA values based upon age (younger versus older participants).

First close FSLeyes (if you still have it open) and make sure that you do not have any processes running in the background by closing your current terminal.

Then open a new terminal window, navigate to the subdirectory where pre-processed data are located and load both FSL and FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a \n

Once you have loaded all the required software, we will start with exploring the pre-processed data. If you correctly followed the previous steps, you should be inside the subdirectory TBSS_analysis_p2. Confirm that, and then check the content of that subdirectory by typing:

pwd (answer /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/)

ls (you should see 3 data folders listed: FA, origdata, stats)

We first need to check that all the pre-processing steps have been run correctly and that we have all the required files.

Navigate inside the stats folder and check the files inside by typing in your terminal:

cd stats\nls\n

You should find inside the files listed below.

Exploring the data

If this is the case, open FSLeyes and explore these files one by one to make sure you understand what each represents. You might need to change the colour to visualise some image files.

Remember to ask for help!

If you are unsure about something, or need help, please ask!

Once you have finished taking a look, close FSLeyes.

Before using the General Linear Model (GLM) GUI to set up the statistical model, you need to determine the order in which participants' files were entered into the single 4D skeletonised file (i.e., the data order in the all_FA_skeletonised file). The easiest way to determine the alphabetical order of participants in the final 4D file (all_FA_skeletonised) is to check the order in which FSL lists the pre-processed FA maps inside the FA folder. You can do this in the terminal with the commands below.

cd .. \ncd FA \nimglob *_FA.*\n

You should see data from the 5 older participants (o1-o5) followed by data from the 10 younger participants (y1-y10).

Next navigate back to the stats folder and open FSL:

cd ..\ncd stats\nfsl &\n

Click on 'Miscellaneous tools' and select 'GLM Setup' to open the GLM GUI.

In the workshop we will set up a simple group analysis (a two sample unpaired t-test).

How to set up more complex models

For information on how to set up more complex models using the GUI, see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/glm

In the 'GLM Setup' window, change 'Timeseries design' to 'Higher-level/non-timeseries design' and '# inputs' to 15.

Then click on 'Wizard' and select 'two groups, unpaired' and set 'Number of subjects in first group' to 5. Then click 'Process'.

In the 'EVs' tab, name 'EV1' and 'EV2' as per your groups (old, young).

In the contrast window, set the number of contrasts to 2 and rename them according to the image below:

(C1: old > young, [1 -1]) (C2: young > old, [-1 1])

Click 'View Design', close the image, and then go back to the 'GLM Setup' window and save your design with the filename design. Click 'Exit' and close FSL.

To run the TBSS statistical analysis, we use FSL's randomise tool.

FSL's randomise

Randomise is FSL's tool for nonparametric permutation inference on various types of neuroimaging data. For more information, see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/randomise

The basic command line to use this tool is:

randomise -i <input> -o <output> -d <design.mat> -t <design.con> [options]

You can explore the options and setup by typing randomise in your terminal.

The basic command line to use randomise for TBSS is below:

randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 500 --T2

Check that you are inside the stats folder, then run the command above in the terminal to start your TBSS group analysis.

The elements of this command are explained below:

| Argument | Description |
| --- | --- |
| -i | input image |
| -o | output image basename |
| -m | mask |
| -d | design matrix |
| -t | design contrast |
| -n | number of permutations |
| --T2 | TFCE (threshold-free cluster enhancement) |

Why so few permutations?

To save time we only run 500 permutations; this number varies depending on the type of analysis, but it is usually between 5,000 and 10,000 or higher.

The output from randomise will include two raw (unthresholded) tstat images, tbss_tstat1 and tbss_tstat2.

The TFCE p-value images (fully corrected for multiple comparisons across space) will be tbss_tfce_corrp_tstat1 and tbss_tfce_corrp_tstat2.

Based on the setup of your design, contrast 1 gives the old > young test and contrast 2 gives the young > old test; the contrast likely to give significant results is the second, i.e., we expect higher FA in younger participants (due to the age-related decline in FA).

To check this, use FSLeyes to view the results of your TBSS analysis. Open FSLeyes, load mean_FA plus the mean_FA_skeleton template, and add the TFCE-corrected tstat2 image for display:

  1. 'File' -> 'Add from file' -> mean_FA.nii.gz
  2. 'File' -> 'Add from file' -> mean_FA_skeleton.nii.gz (change the colour map from greyscale to green)
  3. 'File' -> 'Add from file' -> tbss_tfce_corrp_tstat2.nii.gz (change the colour map from greyscale to red-yellow, and set Max to 1 and Min to 0.95 or 0.99)

Please note that TFCE-corrected images actually store 1-p for convenience of display, so thresholding at 0.95 gives significant clusters at corrected p < 0.05, and 0.99 gives significant clusters at corrected p < 0.01.
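The same 1-p logic can be worked through on the command line; a minimal sketch (pure shell/awk, no FSL needed) showing how a display threshold maps onto a corrected p-value:

```shell
# Sketch: the corrp images store 1 - p, so a display threshold
# maps onto a corrected p-value as p = 1 - threshold.
for thr in 0.95 0.99; do
    p=$(awk -v t="$thr" 'BEGIN {printf "%.2f", 1 - t}')
    echo "Min = ${thr}  ->  p(corrected) < ${p}"
done
```

The same relationship is what FSLeyes applies when you set the Min display threshold.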

You should see the same results as below:

Interpreting the results

Are the results as expected? Why/why not?

Reviewing the tstat1 image

Next review the image tbss_tfce_corrp_tstat1.nii.gz.

Further information on TBSS

More information on TBSS can be found in the 'TBSS' section of the FSL Wiki: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/tbss

"},{"location":"workshop3/workshop3-intro/","title":"Workshop 3 - Basic diffusion MRI analysis","text":"

Welcome to the third workshop of the MRICN course! Prior lectures in the module introduced you to the basics of diffusion MRI and its applications, including data acquisition, the theory behind diffusion tensor imaging, and the use of tractography to study structural connectivity. The aim of the next two workshops is to introduce you to some of the core FSL tools used for diffusion MRI analysis.

Specifically, we will explore different elements of FMRIB's Diffusion Toolbox (FDT) to walk you through the basic steps of diffusion MRI analysis. We will also cover the use of the Brain Extraction Tool (BET).

By the end of the two workshops, you should understand the principles of correcting for distortions in diffusion MRI data, how to run and explore the results of a diffusion tensor fit, and how to run a whole-brain group analysis and probabilistic tractography.

Overview of Workshop 3

Topics for this workshop include:

We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into detail as to why specific sequence parameters and default settings have been chosen. Some values should be clear to you from the lectures or the assigned readings on Canvas; please check there, and if you are still unclear, feel free to ask.

Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.

"},{"location":"workshop4/probabilistic-tractography/","title":"Probabilistic Tractography","text":"

In the first part of the workshop, we will look again at BET, FSL's Brain Extraction Tool.

Brain extraction is a necessary pre-processing step that removes non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for the registration of functional MRI or diffusion scans to MNI space. BET can also be used to create binary brain masks (e.g., the brain masks needed to run diffusion tensor fitting, DTIfit).

"},{"location":"workshop4/probabilistic-tractography/#skull-stripping-our-data-using-bet","title":"Skull-stripping our data using BET","text":"

In this workshop we will first look at a very simple example of removing non-brain tissues from diffusion and T1 scans (\u201cskull-stripping\u201d) in preparation for the registration of diffusion data to MNI space.

Log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours).

In your session, open a new terminal window and navigate to the diffusionMRI data in your MRICN folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI [where XXX=your ADF username]

In case you missed the previous workshop

You were instructed to copy the diffusionMRI data in the previous workshop. If you have not completed last week's workshop, either find the instructions for copying the data in the 'Workshop 3: Basic diffusion MRI analysis' materials or work with someone who has completed the previous workshop.

Then load FSL and FSLeyes:

module load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\n

We will now look at how to \u201cskull-strip\u201d the T1 image (remove the skull and non-brain areas); this step is needed for the registration step in both fMRI and diffusion MRI analysis pipelines.

We will do this using BET on the command line. The basic command-line version of BET is:

bet <input> <output> [options]

In this workshop we will look at a simple brain extraction i.e., performed without changing any default options.

To do this, navigate inside the p01 folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01

Then in your terminal type:

bet T1.nii.gz T1_brain

Once BET has completed (should only take a few seconds at most), open FSLeyes (with & to background it). Then in FSLeyes:

This will likely show that, in this case, the default brain extraction was good. The reason for such a good extraction with default options is the small FOV and data from a young, healthy adult. This is not always the case, e.g., when we have a large FOV or data from older participants.
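Alongside the visual check in FSLeyes, you can sanity-check the extraction numerically; a sketch, assuming the FSL module is loaded and you are in the p01 folder (the guard simply skips the check where FSL is unavailable):

```shell
# Sketch: a numeric complement to the visual check.
# fslstats -V prints the voxel count and volume in mm^3.
if command -v fslstats >/dev/null 2>&1; then
    echo "whole head: $(fslstats T1 -V)"
    echo "extracted:  $(fslstats T1_brain -V)"
    checked=yes
else
    echo "fslstats not found: load the FSL module first"
    checked=no
fi
```

A typical adult brain is roughly 1,200,000-1,600,000 mm^3, so an extracted volume far outside that range suggests the extraction needs revisiting.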

More brain extraction to come? You BET!

In the next workshop (Workshop 5) we will explore different BET options and how to troubleshoot brain extraction.

"},{"location":"workshop4/probabilistic-tractography/#preparing-our-data-with-bedpostx","title":"Preparing our data with BEDPOSTX","text":"

BEDPOSTX is an FSL tool used for a step in the diffusion MRI analysis pipeline, which prepares the data for probabilistic tractography. BEDPOSTX (Bayesian Estimation of Diffusion Parameters Obtained using Sampling Techniques, X = modelling Crossing Fibres) estimates fibre orientation in each voxel within the brain. BEDPOSTX employs Markov Chain Monte Carlo (MCMC) sampling to reconstruct distributions of diffusion parameters at each voxel.

We will not run it during this workshop as it takes a long time. The data has been processed for you, and you copied it at the start of the previous workshop.

To run it, you would need to open the FSL GUI, click on 'FDT diffusion' and select 'BEDPOSTX' from the drop-down menu. The input directory must contain the distortion-corrected diffusion file (data.nii.gz), a binary brain mask (nodif_brain_mask.nii.gz) and two text files with the b-values (bvals) and gradient orientations (bvecs).
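Because BEDPOSTX is a long job, it is worth confirming that the input directory is complete before submitting it; a minimal sketch (the directory name p01 is hypothetical):

```shell
# Sketch: check that a BEDPOSTX input directory contains the four
# required files before submitting the (long) job.
dir=p01   # hypothetical input directory name
missing=""
for f in data.nii.gz nodif_brain_mask.nii.gz bvals bvecs; do
    [ -e "$dir/$f" ] || missing="$missing $f"
done
if [ -n "$missing" ]; then
    echo "missing from $dir:$missing"
else
    echo "$dir is ready for BEDPOSTX"
fi
```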

Because the data used in this workshop were acquired with a single b-value, we need to specify the single-shell model.

After the workshop, in your own time, you could run it using the provided data (see Tractography Exercises section at the end of workshop notes).

BEDPOSTX outputs a directory at the same level as the input directory called [inputdir].bedpostX (e.g. data.bedpostX). It contains various files (including mean fibre orientation and diffusion parameter distributions) needed to run probabilistic tractography.

As we will look at tractography in different spaces, we also need the output from registration. The concept of different image spaces has been introduced in Workshop 2. The registration step can be run from the FDT diffusion toolbox after BEDPOSTX has been run. Typically, registration will be run between three spaces:

  1. Diffusion space (nodif_brain image)
  2. Structural space (T1 image for the same participant)
  3. Standard space (the MNI152 template)

This step has again been run for you. To run it, you would need to open the FSL GUI, click on 'FDT diffusion' and select 'Registration' from the drop-down menu. The main structural image would be your \u201cskull-stripped\u201d T1 (T1_brain) and the non-betted structural image would be T1. You also need to select data.bedpostX as the 'BEDPOSTX directory'.

After the workshop, you can try running it in your own time (see Tractography Exercises section at the end of workshop notes).

Registration output directory

The outputs from registration needed for probabilistic tractography are stored in the xfms subdirectory.

"},{"location":"workshop4/probabilistic-tractography/#probabilistic-tractography-using-probtrackx","title":"Probabilistic tractography using PROBTRACKX","text":"

PROBTRACKX (probabilistic tracking with crossing fibres) is an FSL tool used for probabilistic tractography. To run it, you need to open the FSL GUI, click on 'FDT diffusion' and select 'PROBTRACKX' from the drop-down menu (it should default to it).

PROBTRACKX can be used to run tractography in either diffusion or non-diffusion space (e.g., standard or structural). If running it in non-diffusion space you will need to provide a reference image. You can also run tractography from a single seed (voxel), a single mask (ROI), or multiple masks, which can be specified in either diffusion or non-diffusion space.

We will look at some examples of different ways of running tractography.

First close any processes still running and open a new terminal. Then navigate to the directory where all the files to run tractography have been prepared for you:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01

As you may recall, there are different versions of FSL available on BlueBEAR. These correspond to different FSL software releases and have been compiled in different ways. The different versions of FSL are suitable for different purposes, i.e., for different MRI data analyses.

To run BEDPOSTX and PROBTRACKX, you need to use a specific version of FSL (FSL 6.0.7.6), which you can load by typing in your terminal:

module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n

Once you have loaded FSL using these commands, open the FDT toolbox either from the FSL GUI or by typing directly in your terminal:

Fdt &

We will start with tractography from a single voxel in diffusion space. Specifically, we will run it from a voxel with coordinates [47, 37, 29] located within the forceps major of the corpus callosum, a white matter fibre bundle which connects the occipital lobes.

Running tractography on another voxel

Later, you can use the FA map (dti_FA inside the p01/data folder) loaded to FSLeyes to check the location of the selected voxel, choose another voxel within a different white matter pathway, and run the tractography again.

You should have the FDT Toolbox window open as below:

From here do the following:

  1. Select data.bedpostX as the 'BEDPOSTX directory'
  2. Enter voxel coordinates [47, 37, 29] (X, Y, Z)
  3. Enter the output file name 'corpus' - this will be the name of the directory that contains the output files
  4. Press 'Go' (you will see progress in the terminal; once the Done!/OK window appears, you are ready to view the results - click 'OK' before proceeding)

After the tractography has finished, check the contents of the corpus subdirectory, which holds the tractography output files. It should contain:

We will explore the results later. First, you will learn how to run tractography in the standard (MNI) space.

Close the FDT toolbox and then open it again from the terminal to make sure you don\u2019t have anything running in the background.

We will now run tractography using a combination of masks (ROIs) in standard space to reconstruct tracts connecting the right motor thalamus (the portion of the thalamus involved in motor function) with the right primary motor cortex. The ROI masks have been prepared for you and placed inside the masks subdirectory ~/diffusionMRI/tractography/masks. The ROIs were created using FSL's atlas tools (you\u2019ve learnt how to do this in a previous workshop) and are in standard/MNI space; thus we will run tractography in MNI (standard) space rather than diffusion space.

This is the most typical design of tractography studies.

In the FDT Toolbox window - before you select your input in the 'Data' tab - go to the 'Options' tab (as below) and reduce the number of samples to 500. You would normally run 5000 (the default), but reducing this number speeds up processing and is useful for exploratory analyses.

Now going back to the 'Data' tab (as below) do the following:

  1. Select data.bedpostX as 'BEDPOSTX directory'
  2. In 'Seed Space', change 'Single voxel' to 'Single mask'
  3. As 'Seed Image' choose Thalamus_motor_RH.nii.gz from the masks subdirectory
  4. Tick both 'Seed space is not diffusion' and 'nonlinear'
  5. You have to use the warp fields between standard space and diffusion space created during registration, which are inside the data.bedpostX/xfms directory. Select standard2diff_warp as 'Seed to diff transform' and diff2standard_warp as 'diff to Seed transform'.
  6. As the waypoint mask, choose cortex_M1_right.nii.gz from the masks subdirectory to retain only those tracts that run from the motor thalamus to M1. Use this mask also as a termination mask to avoid tracking into other parts of the brain.
  7. Enter output file name MotorThalamusM1
  8. Press Go!

Specifying masks

Without the waypoint and termination masks, you would also get other tracts passing through the motor thalamus, including random offshoots with low probability (noise). This is expected for probabilistic tractography: random sampling without constraining the direction can produce spurious offshoots into nearby tracts and low-probability noise.

It will take significantly longer this time to run the tractography in standard space. However, once it has finished, you will see the window 'Done!/OK'. Before proceeding, click 'OK'.

A new subdirectory will be created with the chosen output name MotorThalamusM1. Check the contents of this subdirectory. It contains slightly different files compared to the previous tractography output. The main output, the streamline density map, is called fdt_paths.nii.gz. There is also a file called waytotal that contains the total number of valid streamlines.

We will now explore the results from both tractography runs. First close FDT and your terminal as we need FSLeyes, which cannot be loaded together with the current version of FSL.

Next, navigate to the directory where all the tractography results have been generated and open FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

We will start with our results from tractography in seed space. In FSLeyes, do the following:

  1. Load the FA map (~/diffusionMRI/tractography/p01/data/dti_FA.nii.gz) and tractography output file (~/corpus/corpus_47_37_29.nii.gz)
  2. Change the colour of tractography output to\u00a0'Red-Yellow'
  3. Change the 'Min' display threshold to 50 to remove voxels with a low probability of being in the tract. The displayed values denote the number of streamlines running through a voxel. With default settings, 5000 samples are generated, so 50 represents a 1% probability; the voxels hidden when 'Min' is set to 50 are those with less than a 1% probability of being part of the tract.
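The 1% figure is simply the Min threshold divided by the number of samples generated per seed voxel; a minimal sketch of the arithmetic (5000 is the default sample count mentioned above):

```shell
# Sketch: where the 1% figure comes from - the Min display threshold
# divided by the number of samples generated per seed voxel.
samples=5000
min_threshold=50
pct=$(awk -v s="$samples" -v t="$min_threshold" 'BEGIN {printf "%.0f", t / s * 100}')
echo "Min = ${min_threshold} hides voxels crossed by fewer than ${pct}% of samples"
```

If you change the number of samples (as we did for the MNI-space run), scale the Min threshold accordingly.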

Your window should look like this:

Once you have finished reviewing the results of tractography in seed space, close the results ('Overlay \u2192 Remove all').

We will now explore the results from our tractography run in MNI space, but to do so we need a standard template. Assuming you have closed all previous images:

  1. Load in the MNI template (~/diffusionMRI/tractography/MNI152T1_brain.nii.gz) and the tractography output file (MotorThalamusM1/fdt_paths.nii.gz)
  2. Change the colour of tractography output to\u00a0'Red-Yellow'
  3. You might want to add/load the ROI masks ('motor thalamus' and 'M1')
  4. Adjust the Min and Max display thresholds to explore the reconstructed tract. Change the Min display threshold to 50 to remove voxels with a low probability of being in the tract. There is no gold standard for thresholding tractography outputs; it depends on the study design, parameter setup and further analysis.

Tractography exercises

In your own time, you should try the exercises below to consolidate your tractography skills. If you have any problems completing them, or any further questions, you can ask for help during one of the upcoming workshops.

Help and further information

As always, more information on diffusion analyses in FSL can be found in the 'diffusion' section of the FSL Wiki and in this practical course run by FMRIB (the creators of FSL).

"},{"location":"workshop4/workshop4-intro/","title":"Workshop 4 - Advanced diffusion MRI analysis","text":"

Welcome to the fourth workshop of the MRICN course!

In the previous workshop we started exploring different elements of the FMRIB's Diffusion Toolbox (FDT). This week we will continue with the different applications of the FDT toolbox and the use of Brain Extraction Tool (BET).

Overview of Workshop 4

Topics for this workshop include:

We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into detail as to why specific sequence parameters and default settings have been chosen. Some values should be clear to you from the lectures or the assigned readings on Canvas; please check there, and if you are still unclear, feel free to ask.

Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.

In this workshop we will follow basic steps in the diffusion MRI analysis pipeline, specifically with running tractography. The instructions here are specific to tools available in FSL. Other neuroimaging software packages can be used to perform similar analyses.

Example of Diffusion MRI analysis pipeline

"},{"location":"workshop5/first-level-analysis/","title":"Running the first-level fMRI analysis","text":"

We are now ready to proceed with running our fMRI analysis. We will start with the first dataset (first participant /p01) and our first step will be to skull-strip the data using BET. By now you should be able not only to run BET but also to troubleshoot a poor BET, i.e., use different methods to run BET.

The p01 T1 scan was acquired with a large field-of-view (FOV) (you can check this using FSLeyes; it is generally good practice to explore the data before starting any analysis, especially if you were not the person who acquired it). We will therefore apply an appropriate BET method, as per the example we explored earlier. This is likely to be the right method for all datasets in the /recon folder, but please check.

Open a terminal and use the commands below to skull-strip the T1:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01\nmodule load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a \nimmv T1 T1neck \nrobustfov -i T1neck -r T1 \nbet T1.nii.gz T1_brain -R \n

Remember that:

It is very important that after running BET you examine, using FSLeyes, the quality of the brain extraction performed on each and every T1.

A poor brain extraction will affect the registration of the functional data into MNI space, giving a poorer-quality registered image. This in turn means that the higher-level analyses (where functional data are combined in MNI space) will be less than optimal, and it will be harder to detect small BOLD signal changes in the group.

Re-doing inaccurate BETs

Whenever the BET process is unsatisfactory, you will need to go back and redo the individual BET extraction by hand, by tweaking the \u201cFractional intensity threshold\u201d and/or the advanced option parameters for the centre coordinates and/or the \u201cThreshold gradient\u201d.

You should still be inside the /p01 folder; please rename the fMRI scan by typing:

immv fs005a001 fmri1

"},{"location":"workshop5/first-level-analysis/#setting-up-and-running-the-first-level-fmri-analysis-using-feat","title":"Setting up and running the first-level fMRI analysis using FEAT","text":"

We are now ready to proceed with our fMRI data analysis. To do that we will need a different version of FSL installed on BlueBEAR. Close your terminal and again navigate inside the p01 folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01

Now load FSL using the commands below:

module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n

Finally, open FEAT (from the FSL GUI or by typing Feat & in a terminal window).

On the menus, make sure 'First-level analysis' and 'Full analysis' are selected. Now work through the tabs, setting and changing the values for each parameter as described below. Try to understand how these settings relate to the description of the experiment as provided at the start.

Misc Tab

Accept all the defaults.

Data Tab

Input file

The input file is the 4D fMRI data (the functional data for participant 1 should be called something like fmri1.nii.gz if you have renamed it as above). Select this using the 'Select 4D data' button. Note that when you have selected the input, 'Total volumes' should jump from 0.

Total volumes troubleshooting

If 'Total volumes' is still set to 0, or jumps to 1, something has gone wrong. Stop and fix the error at this point. DO NOT CARRY ON. If 'Total volumes' is still 0, you have not yet selected any data; try again. If it is set to 1, you have most likely selected the T1 image rather than the fMRI data; try again, selecting the correct file.

Check carefully at this point that the total number of volumes is correct (93 volumes were collected for participants 1-2, 94 volumes for participants 3-15).
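You can also confirm the volume count outside the GUI with FSL's fslnvols utility, which prints the number of time points in a 4D image; a sketch, assuming the FSL module is loaded and you are in the participant's folder (the guard skips the check where FSL is unavailable):

```shell
# Sketch: check the number of volumes in the 4D fMRI file on the
# command line (expect 93 for participants 1-2, 94 for 3-15).
if command -v fslnvols >/dev/null 2>&1; then
    fslnvols fmri1.nii.gz
    vols_checked=yes
else
    echo "fslnvols not found: load the FSL module first"
    vols_checked=no
fi
```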

Output directory

Enter a directory name in the output directory field. This needs to be something systematic that you can use for all the participants and that is comprehensible - it should still make sense when you look at it a year or more from now. It is important to use full path names: do not use shortened or partial path names, and do not put any spaces in the filenames you use. If you do, some programs may crash with errors that may not seem to make much sense.

For example, use an output directory name like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1

where:

Note that when FEAT is eventually run, it will automatically create a new directory called /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat for you, containing the output of this particular analysis. If the directory structure does not exist, FEAT will try to make it; you do not need to make it yourself in advance.

Repetition Time (TR)

For this experiment, make sure that the TR is set to 2.0s. If FEAT can read the TR from the header information, it will set it automatically; if not, you will need to set it manually.

High pass filter cutoff

Set 'High pass filter cutoff' to 60 sec, i.e. 50% greater than the OFF+ON cycle length (the cycle is 20s + 20s = 40s, and 1.5 x 40s = 60s).

Pre-stats

Set the following:

Stats

Set the following:

Select the 'Full model setup' option (depicted on the blue arrow above); and then on the 'EVs' tab:

On the Contrasts Tab:

Check the plot of the design that will be generated and then click on the image to dismiss it.

Post-stats

Change the 'Thresholding' pull down option to be of type 'Uncorrected' and leave the P threshold value at p<0.05.

Thresholding and processing time

Note this is not the thresholding that you will want at the final (third) stage of processing (where you will probably want 'Cluster' thresholding), but for the convenience of the workshop it will speed up the processing per run at this stage.

Registration

Set the following:

The model should now be set up with all the correct details and be ready to be analyzed.

Hit the GO button!

Running FSL on BlueBEAR

FSL jobs are now submitted in an automated way to a back-end high performance computing cluster on BlueBEAR for execution. Processing time for this analysis will vary but will probably be about 5 mins per run.

"},{"location":"workshop5/first-level-analysis/#monitoring-and-viewing-the-results","title":"Monitoring and viewing the results","text":"

FEAT has a built-in progress watcher, the 'FEAT Report', which you can open in a web browser.

To do that, navigate inside the p01_s1.feat folder from the BlueBEAR Portal as below, and from there select the report.html file and open it either in a new tab or a new window.

Watch the webpage for progress. Refresh the page to update and click the links (the tabs near the top of the page) to see the results when available (the 'STILL RUNNING' message will disappear when the analysis has finished).

Example FEAT Reports for processes that are still running, and which have completed.

After it has completed, first look at the webpage, click on the various links and try to understand what each part means.

Now let's use FSLeyes to look at the output in more detail. To do that you will need to open a separate terminal and load FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

Open the p01_s1.feat folder and select the filtered_func_data (this is the fMRI data after it has been preprocessed by motion correction etc).

Put FSLeyes into movie mode and see if you can identify areas that change in activity.

Now, add the thresh_zstat1 image and try to identify the time course of the stimulation in some of the most highly activated voxels. You should remember how to complete the above tasks from previous workshops. You can also use the \u201ccamera\u201d icon to take a snapshot of the results.

Let's have a look and see the effects that other parameters have on the data. To do this, do the following steps:

Note that each time you rerun FEAT, it creates a new folder with a '+' sign in the name, so you will have folders rather messily named p01_s1.feat, p01_s1+.feat, p01_s1++.feat, and so on. This is rather wasteful of your precious quota space, so you should delete unnecessary ones after looking at them.

For example, if you wanted to remove all files and directories that end with '+' for participant 1:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/ \nrm -rf *+\n
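Since rm -rf is unforgiving, it is worth previewing what the wildcard matches before deleting anything; a minimal sketch:

```shell
# Sketch: list what the wildcard would match before running rm -rf.
matches=$(ls -d *+ 2>/dev/null)
if [ -n "$matches" ]; then
    echo "rm -rf *+ would delete: $matches"
else
    echo "nothing here matches *+"
fi
```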

You might also want to rename the previous output directory to something more meaningful, making it obvious what parameter has changed, e.g. p01_s1_motion_off.feat.

For participant 2, you will need to repeat the main steps above:

To rerun a FEAT analysis, rather than re-entering all the model details:

Now change the input 4D file, the output directory name, and the registration details (the BET'ed reoriented T1 for participant 2), and hit 'Go'.

Design files

You can also save the design files (design.fsf) using the 'Save' button on the FEAT GUI. You can then edit this in a text editor, which is useful when running large group studies. You can also run FEAT from the command line, by giving it a design file to use e.g., feat my_saved_design.fsf. We will take a look at modifying design.fsf files directly in the Functional Connectivity workshop.

Running a first-level analysis on the remaining participants

In your own time, you should analyse the remaining participants as above.

Remember:

There are therefore 29 separate analyses that need to be done.

Analyze each of these 29 fMRI runs independently and put the output of each one into a separate, clearly labelled directory as suggested above.

Try and get all these done before the next fMRI workshop in week 10 on higher level fMRI analysis as you will need this processed data for that workshop. You have two weeks to complete this task.

Scripting your analysis

It will seem laborious to re-write and re-run 29 separate FEAT analyses; a much quicker way is to script the analyses using bash. If you would like, try scripting your analyses! We will learn more about bash scripting in the next workshop.

As always, help and further information is also available on the relevant section of the FSL Wiki.

"},{"location":"workshop5/preprocessing/","title":"Troubleshooting brain extraction with BET","text":"

In the first part of the workshop, you will learn how to properly skull-strip T1 scans using FSL's Brain Extraction Tool (BET), including troubleshooting techniques for problematic cases, as well as how to organise neuroimaging data files through proper naming conventions.

Background and set-up

The data we will be using were collected from 15 participants scanned with the same experimental protocol on the Philips 3T scanner (our old scanner).

The stimulus protocol was a visual checkerboard reversing at 2Hz (i.e., 500ms between each reversal) and was presented alternately (20s active \u201con\u201d checkerboard, 20s grey screen \u201coff\u201d), starting and finishing with \u201coff\u201d and including 4 blocks of \u201con\u201d (i.e., off, on, off, on, off, on, off, on, off) = 180 sec.

A few extra seconds of \u201coff\u201d (6-8s) were later added at the very end of the run to match the number of volumes acquired by the scan protocol.

Normally in any experiment it is very important to keep all the protocol parameters fixed when acquiring the neuroimaging data. In this case, however, we can see different parameters being used, reflecting slightly different \u201cbest choices\u201d made by different operators over the yearly demonstration sessions:

Sequence order

Note that sometimes the T1 was the first scan acquired after the planning scan, sometimes it was the very last scan acquired.

Now that we know what the data are, let's start our analyses. Log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do this by now from previous workshops.

Open a new terminal window and navigate to your MRICN project folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx [where XXX=your ADF username]

Please check that you are in the correct directory by typing pwd. This should return:

/rds/projects/c/chechlmy-chbh-mricn/xxx (where XXX = your login username)

You now need to create a copy of the reconstructed fMRI data to be analysed during the workshop but in your own MRICN folder. To do this, in your terminal type:

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/recon/ .

Be patient as this might take a few minutes to copy over. In the meantime, we will revisit BET and learn how to troubleshoot the often problematic process of 'skull-stripping'.

"},{"location":"workshop5/preprocessing/#skull-stripping-t1-scans-using-bet-on-the-command-line","title":"Skull-stripping T1 scans using BET on the command-line","text":"

We will now look at how to "skull-strip" the T1 image (remove the skull and non-brain areas), as this step is needed as part of the registration step in the fMRI analysis pipeline. We will do this using FSL's BET on the command line. As you should know from previous workshops, the basic command-line version of BET is:

bet <input> <output> [options]

where:

We will firstly explore the different options and how to troubleshoot brain extraction.

If the fMRI data has finished copying over, you can use the same terminal which you have previously opened. If not, keep that terminal open and instead open a new terminal, navigating inside your MRICN project folder (i.e., /rds/projects/c/chechlmy-chbh-mricn/xxx)

Next you need to copy the data for this part of the workshop. As there is only one file, it will not take long.

Type:

cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/BET/ .

And then load FSL and FSLeyes by typing:

module load FSL/6.0.5.1-foss-2021a
module load FSLeyes/1.3.3-foss-2021a

After this, navigate inside the copied BET folder and type:

bet T1.nii.gz T1_brain1

Open FSLeyes (fsleyes &), and when this is open, load up the T1 image, and add the T1_brain1 image. Change the colour for the T1_brain1 to Red.

This will likely show that the default brain extraction was not very good and included non-brain matter. It may also have cut into the brain and thus some part of the cortex is missing. The reason behind the poor brain extraction is a large field-of-view (FOV) (resulting in the head plus a large amount of neck present).

There are different ways to fix a poor BET output, i.e., problematic "skull-stripping".

First of all, you can use the -R option.

This option is used to run BET in an iterative fashion which allows it to better determine the centre of the brain itself.

In your terminal type:

bet T1.nii.gz T1_brain2 -R

Running BET recursively from the BET GUI

Instead of using the bet -R command from the terminal, you can also use the BET GUI. To run it this way, you would need to select the processing option "Robust brain centre estimation (iterates bet2 several times)" from the pull-down menu.

You will find that running BET with the -R option takes longer than before because of the extra iterations. Reload the newly extracted brain (T1_brain2) into FSLeyes and check that the extraction now looks improved.

In the case of T1 images with a large FOV, you can first crop the image (to remove a portion of the neck) and then run BET again. To do that, you need to use the robustfov command before applying BET. But first, rename the original image.

Type in your terminal:

immv T1 T1neck
robustfov -i T1neck -r T1
bet T1.nii.gz T1_brain3 -R

Reload the newly extracted brain (T1_brain3) into FSLeyes and compare it to T1_brain1 to check that the extraction looks improved. Also compare the cropped T1 image to the original one with a large FOV (T1neck).

Another option is to leave the large FOV and manually set the initial centre via the -c option on the command line. To do that, you first need to examine the T1 scan in FSLeyes to get a rough estimate (in voxels) of where the centre of the brain is.
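For example, after reading rough voxel coordinates off the location panel in FSLeyes, the call might look like the sketch below (the coordinates and output name are hypothetical; substitute the values you observe). It is shown as a dry run that just prints the command:

```shell
# Hypothetical centre-of-brain voxel coordinates (x, y, z) read off in
# FSLeyes; delete 'echo' to actually run the extraction.
echo bet T1.nii.gz T1_brain4 -c 90 110 130
```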

There is another BET option which can improve "skull-stripping": the fractional intensity threshold, which by default is set to 0.5. You can change it to any value between 0 and 1. Smaller values give larger brain outline estimates (and vice versa). Thus, you can make it smaller if you think that too much brain tissue has been removed.

To use it, you would need to use the -f option (e.g., bet T1.nii.gz T1_brain -f 0.3).

Changing the fractional intensity

In your own time (after the workshop) you can check the effect of changing the fractional intensity threshold to 0.1 and 0.9 (however make sure you name the outputs accordingly, so you know which one is which).
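One way to organise that comparison is a small loop over candidate thresholds, with the value encoded in each output name so you know which is which. A sketch (run inside the BET folder; the output names are made up), shown as a dry run that prints each command rather than executing it:

```shell
# Print one bet command per fractional intensity value;
# delete 'echo' to actually run the extractions.
for f in 0.1 0.5 0.9; do
    echo bet T1.nii.gz "T1_brain_f${f}" -f "${f}"
done
```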

It is very important that after running BET you always examine (using FSLeyes) the quality of the brain extraction process performed on each and every T1.

The strategy you need might differ between participants in the same study, and you might need to try different options. The general recommendation is to combine cropping (if needed) with the -R option. However, this may not work for all T1 scans; some types of T1 scans work better with one strategy than with another. Therefore, it is good to always try a range of options.

Now you should be able to "skull-strip" T1 scans as needed for fMRI analyses!

"},{"location":"workshop5/preprocessing/#exploring-the-data-and-renaming-the-mri-scan-files","title":"Exploring the data and renaming the MRI scan files","text":"

By now you should have a copy of the reconstructed fMRI data in your own folder. As described above, the /recon version of the directory contains the MRI data from 15 participants acquired over several years from various site visits.

The datasets have been reconstructed into the NIFTI format. The T1 images in each directory are named T1.nii.gz. The first (planning) scan sequences (localisers) have been removed in each directory as these will not be needed for any subsequent analysis we are doing.

Navigate inside the recon folder and list the contents of these directories (using the ls command) to make sure they actually contain imaging files. Note that all the imaging data here should be in NIFTI format.

You should see the set of participant directories labelled p01, p02 etc., all the way up to the final directory, p15.

The directory structure should look like this:

~/chechlmy-chbh-mricn/xxx/recon/
                              ├── p01/
                              ├── p02/
                              ├── p03/
                              ├── p04/
                              ├── p05/
                              ├── ...
                              ├── p13/
                              ├── p14/
                              └── p15/

Verifying the data structure

Please verify that you have this directory structure before proceeding!

Explore what's inside each participant folder. Please note that each participant folder only contains reconstructed data. It's a good idea to store raw and reconstructed data separately. At this point you should have access to reconstructed participants p01 to p15. The reconstructed data should be in folders named ~/chechlmy-chbh-mricn/xxx/recon/p01 etc.

However, apart from the T1 images that have been already renamed for you, the other reconstructed files in this directory will have unusual names, created automatically by the dcm2nii conversion program.

You can see this by typing into your terminal:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p03
ls

Which should list:

fs004a001.nii.gz
fs005a001.nii.gz
T1.nii.gz

It is poor practice to keep these names as they do not reflect the actual experiment and will likely be a source of confusion later on. We should therefore rename the files to something meaningful. For this participant (p03) the first fMRI scan is file 1 (fs004a001.nii.gz) and the second fMRI scan is file 2 (fs005a001.nii.gz). Rename the files as follows (to do this you need to be inside folder p03):

immv fs004a001 fmri1
immv fs005a001 fmri2

Renaming files

The immv command is an FSL command that works just like the standard Linux mv command, except that it automatically takes care of the filename extensions. It saves you from having to write out mv fs004a001.nii.gz fmri1.nii.gz, which would be the standard Linux command to rename the file.

You can of course name these files to anything you want. In principle, you could call the fMRI scan run1 or fmri_run1 or epi1 or whatever. The important thing is that you need to be extremely consistent in the naming of files for the different participants.

For this workshop we will use the naming convention above and call the files fmri1.nii.gz and fmri2.nii.gz.

As the experimenter you would normally be familiar with the order of acquisition of the different sequences, and therefore with the order of the resulting files, including which one is the T1 image. You would write these down in your research log book whilst acquiring the MRI data. But sometimes, as here, if data is given to you later, it may not be clear which particular file is the T1 image.

There are several ways to figure this out:

  1. The very first file will always (unless it has been deleted before it got to you) be a planning scan (localizer). This can be ignored. In general, the T1 image is very likely to be either the second file or the very last file.
  2. If you look at the list of file sizes (using ls -al) you should be able to see that the T1 image is smaller than most typical EPI fMRI images. Also, if there is more than one fMRI sequence (as here from p03 onwards), you will see that several files have the same file size and the odd one out is the T1.
  3. If you load the images into FSLeyes and look at them individually it should be very obvious which image is the T1. Remember the T1 image is a single volume, in high spatial resolution. It will also likely have a much larger field of view (showing all of the skull and part of the spine). The fMRI images will consist of many volumes (click through several volumes to check), be of lower spatial resolution (it will look coarser) and have a more limited field of view.

If you have access to the NIFTI format files (.nii.gz, as we have here), then you can use one of the FSL command-line tools, fslinfo, to examine the protocol information in the file from a terminal window. This will show you the number of volumes in the acquisition (remember, this is 1 volume for a T1 image) as well as other information about the number of voxels and the voxel size.

Together this information is sufficient to work out which file is the T1 and which are the fMRI sequence(s).
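Putting this together, you could loop fslinfo over every NIFTI file in a participant folder and report just the volume count; the file reporting a single volume will be the T1. A minimal sketch, assuming FSL is loaded so that fslinfo is on your PATH (fslinfo's dim4 field holds the number of volumes):

```shell
# Report the number of volumes for each image in the current folder.
shopt -s nullglob            # expand to nothing if there are no matches
for f in *.nii.gz; do
    nvols=$(fslinfo "$f" | awk '$1 == "dim4" {print $2}')
    echo "${f}: ${nvols} volume(s)"
done
```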

For example, if you type the following in your terminal:

cd ..
cd p08
fslinfo fs005a001.nii.gz

You should see something like the image below:

Before proceeding to the next section on running a first-level fMRI analysis, close your terminal.

"},{"location":"workshop5/workshop5-intro/","title":"Workshop 5 - First-level fMRI analysis","text":"

Welcome to the fifth workshop of the MRICN course!

The module lectures provide a basic introduction to fMRI concepts and the theory behind fMRI analysis, including the physiological basis of the BOLD response, fMRI paradigm design, pre-processing and single subject model-based analysis.

In this workshop you will learn how to analyse fMRI data for individual subjects (i.e., at the first level). This includes running all pre-processing stages and the first level fMRI analysis itself. The aim of this workshop is to introduce you to some of the core FSL tools used in the analysis of fMRI data and to gain practical experience with analysing real fMRI data.

Specifically, we will explore FEAT (FMRI Expert Analysis Tool, part of FSL) to walk you through the basic steps of first level fMRI analysis. We will also revisit the use of the Brain Extraction Tool (BET), and learn how to troubleshoot problematic "skull-stripping" in certain cases.

Overview of Workshop 5

Topics for this workshop include:

We will not go into detail as to why and how specific values of the default settings have been chosen. Some values should be clear to you from the lectures or resource-list readings; please check there, or feel free to ask if you are still unclear. We will explore some general examples. Note that for your own projects you are very likely to want to change some of these settings/parameters depending on your study aims and design.

"},{"location":"workshop6/running-containers/","title":"Scripting analyses, submitting jobs on the cluster, and running containers","text":""},{"location":"workshop6/running-containers/#introduction-to-scripting","title":"Introduction to scripting","text":"

Script files are text files that contain Linux command-line instructions, equivalent to typing a series of commands in the terminal or performing the equivalent steps in a software GUI (e.g., the FSL GUI). By scripting, it is possible to automate most FSL processing, including both diffusion MRI and fMRI analysis. In the previous workshop you learned how to set up a first level fMRI model for a single experimental run for one participant. Subsequently, in your own time, you were asked to repeat that process for the other participants (15 participants in total), some with 2 or 3 experimental runs.

The notes below provide a basic introduction to Linux (bash) scripting as well as some guidelines and examples on how to automate the first-level analysis as you might want to do when completing the Data Analysis assignment.

To do this, we have provided some basic information and script examples. You can use the examples when analysing the remaining participants' data (the task you were given at the end of the previous workshop) if you have not done so already. If you have, you can either repeat that process by scripting, or apply it to your assessment. You can, of course, also complete all these tasks without scripting.

All the example scripts shown below are in the folder:

/rds/projects/c/chechlmy-chbh-mricn/module_data/scripts\n

To start please copy the entire folder into your module directory. Please note that to run some of the scripts as per examples below, you will need to load FSL, something you should know how to do.

A script can be very simple, containing just commands that you already know how to use, with each command put on a separate line. To create a script for automating FSL analysis, the most widely used language is bash (shell). To write a bash script you need a plain text editor (e.g., vim, nano). If you are not familiar with using a text editor in Linux terminal, there is a simple way of creating and/or editing scripts using the BlueBEAR portal.

You can start a new script by clicking on \u201cNew File\u201d and naming it for example \u201cmy_script.sh\u201d and next clicking on \u201cEdit\u201d to start typing commands you want to use. You can also use \u201cEdit\u201d to edit existing scripts.

The shebang

The first line of every bash script should be #!/bin/bash. This is the 'shebang' or 'hashbang' line. It tells the system which interpreter to use to execute the script.

Suppose we want to create a very simple script that repeatedly uses one of the FSL command-line tools. Let's say that we want to find out the volume of each of the T1 brains in our experiment. The FSL command that will tell us this is fslstats, together with the -V option, which lists the number and volume of non-empty voxels.

To create this script, you would type the text below as in the provided brainvols.sh script example. To view it, select this script and click on 'Edit'. Alternatively, you can start a new file and copy the commands as shown below. (In the actual script you would need to replace xxx with the name of your own directory).

#!/bin/bash

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon
fslstats p01/T1_brain -V
fslstats p02/T1_brain -V
fslstats p03/T1_brain -V
fslstats p04/T1_brain -V
fslstats p05/T1_brain -V
fslstats p06/T1_brain -V
fslstats p07/T1_brain -V
fslstats p08/T1_brain -V
fslstats p09/T1_brain -V
fslstats p10/T1_brain -V
fslstats p11/T1_brain -V
fslstats p12/T1_brain -V
fslstats p13/T1_brain -V
fslstats p14/T1_brain -V
fslstats p15/T1_brain -V

Whether you are editing or creating a new script, you need to save it. After saving, exit the editor.

Next you need to make the script executable (as below); remember that the script will run in the current directory (pwd). You also need to make a script executable if you have copied it from someone else.

To make your script executable type in your terminal: chmod a+x brainvols.sh

Running the script without permissions

If you try to run the script without making it executable, you will get a permission error.
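You can see both states for yourself with a throwaway script; a minimal sketch (the name demo.sh is made up, and this works anywhere, not just on BlueBEAR):

```shell
# A newly created file is not executable, so running it is refused;
# chmod a+x grants execute permission to all users.
echo '#!/bin/bash' > demo.sh
./demo.sh 2>/dev/null || echo "permission denied"
chmod a+x demo.sh
./demo.sh && echo "demo.sh is now executable"
```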

To run the script, type in your terminal: ./brainvols.sh

You can now tell which participant has the biggest brain.

The previous script hopefully worked. But it is not very elegant, and it is not much of an improvement over typing the commands one at a time in a terminal window. However, the bash scripting language provides an extra layer of simple programming constructs that we can use in combination with the FSL commands we want to run, making our scripts shorter and more efficient.

Bash has a for/do/done construct for looping over a list of items, and an echo command for printing labels as the script runs. So, let's use these to create an improved script with a loop. This is illustrated in the example brainvols2.sh:

#!/bin/bash

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon
for p in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15
do
    echo -n "Participant ${p}: "
    fslstats ${p}/T1_brain -V
done

Both examples above assume that you have already run BET (brain extraction) on the T1 scans. But of course, you could also automate the brain extraction and complete both tasks, i.e., running bet and calculating volumes, using a single script. This is illustrated in the example bet_brainvols.sh:

#!/bin/bash

# navigate to the folder containing the T1 scans
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon

# a for loop over participant data files
for participant_num in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15

# do the following ...
do
    # show the participant number
    printf "Participant ${participant_num}: \n"

    # delete any old extracted brain image; -f avoids an error if the
    # file does not exist (we can see this happen in real time in the
    # file explorer)
    rm -f ${participant_num}/T1_brain.nii.gz

    # extract the brain from the T1 image
    bet ${participant_num}/T1.nii.gz ${participant_num}/T1_brain

    # list the number of non-empty brain voxels
    fslstats ${participant_num}/T1_brain -V
# end the loop
done

# ============================================================
# END OF SCRIPT

Some of the most powerful scripting comes when manipulating FEAT model files. When you create a design for a first level fMRI analysis in the FEAT GUI and press the 'Go' button, FEAT writes the model analysis file into the output directory. The name of this saved file is design.fsf. Once you have created one of these files, you can load it back into FEAT, modify only the parts that differ between analyses, and then resave it, e.g., to change parameters or to adapt it for another participant (see the workshop materials covering the first level of fMRI analysis).

Alternatively, since design.fsf is a text file it can also be opened (and edited) in a text editor. Because the experiment, and therefore the model design, is almost the same for all participants, there is very little difference in the design.fsf files between the level one analyses for different participants. In fact, if following the directory structure naming convention suggested in the workshop, the only thing that changes for a particular run is the identifier of the participant.

So, if we copy the design file for p01's first scan (i.e. the file feat/1/p01_s1.feat/design.fsf), open it up in a text editor, search and replace every instance of p01 with p02 and then save it, we should have the model file for p02's first scan.

In general, the model files will only differ by the participant identifiers (p01-p15) and the identifiers used for the particular scan number (s1, s2 and s3 in the output directory names, and fmri1, fmri2 and fmri3 in the input EPI file names).
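Because it is just text substitution, the same search-and-replace can be done from the command line rather than in an editor. Here is a toy sketch using sed on a single made-up line of the kind design.fsf contains (perl -p -e 's/p01/p02/', as used in the create_alldesignmodels.sh script later in this section, works in the same way):

```shell
# Create a one-line stand-in for a design.fsf entry, then swap the
# participant identifier: every occurrence of p01 becomes p02.
echo 'set feat_files(1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01/fmri1"' > demo.fsf
sed -i 's/p01/p02/g' demo.fsf
cat demo.fsf
```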

The special cases are the scans for the first two participants. These scans were only 93 volumes long, whereas all the rest of the scans following this are 94 volumes long. However, given the above information and the model file for participant 1, scan 1, we can now create a script that will generate all other model files.

Firstly, let's create a new directory (models) where you will keep your model files. Navigate to your folder (/rds/projects/c/chechlmy-chbh-mricn/xxx/) and type:

mkdir models

Now copy the script create_alldesignmodels.sh into that folder. The script contains the following code:

#!/bin/bash

# Copy over the saved model file for p01 scan 1
cp /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat/design.fsf p01_s1.fsf

# Create model file for p02 scan 1
cp p01_s1.fsf p02_s1.fsf
perl -i -p -e 's/p01/p02/' p02_s1.fsf

# Create model files for p03-p15 scan 1
for p in p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15
do
    cp p01_s1.fsf ${p}_s1.fsf
    perl -i -p -e "s/p01/${p}/" ${p}_s1.fsf
    perl -i -p -e 's/93/94/' ${p}_s1.fsf
done

# Create model files for p03-p15 scan 2
for p in p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15
do
    cp ${p}_s1.fsf ${p}_s2.fsf
    perl -i -p -e 's/_s1/_s2/' ${p}_s2.fsf
    perl -i -p -e 's/fmri1/fmri2/' ${p}_s2.fsf
done

# Create the model file for p05 scan 3
cp p05_s1.fsf p05_s3.fsf
perl -i -p -e 's/_s1/_s3/' p05_s3.fsf
perl -i -p -e 's/fmri1/fmri3/' p05_s3.fsf

Edit the file (hint: replace the xxx), save it, make it executable, and then run it.

If it has worked, you should now have a directory full of model files. Each of them can be run from the command line with a command such as feat p01_s1.fsf, or with a script (you should be able to create such a script using the earlier examples).
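Such a runner might look like the sketch below (assuming FSL is loaded and you are inside the models directory). It is shown as a dry run that prints one feat command per model file; delete the echo once you are happy with the list:

```shell
#!/bin/bash

# Run FEAT on every saved model file in turn.
shopt -s nullglob               # skip the loop if there are no .fsf files
for design in *.fsf; do
    echo feat "${design}"       # delete 'echo' to actually launch each analysis
done
```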

FSL's scripting tutorial

You can find more information and other examples on FSL's scripting tutorial webpage.

"},{"location":"workshop6/running-containers/#submitting-jobs-to-the-cluster","title":"Submitting jobs to the cluster","text":"

The first part of this workshop introduced you to running bash scripts using the terminal in the BlueBEAR GUI. However, in addition to running bash scripts in this way, you can also create scripts and run analysis jobs directly on the cluster with Slurm (BlueBEAR's high performance computing (HPC) scheduling system).

What is Slurm?

Understanding how Slurm works is beyond the scope of this course, and is not strictly necessary, but you can find out more by reading the official Slurm documentation.

In previous workshops we were using the BlueBEAR Portal to launch the BlueBEAR GUI Linux desktop and, from there, the built-in terminal. As mentioned in workshop 1, you can also use the BlueBEAR Portal to jump directly to a BlueBEAR terminal, to access one of the available login nodes and, from there, run analysis jobs.

While the BEAR Portal provides convenient web-based access to a range of BlueBEAR services, you don't have to go via the portal; you can instead use the command line to access BlueBEAR through one of the multiple login nodes available at the address bluebear.bham.ac.uk.

Exactly how you do that will depend on the operating system your computer uses; you can find detailed information about accessing BlueBEAR using the command line from this link.

The process of submitting and running jobs on the cluster is exactly the same whether you use the BlueBEAR terminal via the "Clusters" tab on the BlueBEAR portal or the command line. To run a job with Slurm (BlueBEAR's HPC scheduling system), you first need to prepare a job script and then submit it using the command sbatch.

In the first part of this workshop you learned how to create bash scripts to automate FSL analyses; to turn these scripts into job scripts for Slurm, you need to add a few additional lines. This is illustrated in the example below, bet_brainvols_job.sh:

#!/bin/bash

#SBATCH --qos=bbdefault
#SBATCH --time=60
#SBATCH --ntasks=5

module purge; module load bluebear
module load FSL/6.0.5.1-foss-2021a

set -e

# navigate to the folder containing the T1 scans
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon

# a for loop over participant data files
for participant_num in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15
do
    # do the following...

    # show the participant number
    printf "Participant ${participant_num}: \n"

    # delete any old extracted brain image; -f avoids an error
    # if the file does not exist
    rm -f ${participant_num}/T1_brain.nii.gz

    # extract the brain from the T1 image
    bet ${participant_num}/T1.nii.gz ${participant_num}/T1_brain

    # list the number of non-empty brain voxels
    fslstats ${participant_num}/T1_brain -V

# end the loop
done

# ============================================================
# END OF SCRIPT

This is a modified version of the bet_brainvols.sh script. If you compare the two, you will notice that after the #!/bin/bash line and before the start of the script loop, a few new lines have been added. These define the BlueBEAR resources required to complete the analysis job.

These are explained below:

  1. #SBATCH --qos=bbdefault requests the standard quality of service (scheduling policy) for BlueBEAR jobs.
  2. #SBATCH --time=60 requests a maximum wall-clock time for the job, in minutes; the job is stopped if it runs longer.
  3. #SBATCH --ntasks=5 requests the number of tasks (cores) for the job.
  4. module purge; module load bluebear and module load FSL/6.0.5.1-foss-2021a reset the software environment and load the modules the job needs.
  5. set -e makes the script stop immediately if any command fails.

Setting up resources is not always straightforward and will often require some trial and error: if you do not request sufficient resources and time, your job might fail, and if you request too many resources, it might be rejected or sit in a long queue until the required resources become available.

You can find detailed guidelines on specifying the required resources in the BlueBEAR documentation.

The script above can be run on the cluster using the BlueBEAR terminal or the command line. In either case, you use the sbatch command, which submits your job to the BlueBEAR scheduling system with the requested resources. Once submitted, it will run on the first available node(s) providing the resources you requested in your script.

For example, to submit your BET job as in the example script above, in the BlueBEAR terminal you would type:

sbatch bet_brainvols_job.sh

The system will return a job number, for example:

Submitted batch job XXXXXX

You need this number to monitor or cancel your job.

To monitor your job, you can use the squeue command by typing in the terminal:

squeue -j XXXXXX

This is a command for viewing the status of your jobs. It will display information including the job's ID and name, the user who submitted the job, the time elapsed and the number of nodes being used.

To cancel a queued or running job, you can use the scancel command by typing in the terminal:

scancel XXXXXX

"},{"location":"workshop6/running-containers/#containers","title":"Containers","text":"

In previous workshops we have been using different pre-installed versions of FSL through the modules available on BEAR Apps. Sometimes, however, you might need a different (older or newer) version of FSL, or a differently compiled FSL. While you can request an up-to-date version of FSL - following a new release it is added to BEAR Apps (although this might take a while) - you cannot request a change to how FSL is compiled on BEAR Apps, as this would affect other BlueBEAR users or might not even be possible due to the BlueBEAR set-up.

Instead, you can install FSL within a controlled container and use this contained version instead of what's available on BEAR Apps.

BlueBEAR supports containerisation using Apptainer. Each BlueBEAR node has Apptainer installed, which means that the apptainer command is available without needing to first load a module. Apptainer can download images from any openly available container repository, for example Docker Hub or Neurodesk. Such sites provide information on the available software, the software versions and how to download a specific container.

Downloading containers

Please do not try to download any containers in this workshop!

In the folder scripts, which you copied at the start of this workshop, you will find the subdirectory containers with two FSL containers (two different versions of FSL) downloaded from Neurodesk:

/rds/projects/c/chechlmy-chbh-mricn/module_data/scripts/containers

The simplest way to use a container would be to load it and then use specific commands to run various FSL tools e.g., bet.

For example, you would type in your terminal:

apptainer shell fsl_6.0.7.4_20231005.sing
bet T1.nii.gz T1_brain.nii.gz

You could also use a container in your job script, replacing the BEAR Apps version of FSL with the FSL container. To do that, you would need to add a line of the form below to your script:

apptainer exec [name of the container] [command to run]

Below is a very simple example of such a script, example_job_fslcontainer.sh which you can find inside the subdirectory containers:

#!/bin/bash
#SBATCH --qos=bbdefault
#SBATCH --time=30
#SBATCH --ntasks=5

module purge; module load bluebear

set -e

apptainer exec fsl_6.0.7.4_20231005.sing bet T1.nii.gz T1_brain.nii.gz
"},{"location":"workshop6/workshop6-intro/","title":"Workshop 6 - Scripts, containers and running analyses on the academic computing cluster","text":"

Welcome to the sixth workshop of the MRICN course!

Prior workshops introduced you to running MRI analyses with various FSL tools, either via the FSL GUI or by typing a simple command in the terminal. In this workshop we will look at how to automate FSL analyses by creating scripts. Subsequently, we will explore how to run FSL scripts more efficiently by submitting jobs to the cluster. The final part of this workshop will introduce how to use FSL containers rather than the pre-installed versions of FSL available as modules on BEAR Apps.

Overview of Workshop 6

Topics for this workshop include:

More information

The BEAR Technical Docs provides guidance on submitting jobs to the cluster.

"},{"location":"workshop7/advanced-fmri-tools/","title":"Advanced fMRI analysis tools","text":"

The materials included in this worksheet follow on from the previous two FSL-specific fMRI workshops: in the first workshop you ran first-level fMRI analyses, whilst in the second workshop you combined the data across scans within each participant (second-level analysis) and across the group (third-level analysis) in a couple of different ways.

The information included in this page covers:

As in the other materials, we will not discuss in detail why you might choose certain parameters. The aim is to familiarise you with some of the available analysis tools. This worksheet will take you, step by step, through these analyses using the FSL tools. You are encouraged to read the pop-up help throughout (hold your mouse over the FSL GUI buttons and menus) and to refer to your lecture notes or resource-list readings. You can also find more information on the FSL website.

If you have correctly followed the instructions from the previous workshops, you should by now have 29 first-level FEAT directories for the analysis of each scan of each participant:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/

e.g. /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat

(1 feat analysis directory for participant 1, 1 for participant 2, 3 for participant 5, and 2 for everyone else).

At the second-level you should have 13 simple within-subjects fixed-effects directories:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/

e.g. /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p03.gfeat

(one each for participants 3-15)

You should also have a directory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat (for the all-participants-all-runs second level model)

Finally, you should have 2 third-level group analysis folders:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis1.gfeat

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis2.gfeat

(containing the third level group analyses corresponding to the two different ways of combining data at third level)

"},{"location":"workshop7/advanced-fmri-tools/#testing-the-effects-of-different-thresholds","title":"Testing the effects of different thresholds","text":"

Try as many of the suggestions below as you have time for. Try 1 or 2 at the start of the session and return to these later if you have time, or outside the workshop. (You can load up an existing model by running FEAT and using the 'Load' button. The model file will be called design.fsf and can be found in the .feat or .gfeat directory of an existing analysis folder.)

  1. Ordinary Least Squares (OLS) vs FLAME: Repeat the third level group analyses from FSL Workshop 5, but on the 'Stats' tab select 'Mixed Effects: Simple OLS'

  2. Different correction for multiple comparisons: Repeat the third level group analyses from FSL Workshop 5, but on the 'Post-Stats' tab for 'Thresholding', use the pull down menu to select 'Voxel correction'.

  3. Different thresholds and correction for multiple comparisons: Repeat the third level group analyses from FSL Workshop 5, but on the 'Post-Stats' tab use 'Cluster Thresholding' but choose a different z-threshold.

Examining the results

Look at the results from each method of correction. Use both the webpage output and FSLeyes to look at your data. Find the regions of significant activation. Try looking at a time series (see the 'Data Visualisation' workshop notes for help).

"},{"location":"workshop7/advanced-fmri-tools/#t-test-vs-f-test","title":"T-test vs F-test","text":"

Start up a FEAT GUI, by opening a new terminal window and typing fsl &. Click on the 'Feat' button [or type Feat & in the terminal window to directly open the FEAT GUI].

Load up one of the third level analyses run in the last workshop (e.g., 'GroupAnalysis1' or 'GroupAnalysis2').

Now follow the instructions below:

Data Tab

Stats Tab

Differences between t-test and F-test images

Once it has run inspect the resulting output in a browser. How does the rendered stats image for the F-test (the zfstat image) differ from the t-stat image? Why are they different?

"},{"location":"workshop7/advanced-fmri-tools/#extracting-information-from-regions-of-interest-rois-using-featquery","title":"Extracting information from Regions of Interest (ROIs) using FEATquery","text":"

FEATquery is an FSL tool which allows you to explore FEAT results by extracting information from regions of interests within specific (MNI) coordinates or using a mask.

In the examples below we will get basic stats from two pre-prepared Regions of Interest (ROIs) using the first level models that you have run. You can also make your own ROIs using FSLeyes (you should remember how to create ROI masks from previous workshops).

To start FEATquery, you need to load FSL (see previous workshops), and either - in a terminal - type Featquery & or on the FSL GUI click the button on the bottom right labelled 'Misc' and select the menu option 'Featquery'.

In any case, when FEATquery is open, follow the instructions below:

Input FEAT directories

Stats images of interest

Once you have done that the GUI interface will update and you will see a list of possible statistics.

Input ROI selection

For the 'Mask image' entry select either one of the prepared masks.

Either:

/rds/projects/c/chechlmy-chbh-mricn/module_data/masks/V1.nii.gz

or

/rds/projects/c/chechlmy-chbh-mricn/module_data/masks/Parietal.nii.gz

Output options

When ready, click the 'Go' button!

Examining the FEATquery output

Inspect the results by opening report.html inside the 'V1' folder. Do they make sense?

"},{"location":"workshop7/higher-level-analysis/","title":"Running the higher-level fMRI analysis","text":"

If you have correctly followed the instructions from the previous workshop, you should now have 29 FEAT directories arising from each fMRI scan of each participant, e.g.,

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat  \u2190 Participant 1 scan 1\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p02_s1.feat  \u2190 Participant 2 scan 1\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p03_s1.feat  \u2190 Participant 3 scan 1\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p03_s2.feat  \u2190 Participant 3 scan 2\n(\u2026)\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p15_s2.feat  \u2190 Participant 15 scan 2\n

(where xxx = your particular login (ADF) username).

For participants 1 and 2 you should have only one FEAT directory. For participants 3-4 and 6-15 you should have 2 FEAT directories. For participant 5 you should have 3 FEAT directories. You should therefore have 29 complete first level FEAT directories.
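As a quick check of that arithmetic (1 + 1 + 24 + 3 = 29), the expected directory names can be generated and counted in plain shell. This is only a sketch of the naming pattern described above; it does not touch the real directories:

```shell
# Generate the expected first-level directory names and count them
{
  echo p01_s1.feat; echo p02_s1.feat                    # participants 1-2: one run each
  for p in 03 04 06 07 08 09 10 11 12 13 14 15; do
      echo "p${p}_s1.feat"; echo "p${p}_s2.feat"        # two runs each
  done
  echo p05_s1.feat; echo p05_s2.feat; echo p05_s3.feat  # participant 5: three runs
} > expected_runs.txt
wc -l < expected_runs.txt   # prints 29
```

Comparing this list against `ls` of your own feat/1 directory is an easy way to spot a missing analysis.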

If you haven\u2019t done so already, please check that the output of each of these first level analyses looks ok, either through the FEAT Report or through FSLeyes. If you would like to use the FEAT Report, select the report (called report.html) from within each FEAT directory of your analysis, e.g.:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p03_s1.feat/report.html

and either open it in a new tab or in a new window.

Selecting a participant's first-level FEAT output (left) and examining the FEAT Report (right).

Check all of the reports and click on each of the sub-pages in turn:

  1. Check that the analysis ran ok by checking the 'Log' page.
  2. Check that the 'Registration' looks ok.
  3. Check how much they moved on the 'Pre-Stats' page.
  4. Have a quick look at the 'Post-Stats' page.

Motion correction

Participant 5 was scanned three times. In one of these scans they moved suddenly. Use the motion correction results to decide which scan this was. We will ignore this scan for the rest of this workshop. If you have not completed this stage of the analysis, you should do this now before continuing on. Refer to the worksheet from previous workshops.

"},{"location":"workshop7/higher-level-analysis/#second-level-analysis-averaging-across-runs-within-participants","title":"Second-level analysis - averaging across runs within participants","text":"

Most of our participants did the experiment twice \u2013 a repeated measurement. How do we model this data? There are different ways we can do this.

The simplest way is to combine the data within participant before generating group statistics across the entire group of participants. This corresponds with what you might do if you are analysing data as you go along. Here, we will average their data over the two fMRI runs so that we can benefit from the extra power. Participant 5 did the experiment 3 times. For the moment, for this participant, we will choose only the two scans where they moved less for further analysis.

Choose one of the participants that did the experiment twice (not participant 1 or 2), such as participant 3. Open a terminal and load FSL:

module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n

Then navigate to the folder where your first-level directories are located and open FEAT:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1\nFeat &\n

At the top left of the GUI you will see a pull down menu labelled 'First-level analysis'. Click here to pull down this menu and choose 'Higher-level analysis'. The options available will change.

Now fill out the tabs as below:

Stats

It is necessary to select this now in order to reduce the number of inputs on the 'Data' tab to 2 (the default for all other higher level model types is a minimum of 3 inputs). Note that choosing 'Fixed effects' will ignore cross-scan variance, which is fine to do here because these are scans from the same person at the same time.

Data

Once you have selected the FEAT directories for input, the GUI will update to show what COPE files (contrasts) are available for analysis.

The naming scheme here (as with your raw data directories, reconstruction directories and your first level analysis directories) needs to be clear and logical.

It is therefore sensible to use a directory structure like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2 (directory for all FEAT 2nd level analyses)

For example, you can then put the second level analysis for participant 3 in the subdirectory below, and so on.

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p03

Stats (again)

Click the 'Done' button. This will produce a schematic of your design. Check it makes sense and close its window.

Post-Stats

As with the first-level analysis, from the 'Thresholding' pull down menu select 'Uncorrected' and leave the P-threshold value at p<0.05.

Thresholding and processing time

Note this is not the thresholding that you will want at the final (third) stage of processing (where you will probably want cluster thresholding), but at the first and second level stages it will speed up the processing per person.

Click the 'Go' button. Repeat for the other participants.

In the web browser look at the results in the FEAT report (i.e., by opening the report.html). Note that the output folder is called p03.gfeat. You will have to click on the link marked 'Results' and then on the link labelled 'Lower-level contrast 1 (vision)' on that page and then on the 'Post-stats' link.

Comparing second-level and first-level results

Are the results better than for just one scan from p03?

"},{"location":"workshop7/higher-level-analysis/#third-level-analysis-combining-data-across-participants-from-the-second-level","title":"Third-level analysis - combining data across participants from the second level","text":"

In FSL, the procedure for setting up an analysis across participants is very similar to averaging within a participant. The main difference is that we specifically need to set up the analysis to model the inter-subject variance. This allows us to generalise beyond our specific group of participants and to interpret the results as being typical of the wider population.

In this demonstration experiment, 12 participants did the scan twice, 1 was scanned three times, and 2 did the scan only once. (Note that it should be rather obvious that this is not an ideal design for a real experiment). In our case, we have averaged within participants and now we will combine these second level analyses with the first level analyses from those participants who were only scanned once.

Close FEAT if you still have it open. Then open it again by typing Feat &.

Don't close the terminal if you don't have to!

Please note that if you close the terminal here you will first need to load FSL again and navigate back to your folder!

At the top left of the GUI select the pull down menu labelled 'First-level analysis' and choose 'Higher-level analysis'.

Now complete each of the tabs as described below:

Data

We have 13 participants who did the experiment at least twice (who we combined via a second-level analysis) and 2 who did the experiment only once (who we only analysed at first-level). Therefore, for this next (third level) analysis we will need to combine over first and second level analyses.

In the dialogue that appears you need to add in the path to the .feat directory for each person. For the first-level analyses this is something like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p02_s1.feat\n

For the participants where you did a second level analysis this will be:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p03.gfeat/cope1.feat\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p04.gfeat/cope1.feat\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p05.gfeat/cope1.feat\n...\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/p15.gfeat/cope1.feat\n
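If you want to double-check the full input list before entering the paths in the GUI, a small shell sketch can generate it, assuming the naming convention above (xxx again stands for your own username):

```shell
# Build the 15 third-level input paths: 2 first-level .feat directories
# plus 13 second-level cope1.feat directories (xxx = your ADF username)
base=/rds/projects/c/chechlmy-chbh-mricn/xxx/feat
{
  echo "$base/1/p01_s1.feat"
  echo "$base/1/p02_s1.feat"
  for p in $(seq -w 3 15); do          # 03, 04, ..., 15 (zero-padded)
      echo "$base/2/p${p}.gfeat/cope1.feat"
  done
} > third_level_inputs.txt
wc -l < third_level_inputs.txt   # prints 15
```

One line per participant, matching the 15 inputs FEAT will ask for.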

Subdirectory location

Note that the actual feat subdirectories of interest for the second-level analyses are hidden inside the .gfeat directories.

You should now enter a name for the output analysis directory. Use the 'Output' directory entry to choose a directory to put the results in. As with your second-level analyses, the naming scheme here needs to be clear and logical.

It is therefore sensible to use a structure like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3 (directory for all FEAT 3rd level analyses)

For example, you can then put this current 3rd level output in the subdirectory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis1

Stats

Post-Stats

Accept the defaults.

Now click the 'Go' button!

When the group analysis has finished, check through the output (using the FEAT Report) and try to work out what each page means. For example, the 'Registration Summary' page will highlight areas where the registrations are not aligned between scans/participants. A few missing voxels are ok, but any more than that is a problem as you won't get results from areas where there are missing data.

"},{"location":"workshop7/higher-level-analysis/#second-level-analysis-the-all-in-one-method","title":"Second-level analysis - the 'all-in-one' method","text":"

A more complicated modelling approach at the second-level is to use one single second-level model (instead of separate models per participant) which incorporates all of the information available about participants and runs.

This corresponds with what you might do if you are analysing all the data after it has been collected, all in one go. Depending on the design of the particular experiment, this has the potential to be an improved approach as it allows a better estimate of both the between-run and the between-subject variance.

Close FEAT if you still have it open. Then open it again by clicking on the FEAT button in the FSL GUI (or type Feat & in the terminal window to directly open the FEAT GUI).

Now complete the tabs following the instructions below:

Data

(Note that there are only 28 inputs here as we are going to use all of the first level feat dirs as inputs except for the worst run of the three that participant 5 did).

As always you should now enter a name for the output analysis directory. Use the 'Output' directory entry to choose a directory to put the results in.

As this is a second-level model it should go under the directory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2 (directory for all FEAT 2nd level analyses)

and should be meaningfully named. For example, you could call it:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all

Stats

Check boxes in FSL

In the older versions of FSL, after selecting an option you would see a yellow checkbox; in the newer versions of FSL, such as the one we are using, the checkbox is yellow to start with, and after selecting an option you will see a tick \u2714\ufe0f inside the yellow checkbox.

Check it makes sense, and that you understand what it is showing, then close its window.

Post-Stats

Wait for the analysis to complete and then look at the results. Note that the output folder is called level2all.gfeat if you named it as above. You will have to click on the link marked 'Results' and then on the link labelled 'Lower-level contrast 1 (vision)' on that page and then on the 'Post-stats' link. This will then show you a contrast (rendered stats image) for each of the participants.

Comparing the second-level results

Are the results from this bigger model better than the simple fixed effects model for the same participant? For example, with participant p09?

"},{"location":"workshop7/higher-level-analysis/#third-level-analysis-combining-participant-data-from-the-all-in-one-second-level","title":"Third-level analysis - combining participant data from the 'all-in-one' second-level","text":"

We can now estimate the mean group effect by combining across participants from the better second-level analysis we have just calculated above.

Data

In the second-level analysis we just performed we combined data over both participants and runs, effectively collapsing across runs, and the output analyses were then the summary data for each of the 15 participants. Therefore, for this next (third-level) analysis we will again need to combine over the 15 participants.

To do this, follow the steps below:

If you have used the correct naming convention above, this will be:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope1.nii.gz \n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope2.nii.gz \n...\n/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/2/level2all.gfeat/cope1.feat/stats/cope15.nii.gz\n

As before, you should now enter a name for the output analysis directory. As this is a third level model it should go under the directory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3 (for all FEAT 3rd level analyses)

and should be meaningfully named. For example, you can then put this current 3rd level output in the subdirectory:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/3/GroupAnalysis2

Stats

Post-Stats

Now click the 'Go' button!

Comparing the third-level results

Check through the output when the group analysis has finished. Is the result better than the simple third level analysis above?

If you followed the instructions in the workshop materials, you should be able to replicate the results above in the report.html file inside the respective third level analysis folders ('GroupAnalysis1' and 'GroupAnalysis2').

As always, help and further information is also available on the relevant section of the FSL Wiki.

"},{"location":"workshop7/workshop7-intro/","title":"Workshop 7 - Higher-level fMRI analysis","text":"

Welcome to the seventh workshop of the MRICN course!

Prior lectures introduced you to the basic concepts and theory behind higher-level fMRI analysis, including multi-session analysis and the general linear model (GLM). In this workshop you will be learning practical skills in how to run higher level fMRI analysis using FSL tools.

This workshop follows on from the workshop on first-level fMRI analysis. In that workshop you analysed the first level data for 2 participants and at the end of the workshop you were asked to analyse the rest of the scans in the data set. Participants 1-2 had one fMRI experiment run each, participants 3-4 and 6-15 had 2 runs each and participant 5 had 3 runs, so there are a total of 29 runs from the 15 participants.

We will now combine the fMRI data across runs and participants in our second and third-level analyses.

Overview of Workshop 7

Topics for this workshop include:

As in the other workshops we will not discuss in detail why you might choose certain parameters. The aim of this workshop is to familiarise you with some of the available analysis tools. You are encouraged to read the pop-up help throughout (hold your mouse arrow over FSL GUI buttons and menus when setting up your FEAT design), and to refer to your lecture notes or resource list readings.

More information

As always, you can also find more information on running higher-level fMRI analyses on the FSL website.

"},{"location":"workshop8/functional-connectivity/","title":"Functional connectivity analysis of resting-state fMRI data using FSL","text":"

This workshop is based upon the excellent FSL fMRI Resting State Seed-based Connectivity tutorial, which has been adapted to run on the BEAR systems at the University of Birmingham, with some additional content covering Neurosynth.

We will run a group-level functional connectivity analysis on resting-state fMRI data of three participants, specifically examining the functional connectivity of the posterior cingulate cortex (PCC), a region of the default mode network (DMN) that is commonly found to be active in resting-state data.

To do this, we will:

"},{"location":"workshop8/functional-connectivity/#preparing-the-data","title":"Preparing the data","text":"

Navigate to your shared directory within the MRICN folder and copy the data over:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx\ncp -r /rds/projects/c/chechlmy-chbh-mricn/aamir_test/SBC .\ncd SBC\nls\n

You should now see the following:

sub1 sub2 sub3\n

Each of the folders has a single resting-state scan, called sub1.nii.gz, sub2.nii.gz and sub3.nii.gz respectively.

We will now create our seed region for the PCC. To do this, firstly load FSL and fsleyes in the terminal by running:

module load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\n

Check that we are in the correct directory (blah/your_username/SBC):

pwd\n

and create a new directory called seed:

mkdir seed\n

Now when you run ls you should see:

seed sub1 sub2 sub3\n

Let's open FSLeyes:

fsleyes &\n
Creating the PCC mask in FSLeyes

We need to open the standard MNI template brain, select the PCC and make a mask.

Here are the following steps:

  1. Navigate to the top menu and click on File \u279c Add standard and select MNI152_T1_2mm_brain.nii.gz.
  2. When the image is open, click on Settings \u279c Ortho View 1 \u279c Atlases. An atlas panel then opens on the bottom section.
  3. Select Atlas information (if it already hasn't loaded).
  4. Ensure Harvard-Oxford Cortical Structural Atlas is selected.
  5. Go into 'Atlas search' and type cing in the search box. Check the Cingulate Gyrus, posterior division (lower right) so that it is overlaid on the standard brain. (The full name may be obscured, but you can always check which region you have loaded by looking at the panel on the bottom right).

At this point, your window should look something like this:

To save the seed, click the save symbol which is the first of three icons on the bottom left of the window.

The window that opens should be your project SBC directory. Go into the seed folder and save your seed as PCC.

Extracting the time-series

We now need to binarise the seed and to extract the mean timeseries. To do this, leaving FSLeyes open, go into your terminal (you may have to press Enter if some text about dc.DrawText is there) and type:

cd seed\nfslmaths PCC -thr 0.1 -bin PCC_bin\n

In FSLeyes now click File \u279c Add from file, and select PCC_bin to compare PCC.nii.gz (before binarization) and PCC_bin.nii.gz (after binarization). You should note that the signal values are all 1.0 for the binarized PCC.
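To see what the -thr 0.1 -bin step did numerically, here is a plain-numbers sketch (using awk rather than FSL itself): values below 0.1 are zeroed, and every surviving non-zero value is set to exactly 1, which is why the binarized mask contains only 1.0s.

```shell
# Illustrate "-thr 0.1 -bin" on a handful of example voxel values:
# zero anything below 0.1, then binarise whatever is left
for v in 0.0 0.05 0.1 0.73 1.0; do
    awk -v v="$v" 'BEGIN { t = (v >= 0.1) ? v : 0; print v, (t != 0 ? 1 : 0) }'
done
# output: 0.0 0, 0.05 0, 0.1 1, 0.73 1, 1.0 1
```

The same logic applies voxel-wise across the whole image when fslmaths runs.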

You can now close FSLeyes.

For each subject, you want to extract the average time series from the region defined by the PCC mask. To calculate this value for sub1, do the following:

cd ../sub1\nfslmeants -i sub1 -o sub1_PCC.txt -m ../seed/PCC_bin\n

This will generate a file within the sub1 folder called sub1_PCC.txt.

We can have a look at the contents by running cat sub1_PCC.txt. The terminal will print out a list of numbers with the last five being:

20014.25528\n20014.919\n20010.17317\n20030.02886\n20066.05141\n

This is the mean level of 'activity' for the PCC at each time-point.

Now let's repeat this for the other two subjects.

cd ../sub2\nfslmeants -i sub2 -o sub2_PCC.txt -m ../seed/PCC_bin\ncd ../sub3\nfslmeants -i sub3 -o sub3_PCC.txt -m ../seed/PCC_bin\n
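The three per-subject commands follow one fixed pattern, so they could equally be generated by a loop. Here is a dry-run sketch: the echo prints each command instead of executing it, so you can check the paths first (drop the echo to run for real, from your SBC directory with FSL loaded):

```shell
# Print the fslmeants command for each subject (dry run; remove the
# echo to execute, running from the SBC directory with FSL loaded)
for s in sub1 sub2 sub3; do
    echo "fslmeants -i ${s}/${s} -o ${s}/${s}_PCC.txt -m seed/PCC_bin"
done
```

Running everything from the SBC directory with explicit paths is equivalent to cd-ing into each subject folder as above.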

Now if you go back to the SBC directory and list all of the files within the subject folders:

cd ..\nls -R\n

You should see the following:

This is all we need to run the subject and group-level analyses using FEAT.

"},{"location":"workshop8/functional-connectivity/#running-the-feat-analyses","title":"Running the FEAT analyses","text":""},{"location":"workshop8/functional-connectivity/#single-subject-analysis","title":"Single-subject analysisExamining the FEAT outputScripting the other two subjects","text":"

Close your terminal, open another one, move to your SBC folder, load FSL and open FEAT:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\nFeat &\n

We will run the first-level analysis for sub1. Set-up the following settings in the respective tabs:

Data

Number of inputs:

Output directory:

This is what your data tab should look like (with the input data opened for show).

Pre-stats

The data has already been pre-processed, so just set 'Motion correction' to 'None' and uncheck BET. Your pre-stats should look like this:

Registration

Nothing needs to be changed here.

Stats

Click on 'Full Model Setup' and do the following:

  1. Keep the 'Number of original EVs' as 1.
  2. Type PCC for the 'EV' name.
  3. Select 'Custom (1 entry per volume)' for the 'Basic' shape. Click into the sub1 folder and select sub1_PCC.txt. This is the mean time series of the PCC for sub1 and is the statistical regressor in our GLM model. This differs from analyses of task-based data, which will usually have an events.tsv file with the onset times for each regressor of interest.
  4. Select 'None' for 'Convolution', and uncheck both 'Add temporal derivative' and 'Apply temporal filtering'.

What are we doing specifically?

The first-level analysis will subsequently identify brain voxels that show a significant correlation with the seed (PCC) time series data.

Your window should look like this:

In the same General Linear Model window, click the 'Contrast & F-tests' tab, type PCC in the title, and click 'Done'.

A blue and red design matrix will then be displayed. You can close it.

Post-stats

Nothing needs to be changed here.

You are ready to run the first-level analysis. Click 'Go' to run. On BEAR, this should only take a few minutes.

To actually examine the output, go to the BEAR Portal and at the menu bar select Files \u279c /rds/projects/c/chechlmy-chbh-mricn/

Then go into SBC/sub1.feat, select report.html and click 'View' (top left of the window). Navigate to the 'Post-stats' tab and examine the outputs. It should look like this:

We can now run the second and third subjects. As we only have three subjects, we could manually run the other two by just changing three things:

  1. The fMRI data path
  2. The output directory
  3. The sub_PCC.txt path

Whilst it would probably be quicker to do it manually in this case, that is not practical in other instances (e.g., more subjects, or subjects with different numbers of scans). So, instead we will script the first-level FEAT analyses for the other two subjects.

The importance of scripting

Scripting analyses may seem challenging at first, but it is an essential skill of modern neuroimaging research. It enables you to automate repetitive processing steps, dramatically reduces the chance of human error, and ensures your research is reproducible.

To do this, go back into your terminal; you don't need to open a new terminal or close FEAT.

The setup for each analysis is saved as a specific file, the design.fsf file within the FEAT output directory. We can see this by opening the design.fsf file for sub1:

pwd # make sure you are in your SBC directory e.g., blah/xxx/SBC\ncd sub1.feat\ncat design.fsf\n

FEAT acts as a large 'function' with its many variables corresponding to the options that we choose when setting it up in the GUI. We just need to change three of these (the three mentioned above). In the design.fsf file this corresponds to:

set fmri(outputdir) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1\"\nset feat_files(1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/\"\nset fmri(custom1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt\"\n
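One way a script like run_feat.sh could work is to take sub1's design.fsf and substitute the subject name throughout; this is a hypothetical sketch of that idea only, and the actual script may differ. The snippet mocks up the three relevant lines and performs the substitution with sed:

```shell
# Mock up the three lines of sub1's design.fsf shown above
# (xxx stands for your ADF username, as in the worksheet)
mkdir -p sub1.feat
cat > sub1.feat/design.fsf <<'EOF'
set fmri(outputdir) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1"
set feat_files(1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/"
set fmri(custom1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt"
EOF

# Generate a design file for each remaining subject by swapping the name
for s in sub2 sub3; do
    sed "s/sub1/${s}/g" sub1.feat/design.fsf > "design_${s}.fsf"
    # feat design_${s}.fsf   # FEAT accepts a .fsf file on the command line
done
grep custom1 design_sub2.fsf
```

The grep shows the custom1 line now pointing at sub2's sub2_PCC.txt, with the output and input paths changed in the same way.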

To run the script, please copy the run_feat.sh script into your own SBC directory:

cd ..\npwd # make sure you are in your SBC directory\ncp /rds/projects/c/chechlmy-chbh-mricn/axs2210/SBC/run_feat.sh .\n

Viewing the script

If you would like, you can have a look at the script yourself by typing cat run_feat.sh

The first line #!/bin/bash is always needed to run bash scripts. The rest of the code just replaces the 3 things we wanted to change for the defined subjects, sub2 and sub3.

Run the code (from your SBC directory) by typing bash run_feat.sh. (It will ask you for your University account name; this is your ADF username (axs2210 for me)).

The script should take about 5-10 minutes to run on BEAR.

After it has finished running, have a look at the report.html file for both directories, they should look like this:

sub2

sub3

"},{"location":"workshop8/functional-connectivity/#group-level-analysis","title":"Group-level analysisExamining the output","text":"

Ok, so now that we have our FEAT directories for all three subjects, we can run the group level analysis. Close FEAT and open a new FEAT by running Feat & in your SBC directory.

Here are instructions on how to setup the group-level FEAT:

Data

  1. Change 'First-level analysis' to 'Higher-level analysis'
  2. Keep the default option for 'Inputs are lower-level FEAT directories'.
  3. Keep the 'Number of inputs' as 3.
  4. Click the 'Select FEAT directories'. Click the yellow folder on the right to select the FEAT folder that you had generated from each first-level analysis.

Your window should look like this (before closing the 'Input' window):

\u00a0\u00a0\u00a0\u00a05. Keep 'Use lower-level COPEs' ticked.

\u00a0\u00a0\u00a0\u00a06. In 'Output directory' stay in your current directory (SBC), and in the bottom bar, type in PCC_group at the end of the file path.

Don't worry about it being empty, FSL will fill out the file path for us.

If you click the folder again, it should look similar to this (with your ADF username instead of axs2210):

Stats

  1. Leave 'Mixed effects: FLAME 1' selected and click 'Full model setup'.
  2. In the 'General Linear Model' window, name the model 'PCC' and make sure the 'EVs' are all 1s.

The interface should look like this:

After that, click 'Done' and close the GLM design matrix that pops up (you don't need to change anything in the 'Contrasts and F-tests' tab).

Post-stats

  1. Change the Z-threshold from 3.1 to 2.3.

Lowering our statistical threshold

Why do you think we are lowering this to 2.3 in our analysis instead of keeping it at 3.1? The reason is because we only have three subjects, we want to be relatively lenient with our threshold value, otherwise we might not see any activation at all! For group-level analyses with more subjects, we would be more strict.

Click 'Go' to run!

This should only take about 2-3 minutes.

While this is running, you can load the report.html through the file browser as you did for the individual subjects.

Click on the 'Results' tab, and then on 'Lower-level contrast 1 (PCC)'. When the analysis has finished, your results should look like this:

These are voxels demonstrating significant functional connectivity with the PCC at a group-level (Z > 2.3).

So, we have just run our group-level analysis. Let's have a closer look at the output data.

Close FEAT and your terminal, open a new terminal, go to your SBC directory and open FSLeyes:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n

In FSLeyes, open up the standard brain (Navigate to the top menu and click on 'File \u279c Add standard' and select MNI152_T1_2mm_brain.nii.gz).

Then add in our contrast image (File \u279c Add from file, and then go into the PCC_group.gfeat and then into cope1.feat and open the file thresh_zstat1.nii.gz).

When opened, change the colour to 'Red-Yellow' and the 'Minimum' up to 2.3 (The max should be around 3.12). If you set the voxel location to [42, 39, 52] your screen should look like this:

This is the map that we saw in the report.html file. In fact we can double check this by changing the voxel co-ordinates to [45, 38, 46].

Our thresholded image in FSLeyes

The FEAT output

Our image matches the one on the far right below:

"},{"location":"workshop8/functional-connectivity/#bonus-identifying-regions-of-interest-with-atlases-and-neurosynth","title":"Bonus: Identifying regions of interest with atlases and Neurosynth","text":"

So we know which voxels demonstrate significant correlation with the PCC, but what region(s) of the brain are they located in?

Let's go through two ways in which we can work this out.

Firstly, as you have already done in the course, we can simply overlay an atlas on the image and see which regions the activated voxels fall under.

To do this:

  1. Navigate to the top menu and click on 'Settings \u279c Ortho View 1 \u279c Atlases'.
  2. Then at the bottom middle of the window, select the 'Harvard-Oxford Cortical Structural Atlas' and on the window directly next to it on the right, click 'Show/Hide'.
  3. The atlas should have loaded up but is blocking the voxels. Change the 'Opacity' to about a quarter.

By having a look at the 'Location' window (bottom left) we can now see that significant voxels of activity are mainly found in the:

Right superior lateral occipital cortex

Posterior cingulate cortex (PCC) / precuneus

Alternatively, we can use Neurosynth, a website that reports the resting-state functional connectivity of any voxel location or brain region. It does this by performing a meta-analysis across brain imaging studies that report results associated with your voxel/region of interest.

About Neurosynth

While Neurosynth has been superseded by Neurosynth Compose, we will use the original Neurosynth in this tutorial.

If you click the following link, you will see regions demonstrating significant connectivity with the posterior cingulate.

If you type [46, -70, 32] as the co-ordinates in Neurosynth, and then into the MNI co-ordinates section in FSLeyes (not into the voxel location, because Neurosynth works in MNI space), you can see that in both cases the right superior lateral occipital cortex is highlighted.
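The distinction between MNI (mm) co-ordinates and voxel indices is just an affine transform stored in the image header. A minimal sketch of the conversion for the 2mm MNI152 template (assuming NumPy, and assuming the standard affine that FSL ships with MNI152_T1_2mm; you can check your own copy with fslhd):

```python
import numpy as np

# Assumed affine of FSL's MNI152_T1_2mm template: 2mm voxels,
# x flipped, origin (0, 0, 0 mm) at voxel [45, 63, 36].
affine = np.array([
    [-2.0, 0.0, 0.0,   90.0],
    [ 0.0, 2.0, 0.0, -126.0],
    [ 0.0, 0.0, 2.0,  -72.0],
    [ 0.0, 0.0, 0.0,    1.0],
])

def mni_to_voxel(mni_xyz):
    """Convert MNI (mm) co-ordinates to voxel indices in the 2mm template."""
    mni = np.append(np.asarray(mni_xyz, dtype=float), 1.0)  # homogeneous coords
    vox = np.linalg.inv(affine) @ mni
    return np.round(vox[:3]).astype(int)

print(mni_to_voxel([46, -70, 32]))  # the Neurosynth co-ordinate above -> [22 28 52]
```

This is exactly what FSLeyes does internally when you type into the MNI co-ordinates box rather than the voxel location box.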

Image orientation

Note that the orientations of left and right are different between Neurosynth and FSLeyes!

Neurosynth

FSLeyes

This is a great result given that we only have three subjects!

"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index dc96897..426a2fb 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,98 +2,98 @@ https://chbh-opensource.github.io/mri-on-bear-edu/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/contributors/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/resources/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/setting-up/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop1/intro-to-bluebear/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop1/intro-to-linux/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop1/workshop1-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop2/mri-data-formats/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop2/visualizing-mri-data/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop2/workshop2-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop3/diffusion-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop3/diffusion-mri-analysis/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop3/workshop3-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop4/probabilistic-tractography/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop4/workshop4-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop5/first-level-analysis/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop5/preprocessing/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop5/workshop5-intro/ - 2025-01-23 + 2025-01-27 
https://chbh-opensource.github.io/mri-on-bear-edu/workshop6/running-containers/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop6/workshop6-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop7/advanced-fmri-tools/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop7/higher-level-analysis/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop7/workshop7-intro/ - 2025-01-23 + 2025-01-27 https://chbh-opensource.github.io/mri-on-bear-edu/workshop8/functional-connectivity/ - 2025-01-23 + 2025-01-27 \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index c028fdd..5881ed3 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ diff --git a/stylesheets/extra.css b/stylesheets/extra.css index 885a207..c696dfb 100644 --- a/stylesheets/extra.css +++ b/stylesheets/extra.css @@ -2,6 +2,10 @@ padding-bottom: 4rem; /* Add space for the navigation buttons */ } +.md-main__inner { + padding-bottom: 14rem; /* Fix problematic toolbar issue for last page */ +} + .md-footer-nav { background-color: white; color: var(--md-primary-fg-color); diff --git a/workshop1/intro-to-linux/index.html b/workshop1/intro-to-linux/index.html index 0220671..7b0654f 100644 --- a/workshop1/intro-to-linux/index.html +++ b/workshop1/intro-to-linux/index.html @@ -1502,7 +1502,6 @@

Opening FSL on the BlueBEAR GUI

Iowa State University also has a course introducing users to UNIX. -

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 01 workshop materials.

diff --git a/workshop1/workshop1-intro/index.html b/workshop1/workshop1-intro/index.html index 4dcef4c..496b05f 100644 --- a/workshop1/workshop1-intro/index.html +++ b/workshop1/workshop1-intro/index.html @@ -1159,7 +1159,6 @@

Workshop 1 - Introduction

Pre-requisites for the workshop

Please ensure that you have completed the 'Setting Up' section of this course, as you will require access to the BEAR Portal for this workshop.

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 01 workshop materials.

diff --git a/workshop2/workshop2-intro/index.html b/workshop2/workshop2-intro/index.html index 0527576..bdeae5d 100644 --- a/workshop2/workshop2-intro/index.html +++ b/workshop2/workshop2-intro/index.html @@ -1163,7 +1163,6 @@

Workshop

You have already been given access to the RDS project, rds/projects/c/chechlmy-chbh-mricn. Inside the module’s RDS project, you will find that you have a folder labelled xxx (where xxx is your University of Birmingham ADF username).

If you navigate to that folder (rds/projects/c/chechlmy-chbh-mricn/xxx), you will be able to perform the various file operations from there during workshops.

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 02 workshop materials.

diff --git a/workshop3/workshop3-intro/index.html b/workshop3/workshop3-intro/index.html index 2ba8d05..1bf515e 100644 --- a/workshop3/workshop3-intro/index.html +++ b/workshop3/workshop3-intro/index.html @@ -1157,7 +1157,6 @@

Workshop 3 - Basic diffusion MR

We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the readings assigned on Canvas; please check there or, if you are still unclear, feel free to ask.

Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 03 workshop materials.

diff --git a/workshop4/workshop4-intro/index.html b/workshop4/workshop4-intro/index.html index 58b584c..2756907 100644 --- a/workshop4/workshop4-intro/index.html +++ b/workshop4/workshop4-intro/index.html @@ -1167,8 +1167,6 @@

Example of Diffusion MRI analysis pipeline
< TBSS Pipeline

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 04 workshop materials.

- diff --git a/workshop5/workshop5-intro/index.html b/workshop5/workshop5-intro/index.html index 30e5896..86de583 100644 --- a/workshop5/workshop5-intro/index.html +++ b/workshop5/workshop5-intro/index.html @@ -1161,7 +1161,6 @@

Workshop 5 - First-level fMRI anal

We will not go into details as to why and how specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the resource list readings; please check there or, if you are still unclear, feel free to ask. We will explore some general examples. Note that for your own projects you are very likely to want to change some of these settings/parameters depending on your study aims and design.

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 05 workshop materials.

diff --git a/workshop6/workshop6-intro/index.html b/workshop6/workshop6-intro/index.html index ef765bb..b708034 100644 --- a/workshop6/workshop6-intro/index.html +++ b/workshop6/workshop6-intro/index.html @@ -1157,7 +1157,6 @@

More information

The BEAR Technical Docs provides guidance on submitting jobs to the cluster.

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 06 workshop materials.

diff --git a/workshop7/workshop7-intro/index.html b/workshop7/workshop7-intro/index.html index 6c55675..a867c62 100644 --- a/workshop7/workshop7-intro/index.html +++ b/workshop7/workshop7-intro/index.html @@ -1161,7 +1161,6 @@

Workshop 7 - Higher-level fMRI an

More information

As always, you can also find more information on running higher-level fMRI analyses on the FSL website.

-

A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 07 workshop materials.