The Translational Neuroimaging Laboratory – Michael L. Lipton, M.D., Ph.D.

Frequently Asked Questions:


In every (or most) other book, there's a sentence "lower-energy state is slightly favored, so that there are more protons spinning parallel than anti-parallel" - why is it different in this video (more anti-parallel than parallel)?

Answer: It is true that in a hypothetical system where we observe the behavior of hydrogen nuclei (spins) in the presence of an externally applied magnetic field (B0), the spins will be more likely to assume the same orientation as B0. Thus, the net magnetization (i.e., the NMV), absent any other perturbation, will have the same orientation as B0. This, I believe, is what you were all expecting. However, this scenario pertains ONLY in the absence of other effects on the magnetic field B0. Specifically, in a diamagnetic environment (i.e., where the other material in the sample, aside from spins, has magnetic susceptibility < 0), the resting state is altered so that the preferred (lower-energy) orientation is opposite ("antiparallel") to B0. Biological tissues in general, and human beings in particular, are highly diamagnetic environments. Thus, in a real-life clinical imaging scenario, the resting NMV will have an antiparallel orientation.


Explain the color coding on the type of fMRI Activation images that you showed as the visual cortex example.  

Answer: The color scale reflects t-values in the image I displayed. It could reflect any statistic comparing the signal and stimulus paradigms. 


In the context of the inversion recovery section, you talked about the timing of the RF pulse application. My question, from the graph shown below (similar to the one you had drawn during discussion), is: in terms of signal intensity, what would the points/tissues 1, 2 and 3 look like in a grayscale image? If 2 looks completely black and 1 shows a decent longitudinal recovery and eventually a greater transverse magnetization after the flip, I was not sure what 3 would look like when apparently it has the same magnitude as 1 but in the opposite direction!

Answer: Tissues 1 and 3 would have almost equal signal, as it is the magnitude of Mz which matters. Regardless of its orientation (up or down), Mz is rotated into the transverse plane, producing Mt, when the 90° RF pulse is applied.



I was wondering if you could provide an explanation as to why Fat Sat 3D SPGR would be useful for distinguishing a cartilage lesion from bursal fluid. Is it because the flip angle (Ernst angle) can be adjusted so that cartilage would show high signal and fluid would show low signal (water suppression and fat suppression)? If that is the case, what is the difference between free water (in a bursa or synovium) and water inside cartilage (which has type 2 collagen)?

Answer: This is because we face a dilemma every time we see an osteochondroma with a thick-appearing cap of high signal on T2-weighted images and have to decide whether it is truly a thick cap (suggesting malignant transformation) or a thin cap with bursal fluid formation (due to irritation).

Adjusting the flip angle will not create tissue-specific suppression, just modulation of T1-based contrast. Thus, a change in the contrast between two tissues, such as the fluid and cartilage you are interested in, on an FSPGR image where the only change is the flip angle, would be due to the differences in the T1 of the tissues. T1 is a function of the efficiency of energy transfer from 1H nuclei to the "lattice", which is all of the other stuff in the tissue. Since synovial fluid and cartilage have dramatically different compositions, with much more molecular density and complexity in cartilage, the T1 of cartilage would be much shorter and it should thus appear darker than synovial fluid. This, of course, depends on the nature of the synovial fluid (e.g., purulence).
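As an aside on the Ernst angle mentioned in the question: it is the flip angle that maximizes steady-state SPGR signal for a given TR and T1, satisfying cos(αE) = exp(−TR/T1), so it differs between tissues with different T1 — which is why changing the flip angle modulates T1-based contrast. A minimal sketch (the TR and T1 values are arbitrary illustrative numbers, not measured tissue values):

```python
import math

def ernst_angle_deg(tr, t1):
    """Ernst angle: the flip angle maximizing steady-state SPGR signal for given TR/T1."""
    return math.degrees(math.acos(math.exp(-tr / t1)))

tr = 30.0  # ms -- hypothetical repetition time
for name, t1 in [("shorter-T1 tissue", 1000.0), ("longer-T1 tissue", 3000.0)]:
    print(f"{name} (T1 = {t1:.0f} ms): Ernst angle ~ {ernst_angle_deg(tr, t1):.1f} degrees")
```

The shorter-T1 tissue peaks at a larger flip angle, so a given flip-angle choice favors one tissue's signal over the other's.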


Specifically, in the context of GRE EPI sequences in fMRI, I have often wondered about the following: If EPI images can be acquired in as little as 2 ms (as you mention in the course), then why do we do whole-brain imaging at TRs of around 2 seconds? In other words, why is it not possible to acquire a whole brain with 30 slices in 60 ms using EPI? Given these 2 s TRs, it seems that we need to wait around 50 ms after the acquisition of each slice before we can acquire the next slice. I am not sure why we need to do this. Is it just to make sure that the signal is completely gone before we initiate the next RF pulse and measure the signal from the next slice? So is the delay there to prevent mixing of signals due to the T2* decay of the signal?

Answer: Saturation, as you suggest, might be a concern, but since slice data acquisition is interleaved and each slice is sampled within a single TR, this is not likely to be the issue. The limitation is typically hardware speed. The scanners will not support faster acquisition due to gradient duty cycle limits.   


When discussing gradients inherent in the system, I know that along the z-axis the gradient field acts in opposite directions on either side, having zero impact at isocenter. A question is posed about the other gradients. Am I right in thinking the gradients along the x- and y-axes also act in opposite directions, with the same result that isocenter is not affected?

Answer: ALL three gradient magnetic fields have zero amplitude at isocenter, such that the net magnetic field strength at isocenter is always = B0, REGARDLESS of how much amplitude any of the gradient magnetic fields have.


Do the non-Z direction gradient magnetic fields alter the orientation of B0, even to a tiny extent?  

Answer: No, because the gradients along directions orthogonal to Z employ magnetic fields applied equidistant from isocenter, but with orientations parallel to Z. Thus, the net gradient magnetic field at any location is a vector parallel to Z and the vector sum of B0 and the net gradient magnetic field(s) has the same orientation as B0.   
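The two answers above can be sketched numerically: every gradient contributes a z-oriented field that varies linearly with position, so the net field is always parallel to z and equals B0 exactly at isocenter. The field and gradient amplitudes below are hypothetical illustrative numbers:

```python
B0 = 1.5                        # tesla, main field, oriented along z
Gx, Gy, Gz = 0.04, 0.02, 0.03   # T/m, gradient amplitudes (hypothetical values)

def net_field(x, y, z):
    """Net field vector (Bx, By, Bz) at location (x, y, z) in meters.

    Each gradient contributes a z-oriented field whose magnitude varies
    linearly with position along that gradient's encoding axis.
    """
    return (0.0, 0.0, B0 + Gx * x + Gy * y + Gz * z)

print(net_field(0.0, 0.0, 0.0))    # isocenter: (0.0, 0.0, 1.5), i.e., exactly B0
print(net_field(0.1, -0.05, 0.2))  # off-center: magnitude differs, but still purely z-oriented
```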


Looking at several sources, it seems T2 is the time it takes to drop down to 37% (which means losing 63%) of what your initial magnetization was. The video says we lose 37% (which would imply we have 63% of our initial magnetization left over). Also, just for completeness, an earlier video in this series states the net magnetization vector (NMV) of spins due to the external magnetic field (B0) is in the anti-parallel direction. Every other source I've looked at has said the NMV is actually in the parallel direction.

Answer: You are correct that the time T2 is the time during which 63% of the net Mt dissipates. That is, after one time period = T2, 37% of the Mt that was present initially remains. This is, in fact, how it is stated in my book. I am not sure what caused me to misspeak in this segment (cosmic ray, stage fright, full moon...), but in any case I apologize for the confusion. I am glad to see you caught me!
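The 63%/37% relationship follows directly from the exponential decay Mt(t) = Mt(0)·e^(−t/T2): at t = T2, a fraction e^(−1) ≈ 36.8% of the initial Mt remains. A minimal sketch (the T2 value is an arbitrary illustrative number):

```python
import math

def mt_remaining(t, t2, mt0=1.0):
    """Transverse magnetization remaining after time t, given decay constant T2."""
    return mt0 * math.exp(-t / t2)

# After one period equal to T2, e^(-1) ~ 36.8% of the initial Mt remains,
# i.e., ~63.2% has dissipated.
t2 = 80.0  # ms, a hypothetical tissue T2
remaining = mt_remaining(t2, t2)
print(f"Fraction remaining at t = T2: {remaining:.3f}")  # -> 0.368
```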


Once you enact the phase encoding gradient and then sample at TE with the frequency encoding gradient, does the sample/slice/slab have discrete spin identifiers at each identifiable location along each column and row? I.e., row 1 has a specific phase, but frequency identification changes along the row. Row 2 would have a different phase identity than row 1 (or any other row for that matter) and the frequency identification changes along the row. This would be the same for the entire sample. Why do we need to repeat the phase encoding process when it seems as though enough information is present after doing it one time?

Answer: This is something many people struggle with. First, your premises are correct; the reason that a single application of the phase-encode gradient is not sufficient is NOT because phase does not vary across this dimension each time we apply the gradient. It is because we need a way to detect the phase difference. This is done with the Fourier transform, which requires multiple samples of the MR signal that differ in the way the signal is encoded along the dimension we are trying to discriminate.
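A toy numerical sketch of this point (the positions, densities and 4-step DFT are illustrative, not an actual pulse sequence): each phase-encode step yields only one sum over all positions, so a single step cannot separate them, but N differently encoded samples let the (inverse) Fourier transform recover the signal at each position:

```python
import cmath

N = 4                           # number of phase-encode steps
density = [3.0, 0.0, 1.0, 0.0]  # spin density at 4 positions along the PE axis

# Each phase-encode step k imposes a different linear phase ramp across
# position y; the coil records only the SUM over all positions (one sample).
signal = []
for k in range(N):
    s = sum(density[y] * cmath.exp(-2j * cmath.pi * k * y / N) for y in range(N))
    signal.append(s)

# A single step (k = 0) yields only the total signal -- positions are indistinguishable.
print(abs(signal[0]))  # 4.0

# With all N differently encoded samples, an inverse DFT separates the positions.
recovered = [abs(sum(signal[k] * cmath.exp(2j * cmath.pi * k * y / N)
                     for k in range(N)) / N) for y in range(N)]
print([round(v, 6) for v in recovered])  # [3.0, 0.0, 1.0, 0.0]
```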


How are the k-space data (real part, imaginary part, amplitude, frequency and phase) related to the RF signal (amplitude, frequency and phase) response coming out of the tissues?   

Answer: The MR signal is a time-varying magnetic field, which has amplitude, frequency and phase and induces a time-varying electrical field in the receiver coil, which also has amplitude, frequency and phase. This analog signal is sampled digitally over time using an A2D (analog-to-digital converter). Thus, while the overall signal does have amplitude, frequency and phase, each digital sample is simply a measure of amplitude collected at a point in time. Actually, the signal recorded is the envelope of amplitude present over the period of time during which we record one sample (i.e., Ts). This measure of signal amplitude is written to a point in memory and the points in memory are in time sequence. This is called the time-domain data. As I repeatedly emphasize in the videos, each sample derives from the entire MR signal, which arises from the entire slice we have excited. Thus, none of these samples corresponds to any specific spatial location in the slice. Spatial information must be extracted by the Fourier transform.


When the signal is sampled using two coils (e.g., a quadrature coil to improve SNR), we actually have two signals, which are phase shifted. These are traditionally referred to as the "real" and "imaginary" components and their vector sum is the magnitude of the net MR signal. This magnitude signal is what is written into each point in k-space and, consequently, there is one point in k-space for each sample recorded in the time-domain data. Thus, the k-space samples could be plotted to approximate the frequency and phase of the original analog signal. Note that any given data point in k-space does not itself contain frequency or phase information, only amplitude. In addition to the combination of component (e.g., real and imaginary) signals, other processing, such as filtering, may be applied to the MR signal before k-space takes the form on which we apply the Fourier transform.


Lastly, the phase of the MR signal can be computed from the two components (real and imaginary) to quantify the phase of the signal. If this information is entered into k-space (i.e., the value recorded in k-space is the computed phase), an image can be created that reflects phase of the MR signal at each voxel.
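Per sample, the combination and phase computation described above reduce to simple vector arithmetic on the two components (the component values below are illustrative):

```python
import math

real, imag = 3.0, 4.0  # illustrative component amplitudes for one sample

magnitude = math.hypot(real, imag)  # vector sum of the two components -> magnitude data
phase = math.atan2(imag, real)      # phase of the signal, in radians -> phase data

print(magnitude)                      # 5.0
print(round(math.degrees(phase), 2))  # 53.13
```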


For an excellent summary, see Allen Elster’s discussion here.  



I came across a practice question that said that decreasing the receiver bandwidth increases imaging time (in a similar way to NEX, TR and the number of phase encoding steps; it was not referring to sampling time). Is this true, and if so can you please explain it?

Answer: This really depends on the pulse sequence. Decreasing the BW by definition means that the time for each sample (Ts) increases. As a result, for the same number of samples (i.e., "frequency encoding steps") the overall time to sample a line of k-space increases. This can impact the shortest achievable TE because the time between excitation and the center of the sampling period cannot be made as short as with a higher BW (i.e., shorter Ts). In most applications this does not impact overall acquisition time because the TE is so much shorter than TR. In very short TR scenarios, such as fast GRE, SSFP or single-shot acquisitions, it is possible that the shortest achievable TR might increase with a decrease in BW. This is a matter of how much can be crammed into the time between one excitation and the next (i.e., TR).
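The Ts/BW relationship above can be sketched numerically. Halving the bandwidth doubles the time per sample and thus doubles the readout duration for the same number of samples. The sample count and bandwidths are illustrative, and "BW" here means the total receiver bandwidth in Hz (vendor conventions differ, e.g., Hz/pixel or ±kHz):

```python
nx = 256  # frequency-encoding samples per line of k-space (illustrative)

def readout_ms(bw_hz, n_samples=nx):
    """Time to sample one k-space line: n_samples * Ts, where Ts = 1 / BW."""
    return n_samples * (1.0 / bw_hz) * 1000.0

for bw in (128_000, 64_000, 32_000):
    print(f"BW = {bw / 1000:.0f} kHz -> readout time {readout_ms(bw):.1f} ms")
```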


In the course, you say that there are slightly more spins antiparallel to the static magnetic field when there is complete longitudinal relaxation. I have seen elsewhere that it is the parallel orientation that has the lower energy state and is therefore the orientation of the few extra spins. Can you please clarify?    

Answer: This is a common point of confusion. In an idealized scenario, where we hypothetically observe the behavior of pure 1H nuclei with no gradients of magnetic susceptibility, the case described by what you “have seen elsewhere” would pertain. In biological tissues, which are diamagnetic (i.e., magnetic susceptibility is less than 0), however, the case is actually as I explain it. I take this approach as it reflects the reality of clinical MRI. In any case, this is really an esoteric point that should not matter much to your understanding of MRI.   


My impression is that transverse precession causes signals in coils, and those signals somehow get translated to k-space. Is there a dataset out there of raw coil readings?    

Answer: You are correct if you mean that the raw time-domain signal induced in the receive coil is processed in some ways prior to k-space. This typically includes preamplification, filtering and combining of real/imaginary components, among other things. I do not know offhand where you can download sample data of this type, but you might try some of the basic MR research groups such as MGH, Wash U St. Louis or U Minnesota.



When you give a 90-degree pulse, wouldn’t both longitudinal vectors come to 0?    

Answer: If you are asking why the Mz graph on the top does not go to zero at the vertical black line, that is simply because I did not update the upper graph or discuss what happens to Mz. The discussion was focused on the consequences for Mt and I had not bothered to update the upper graph.  Apologies if this was confusing.  



In regards to in-phase and out-of-phase imaging: with this 180 RF pulse, will water and fat proton transverse magnetization rephase instead? Will Dixon's method be used only in multiple gradient echo (MGE) imaging?

Answer: You are correct that in a properly designed spin echo pulse sequence, where sampling occurs at the moment 2*(TE/2), fat and water will be in phase. This is because the fat/water chemical shift IS a T2’ effect that may be compensated by the spin echo. To achieve out of phase images, the timing of TE would have to be altered. GRE is most widely used for Dixon imaging, but spin echo-based methods have also been created.  
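The TE timing mentioned above follows from the fat-water frequency difference: with an assumed chemical shift of ~3.5 ppm at 1.5 T, fat and water precession differ by ~224 Hz, so they cycle in and out of phase every few milliseconds. A sketch (the shift, field strength and gyromagnetic ratio are standard assumed values, not from the text):

```python
ppm_shift = 3.5e-6               # assumed fat-water chemical shift (~3.5 ppm)
larmor_hz = 42.58e6 * 1.5        # 1H Larmor frequency at 1.5 T, ~63.87 MHz
delta_f = ppm_shift * larmor_hz  # fat-water frequency difference, ~223.5 Hz

period_ms = 1000.0 / delta_f     # time for one full fat-water phase cycle, ~4.47 ms
print(f"out-of-phase TEs: {period_ms / 2:.2f} ms, {3 * period_ms / 2:.2f} ms")
print(f"in-phase TEs:     {period_ms:.2f} ms, {2 * period_ms:.2f} ms")
```

This reproduces the familiar ~2.2 ms out-of-phase / ~4.5 ms in-phase echo times at 1.5 T.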



Let's assume we are acquiring images for, let's say, a head and we start a multi-spin echo sequence to acquire a T2 contrast weighted series of images with 10 TEs within each TR. After the sequence is finished we may end up with 12 axial images of the brain generated by that particular sequence. All of the images look the same, as in each one looks to be similarly T2 weighted. Each TE within the TR of that particular sequence acquires information with different contrast as answered above. I suppose the answer may be that the 10 TEs are just averaged out each time for each slice. Is this the right way of understanding this? So we are, indeed, limited on how many TEs we can acquire within a TR, not only because of time, but because of the average weighting we are trying to acquire, right? Too many TEs on either side of a point may dilute and/or overpower a particular optimum T2 contrast weight? I hope I'm at least slightly coherent with my thoughts on this. I suppose this kind of thought process would apply to any (or most) multi-TE sequence, yes?

Answer: In multi-echo imaging AND in multi-slice imaging AND when multi-echo and multi-slice are combined, each TE contributes a single line of k-space to a single image per TR. In the following example:

Excite Slice #1 >> 180 >> TE-a >> 180 >> TE-b | Excite Slice #2 >> 180 >> TE-a >> 180 >> TE-b | …….TR Excite Slice #1….


The above is repeated at TR for the number of phase encoding steps required (Np)


We will generate a single line of data for the following 4 images:


Slice #1/TE-a

Slice #1/TE-b

Slice #2/TE-a

Slice #2/TE-b


Slices 1, 2 with TE-b will represent the same anatomy, at greater T2 contrast, compared to Slices 1 and 2 with TE-a.
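The bookkeeping above can be sketched as a loop: per TR, each (slice, echo) pair receives exactly ONE line of its own k-space matrix, so after Np TRs all four images are complete. The sizes and labels are illustrative:

```python
n_pe = 4                       # phase-encoding steps (Np), illustrative
slices = ["Slice1", "Slice2"]
echoes = ["TE-a", "TE-b"]

# One k-space (list of acquired line indices) per slice/echo image.
kspace = {(s, e): [] for s in slices for e in echoes}

for pe_line in range(n_pe):    # one TR per phase-encoding step
    for s in slices:           # excite each slice within the TR
        for e in echoes:       # record each echo after that excitation
            kspace[(s, e)].append(pe_line)

# After Np TRs, each of the 4 images has a complete set of Np k-space lines.
for key, lines in sorted(kspace.items()):
    print(key, lines)
```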





