#empirical #rate4
- [[representational similarity analysis]] of activation patterns is used to study brain representations
- [[dissimilarity measures]]: which measure to use?
- [[functional magnetic resonance imaging]] vs [[EEG and ERPs]]: representational space vs temporal dynamics
- [[split-half reliability]]
- [[quantifying similarity or dissimilarity between RSA patterns]]
- [[multivariate pattern analysis]], [[multivariate and univariate approaches]]
- [[Guggenmos 2018 multivariate pattern analysis for MEG comparison of dissimilarity measures]]
# Idea
There are different ways to measure pattern dissimilarity, and little is known about their relative reliability. The authors compared three types of measures: [[RSA dissimilarity measure - classification accuracy|classification accuracy]], [[Euclidean distance as a RSA dissimilarity measure|Euclidean distance]] or [[RSA dissimilarity measure - Mahalanobis distance|Mahalanobis distance]], and [[RSA dissimilarity measure - Pearson correlation distance|Pearson correlation distance]].
## Key results
Continuous dissimilarity measures are more reliable than classification accuracy. Reliability of all measures can be increased through multivariate noise normalization. Cross-validated distances provide unbiased estimates of pattern dissimilarity on a ratio scale, with an interpretable zero point. The cross-validated [[Mahalanobis distance]] is preferable overall.
## Ideal properties of dissimilarity measures
A meaningful zero point, where zero indicates that the two patterns are not different. However, distances are by definition non-negative and are always larger than zero when estimated from noisy data. This positive bias can be removed by [[cross-validation (in- vs out-of-sampling testing)]]: cross-validated distance estimators are unbiased, so their expected value equals the true distance and is zero when the two patterns do not differ. This makes ratios between cross-validated distances interpretable.
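A minimal sketch of why the bias disappears, in my own notation (not copied from the paper): estimate the pattern difference between two conditions in two independent data partitions $A$ and $B$, and take the inner product of the two estimates. Because the noise terms of the two partitions are zero-mean and independent of each other, all cross terms vanish in expectation.

```latex
% \hat{\delta}^{(A)} = \delta + \epsilon^{(A)}, \qquad \hat{\delta}^{(B)} = \delta + \epsilon^{(B)}
\hat{d}^{\,2}_{\mathrm{cv}} = \hat{\delta}^{(A)\top} \hat{\delta}^{(B)}
\qquad
\mathbb{E}\big[\hat{d}^{\,2}_{\mathrm{cv}}\big]
  = \delta^{\top}\delta
  + \delta^{\top}\mathbb{E}\big[\epsilon^{(B)}\big]
  + \mathbb{E}\big[\epsilon^{(A)}\big]^{\top}\delta
  + \mathbb{E}\big[\epsilon^{(A)\top}\epsilon^{(B)}\big]
  = \delta^{\top}\delta
```

In particular, when the true distance is zero the estimate averages to zero (individual estimates can even be negative), which is what gives the ratio scale.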
## Mean pattern subtraction (cocktail-blank removal)
Cocktail-blank removal: subtract the mean pattern (i.e., the mean across conditions, computed for each voxel) from each response pattern. Removing the mean pattern is very different from removing the mean value (i.e., the mean of each condition's pattern, averaged across voxels). **Mean pattern subtraction moves the origin of the pattern space to the mean pattern of all conditions.**
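A minimal numpy sketch of the two operations on a conditions x voxels activation matrix (variable names are my own), to make the distinction concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.normal(size=(8, 200))   # conditions x voxels (toy data)

# Mean pattern subtraction (cocktail-blank removal):
# remove, for each voxel, the mean across conditions.
mean_pattern = patterns.mean(axis=0, keepdims=True)    # shape (1, n_voxels)
patterns_cocktail = patterns - mean_pattern

# Mean value subtraction:
# remove, for each condition, its mean across voxels.
mean_value = patterns.mean(axis=1, keepdims=True)      # shape (n_conditions, 1)
patterns_demeaned = patterns - mean_value
```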
## Univariate and multivariate noise normalization
Noise is often spatially and temporally correlated. One option is univariate noise normalization: divide each voxel's response by the standard deviation of its residuals, which down-weights noisier voxels. The second option is multivariate noise normalization: not only suppress voxels with high error variance, but also take into account the noise covariance between voxels, which results in spatial pre-whitening of the regression coefficients (see the sketch after the list below).
Benefits of multivariate noise normalization:
- the noise components of the voxel response patterns become approximately independent and identically distributed
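A minimal sketch of both normalizations, assuming GLM residuals are available as a time x voxels array; the shrinkage covariance estimator is my own choice here, standing in for the paper's regularized estimate of the noise covariance:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def univariate_normalize(betas, residuals):
    """Divide each voxel's beta estimates by the std of its GLM residuals."""
    return betas / residuals.std(axis=0, ddof=1)

def multivariate_normalize(betas, residuals):
    """Spatially pre-whiten betas with a shrinkage estimate of the noise covariance."""
    sigma = LedoitWolf().fit(residuals).covariance_        # voxels x voxels
    evals, evecs = np.linalg.eigh(sigma)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T    # Sigma^(-1/2)
    return betas @ whitener
```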
## Cross-validation produces unbiased estimates
![[Pasted image 135.png]]
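A small simulation (my own, not from the paper) illustrating the point of this section: when two conditions have identical true patterns, the naive squared Euclidean distance between their noisy estimates is positive on average, while the cross-validated version (inner products of pattern differences from independent runs) averages to roughly zero.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_runs, n_voxels, noise_sd = 10, 100, 1.0
true_pattern = rng.normal(size=n_voxels)       # identical for both conditions

# Run-wise noisy estimates of each condition's pattern
est1 = true_pattern + noise_sd * rng.normal(size=(n_runs, n_voxels))
est2 = true_pattern + noise_sd * rng.normal(size=(n_runs, n_voxels))
diffs = est1 - est2                            # run-wise pattern differences

# Naive distance: squared norm of the mean difference (positively biased)
naive = np.sum(diffs.mean(axis=0) ** 2)

# Cross-validated distance: inner products across independent runs
crossval = np.mean([diffs[a] @ diffs[b] for a, b in combinations(range(n_runs), 2)])

print(f"naive: {naive:.3f}, cross-validated: {crossval:.3f}")  # crossval ~ 0
```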
## Pattern classifiers
Classifiers: [[Fisher linear discriminant analysis]], [[linear support vector machines]]
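A minimal sketch (my own, using scikit-learn) of classification accuracy as a pairwise dissimilarity: cross-validated decoding accuracy between two conditions, where chance level 0.5 corresponds to "not different".

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classification_dissimilarity(patterns_a, patterns_b, n_folds=5):
    """Cross-validated LDA accuracy for discriminating two conditions.

    patterns_a, patterns_b: runs x voxels arrays of response patterns.
    Returns mean accuracy; 0.5 indicates indistinguishable patterns.
    """
    X = np.vstack([patterns_a, patterns_b])
    y = np.r_[np.zeros(len(patterns_a)), np.ones(len(patterns_b))]
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    return cross_val_score(clf, X, y, cv=n_folds).mean()
```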
## RDM reliability analysis
Good [[dissimilarity measures]] must be reliable. The authors evaluated [[reliability]] of the RDMs with [[split-half reliability]]. Four measures of reliability were used: Spearman correlation, Pearson correlation, Pearson correlation with fixed intercept, and one minus the proportion of residual sum-of-squares.
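A minimal sketch (my own) of split-half RDM reliability: build an RDM from each half of the runs with a chosen dissimilarity (Euclidean here), then correlate their lower triangles. Spearman correlation is shown; the other three reliability measures compare the same two vectors in different ways.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def split_half_rdm_reliability(patterns, rng=None):
    """patterns: runs x conditions x voxels. Returns Spearman split-half reliability."""
    if rng is None:
        rng = np.random.default_rng()
    runs = rng.permutation(patterns.shape[0])
    half1, half2 = np.array_split(runs, 2)
    # RDM per half: pairwise distances between condition-mean patterns
    rdm1 = pdist(patterns[half1].mean(axis=0), metric="euclidean")
    rdm2 = pdist(patterns[half2].mean(axis=0), metric="euclidean")
    rho, _ = spearmanr(rdm1, rdm2)
    return rho
```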
![[Pasted image 141.png]]
## fMRI datasets
![[Pasted image 136.png]]
## RDM split-half reliability results
- Multivariate noise normalization improved RDM reliability of all measures over univariately normalized patterns.
- Euclidean and correlation distance are similarly reliable.
- Cross-validation improves rather than impairs RDM reliability.
- Continuous distance measures are more reliable and informative than classification accuracy (Fig 7).
- Correlation distance is sensitive to univariate activation differences.
> p193. All distances and classifiers were applied to the response patterns after no, univariate or multivariate noise normalization. In summary, our results suggest that a) multivariate noise normalization improves the reliability of all dissimilarity measures; b) Euclidean and correlation distance are not significantly different in RDM reliability. However, in the presence of category-selective univariate activation the correlation distance tends to be numerically more reliable; c) crossvalidated distances do not lead to decreased reliability as compared to their noncrossvalidated counterparts; d) discretized classification accuracies are a significantly less reliable dissimilarity measure than continuous distances.
> Together, these results clearly show that normalizing by the estimate of the full noise covariance Σ stabilizes the distance estimates more effectively than univariate normalization.
![[Pasted image 137.png]]
![[Pasted image 138.png]]
![[Pasted image 139.png]]
![[Pasted image 140.png]]
![[Pasted image 142.png]]
# References
- [[Guggenmos 2018 multivariate pattern analysis for MEG comparison of dissimilarity measures]]: MEG version of this study