#rate3 #empirical
- [[representational similarity analysis]]
- [[cross-validation (in- vs out-of-sampling testing)]]
- [[multivariate pattern analysis]], [[multivariate and univariate approaches]], [[Fisher linear discrimant analysis|LDA]]

# Idea

How do different [[dissimilarity measures]] perform for [[MEG]] [[multivariate pattern analysis]] and [[representational similarity analysis]]? The authors tested different [[dissimilarity measures]] comprising classifiers (LDA, SVM, WeiRD, Gaussian Naive Bayes) and distance measures ([[Euclidean distance]], [[RSA dissimilarity measure - Pearson correlation distance]]).

Multivariate noise normalization improved decoding accuracies and the [[reliability]] of all [[dissimilarity measures]]. LDA, SVM, and WeiRD (weighted robust distance) yielded peak decoding accuracies and nearly identical time courses. The cross-validated [[Euclidean distance]] is a reliable and unbiased default choice for [[representational similarity analysis]]. Decision-value weighting of decoding accuracies is a good choice for classification-based [[representational similarity analysis]].

> p444. We assessed and compared the reliability of dissimilarity measures for representational similarity analysis of MEG data. In brief, we found that 1) multivariate noise normalisation of the data strongly improved the accuracy of classifiers and the reliability of all dissimilarity measures, 2) distances were in general superior to classifiers in terms of pattern reliability, a difference that 3) could be largely ameliorated through decision-value weighting of decoding accuracies, 4) in terms of reliability the Euclidean metric was en par with or better than the Pearson metric when correcting for condition-nonspecific response components, 5) cross-validation provided robust unbiased distance estimates for the Euclidean distance, but came at the cost of slight reliability reductions and was unstable for the Pearson distance, and 6) within-class correction addressed the problematic influence of condition-nonspecific response components on Pearson distances.

## Covariance estimation methods

The covariance matrix $\Sigma$ for multivariate noise normalization was computed using different methods:
- using baseline data
- using the full epoch
- separately for each time point

A [[shrinkage]] transformation was applied to the covariance matrix to address the [[rank deficiency]] problem; shrinkage improved the performance of all normalization methods. The best-performing variants computed the covariance from the full epoch or separately per time point. A minimal code sketch of this normalization step is appended at the end of this note.

## Removing the mean pattern

Cocktail-blank removal was used to remove condition-nonspecific response components. See also [[Walther 2016 reliability of dissimilarity measures for MVPA]].

## Dissimilarity measures

The goal of all [[dissimilarity measures]] is to compute a distance, $d(x,y)$, between each pairwise combination of conditions, where $x$ and $y$ represent the measured activity patterns associated with two experimental conditions.
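The cross-validated [[Euclidean distance]] recommended above is one such $d(x,y)$: an inner product between difference patterns estimated from independent data folds, so that noise terms cancel in expectation. Below is a minimal Python sketch, not the authors' code; the function name, fold scheme, and toy data are illustrative assumptions, and the inputs are assumed to be noise-normalized trial patterns (trials × channels) at a single time point.

```python
# Minimal sketch of the cross-validated (squared) Euclidean distance between
# two conditions. Inputs are assumed to be already noise-normalized.
import numpy as np
from itertools import combinations


def cv_euclidean_distance(x_trials, y_trials, n_folds=5):
    """Cross-validated squared Euclidean distance between two conditions."""
    # Deterministic fold assignment; assumes trials are already in random order.
    x_folds = np.arange(len(x_trials)) % n_folds
    y_folds = np.arange(len(y_trials)) % n_folds

    dists = []
    for a, b in combinations(range(n_folds), 2):
        # Difference patterns estimated from two independent folds.
        diff_a = x_trials[x_folds == a].mean(axis=0) - y_trials[y_folds == a].mean(axis=0)
        diff_b = x_trials[x_folds == b].mean(axis=0) - y_trials[y_folds == b].mean(axis=0)
        # Inner product of independent estimates: the noise terms are
        # uncorrelated, so the estimator is unbiased (and can go negative).
        dists.append(diff_a @ diff_b)
    return float(np.mean(dists))


# Toy usage: two conditions, 40 trials x 64 channels of (pre-whitened) data.
rng = np.random.default_rng(0)
x = rng.normal(size=(40, 64)) + 0.5   # condition 1, small offset on every channel
y = rng.normal(size=(40, 64))         # condition 2
print(cv_euclidean_distance(x, y))    # ~ 64 * 0.5**2 = 16 in expectation
```

The specific measures the paper compared are listed next.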
Measures used:
- decoding accuracy: [[Fisher linear discrimant analysis]], [[linear support vector machines]], [[Gaussian naive Bayes]], [[weighted robust distance]]
- decision-value-weighted decoding accuracy: classifiers' outputs are discretized, which loses information, but classifiers provide some form of internal continuous [[decision value - machine learning classifier output|decision value]] that can be used to address this problem (e.g., the probability of the predicted class minus chance level)

## Cross-validated distances

The problem with non-cross-validated distances is that they capture not just the true underlying distance but also noise (because of overfitting). Cross-validated distances provide unbiased estimates of the true dissimilarity. See also [[Walther 2016 reliability of dissimilarity measures for MVPA]].

![[Pasted image 144.png]]

## Within-class-corrected distances

Within-class correction removes condition-nonspecific responses to provide unbiased distance estimates.

## Reliability measures

I can use this approach to test whether [[representational dissimilarity matrix|representational dissimilarity matrices]] are different, and how different. #idea

![[Pasted image 145.png]]

## Results: decoding accuracy

Multivariate noise normalization is highly recommended because it improves decoding accuracy for all classifiers.

![[Pasted image 146.png]]

## Distance measures

Cocktail-blank removal helped to remove the mean pattern (condition-nonspecific signals) from all conditions (Fig 3C, 3D).

![[Pasted image 147.png]]

## Within-class correction

Within-class correction eliminates condition-nonspecific components from activation patterns. It helps to remove signal and noise components unrelated to the difference between conditions.

![[Pasted image 148.png]]

## Reliability

The paper compared pattern reliability and SSQ reliability.

![[Pasted image 149.png]]

## Multivariate noise normalization improves reliability

![[Pasted image 150.png]]

# References

- [[Walther 2016 reliability of dissimilarity measures for MVPA]]: fMRI version of this study
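For reference, here is a minimal sketch of the multivariate noise normalization step discussed under "Covariance estimation methods" above. It is an illustration under assumed data shapes, not the authors' implementation; the Ledoit-Wolf shrinkage estimator, the within-condition demeaning, and the epoch-based covariance are my own choices for keeping the covariance estimate well-conditioned and signal-free.

```python
# Minimal sketch of multivariate noise normalization ("whitening") with a
# shrinkage covariance estimate. Shapes and estimator choices are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.covariance import LedoitWolf


def noise_normalize(epochs, labels):
    """Whiten MEG epochs of shape (n_trials, n_channels, n_times)."""
    n_channels = epochs.shape[1]
    # Residuals: subtract each condition's mean so the covariance reflects
    # noise rather than stimulus-evoked signal.
    residuals = epochs.copy()
    for c in np.unique(labels):
        residuals[labels == c] -= epochs[labels == c].mean(axis=0)
    # Pool trials and time points ("epoch method" for the covariance) and
    # apply Ledoit-Wolf shrinkage to keep the estimate well-conditioned.
    samples = residuals.transpose(0, 2, 1).reshape(-1, n_channels)
    cov = LedoitWolf().fit(samples).covariance_
    # Sigma^(-1/2) via eigendecomposition (valid because cov is positive definite).
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitener = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    # Apply the whitener to every trial and time point.
    return np.einsum('cd,ndt->nct', whitener, epochs)


# Toy usage: 20 trials x 32 channels x 50 time points, two conditions.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(20, 32, 50))
labels = np.repeat([0, 1], 10)
print(noise_normalize(epochs, labels).shape)  # (20, 32, 50)
```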