- [[linear model for EEG]], [[forward model for EEG]], [[backward or inverse model for EEG]]
- [[activation patterns]]
- [[spatial filters]]
# Idea
Machine learning algorithms trained on data produce models whose weights (parameters) reflect how strongly each feature is used to maximize class separation. However, raw weights are often uninterpretable: a higher weight does not necessarily mean a feature carries more class-specific information, because a feature can also receive a non-zero weight simply because it helps the model suppress noise or distractor signals.
Filters (weights) in $W$ only tell us how to combine information from different channels to extract factors from the data, not how those factors are expressed in the measured channels. To obtain neurophysiologically meaningful interpretations or a meaningful visualization of the weights, we have to construct [[activation patterns]] from the extraction filters $W$.
Classifier weights can be transformed back to activation patterns, which are meaningful and can be interpreted:
$A = \mathrm{cov}(X)\, w$
where $A$ is the reconstructed activation pattern (i.e., the transformed classifier weights), $\mathrm{cov}(X)$ is the covariance matrix of the data matrix $X$ (N trials by M channels/features), and $w$ is the classifier weight vector of length M.
**Note that this approach applies only to classifiers that take feature covariance into account.**
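A minimal sketch in Python (NumPy + scikit-learn; the data and variable names are illustrative, not from the source) of turning the weights of a linear classifier into an activation pattern via $A = \mathrm{cov}(X)\, w$:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: N trials x M channels/features, binary class labels
rng = np.random.default_rng(0)
N, M = 200, 32
X = rng.standard_normal((N, M))
y = rng.integers(0, 2, size=N)

# Fit a linear classifier; its raw weights w (the extraction filter)
# are NOT directly interpretable as "where the information is"
clf = LogisticRegression().fit(X, y)
w = clf.coef_.ravel()                      # filter, length M

# Haufe et al. (2014) transformation: project the filter through the
# feature covariance to obtain the activation pattern A = cov(X) w
X_centered = X - X.mean(axis=0)
cov_X = np.cov(X_centered, rowvar=False)   # M x M covariance matrix
A = cov_X @ w                              # activation pattern, length M

# Non-zero entries of A (not of w) indicate channels carrying
# class-specific signal and can be projected onto the sensors.
```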
# References
- [[Haufe 2014 interpreting weight vectors of linear models in multivariate neuroimaging]]
- [[Grootswagers 2018 finding decodable information that can be read out in behavior]]
# Quotes
- [[Grootswagers 2017 decoding dynamic brain patterns time series neuroimaging data tutorial]]
> p691. Following successful classification of experimental conditions, it is sometimes of interest to examine the extent to which different voxels (fMRI) or sensors (MEG/EEG) drive classifier performance. During standard classification analysis, each feature (e.g., MEG sensors) is assigned a weight corresponding to the degree to which its output is used by the classifier to maximize class separation. Therefore, it is tempting to use the raw weight as an index of the degree to which sensors contained class-specific information. However, this is not straightforward, as higher raw weights do not directly imply more class-specific information than lower weights. Similarly, a nonzero weight does not imply that there is class-specific information in a sensor (for a full explanation, proof, and example scenarios, see Haufe et al., 2014). This is because sensors may be assigned a nonzero weight not only because they contain class-specific information but also when their output is useful to the classifier in suppressing noise or distractor signals (e.g., eyeblinks or heartbeats). An elegant solution to this issue was recently introduced by Haufe et al. (2014) and has been applied to MEG decoding (Wardle et al., 2016). This consists of transforming the classifier weights back into activation patterns. Following this transformation, the reconstructed patterns are interpretable (i.e., nonzero values imply class-specific information) and can be projected onto the sensors. It is important to note, however, that the reliability of the patterns depends on the quality of the weights. That is, if decoding performance is low, weights are likely suboptimal, and reconstructed activation patterns have to be interpreted with caution (Haufe et al., 2014).
# Figures
- [[Grootswagers 2017 decoding dynamic brain patterns time series neuroimaging data tutorial]]
> p692. Figure 13B shows the result for the example MEG data at four time points (using the FieldTrip toolbox for MATLAB: Oostenveld, Fries, Maris, & Schoffelen, 2010); here the results are scaled by the inverse of the source covariance (A × cov(X × w)⁻¹) to allow for comparison across time points. Note that this method cannot be directly used if multiple time points are used for classification (e.g., the sliding window approach described in the Improving Signal to Noise section).
![[Pasted image 34.png]]
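A hedged sketch of the scaling mentioned in the quote above (the helper name and per-time-point usage are my assumptions, not from the tutorial): for a single filter, $\mathrm{cov}(Xw)$ is just the variance of the estimated source, so dividing the pattern by it makes magnitudes comparable across time points.

```python
import numpy as np

def scaled_activation_pattern(X, w):
    """Activation pattern scaled by the inverse of the source variance.

    X : (N trials, M channels) data at one time point (hypothetical input)
    w : (M,) classifier weight vector for that time point
    """
    Xc = X - X.mean(axis=0)
    A = np.cov(Xc, rowvar=False) @ w   # pattern A = cov(X) w
    s = Xc @ w                         # estimated source per trial, length N
    return A / s.var()                 # A x cov(Xw)^(-1), comparable across time points
```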