Journal Articles
Predictive modeling of significance thresholding in activation likelihood estimation meta-analysis
Open Access. Publisher: Journals Gateway
Imaging Neuroscience (2025) 3: imag_a_00423.
Published: 10 January 2025
Abstract
Activation Likelihood Estimation (ALE) employs voxel- or cluster-level family-wise error (vFWE or cFWE) correction or threshold-free cluster enhancement (TFCE) to counter false positives due to multiple comparisons. These corrections utilize Monte-Carlo simulations to approximate a null distribution of spatial convergence, which allows for the determination of a corrected significance threshold. The simulations may take many hours, depending on the dataset and the hardware used to run the computations. In this study, we aimed to replace the time-consuming Monte-Carlo simulation procedure with an instantaneous machine-learning prediction based on features of the meta-analysis dataset. These features were created from the number of experiments in the dataset, the number of subjects per experiment, and the number of foci reported per experiment. We simulated 68,100 training datasets, each containing between 10 and 150 experiments, and computed the vFWE, cFWE, and TFCE significance thresholds. We then used these data to train one XGBoost regression model for each thresholding technique. Lastly, we validated the performance of the three models using 11 independent real-life datasets (21 contrasts) from previously published ALE meta-analyses. The vFWE model reached near-perfect prediction levels (R² = 0.996), while the TFCE and cFWE models achieved very good prediction accuracies of R² = 0.951 and R² = 0.938, respectively. This means that, on average, the difference between predicted and standard (Monte-Carlo-based) cFWE thresholds was less than two voxels. Given that our model predicts significance thresholds in ALE meta-analyses with very high accuracy, we advocate our efficient prediction approach as a replacement for the currently used Monte-Carlo simulations in future ALE analyses. This will save hours of computation time and reduce energy consumption. Furthermore, the reduced compute time allows for easier implementation of multi-analysis set-ups such as leave-one-out sensitivity analyses or subsampling.
Includes: Supplementary data
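The regression set-up described in the abstract — tabular features summarizing a meta-analysis dataset, mapped to a corrected significance threshold — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the feature values and the target are invented for runnability, and scikit-learn's `GradientBoostingRegressor` stands in for the XGBoost model the authors actually trained.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's training data: each row summarizes one
# simulated meta-analysis dataset. All values are illustrative, not the
# paper's actual feature set or thresholds.
n = 2000
n_experiments = rng.integers(10, 151, size=n)   # 10-150 experiments, as in the paper
mean_subjects = rng.uniform(10, 40, size=n)     # hypothetical mean subjects/experiment
mean_foci = rng.uniform(5, 20, size=n)          # hypothetical mean foci/experiment
X = np.column_stack([n_experiments, mean_subjects, mean_foci])

# Fake "significance threshold" target with a plausible monotone dependence
# on the features plus noise -- purely to make the regression runnable.
y = 0.02 * n_experiments + 0.5 * np.log(mean_foci) + rng.normal(0, 0.1, n)

# One regressor per thresholding technique would be trained this way.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)
print("train R^2:", model.score(X, y))
```

Once trained, predicting a threshold for a new dataset is a single `model.predict` call, which is what replaces the hours-long Monte-Carlo simulation at analysis time.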
Journal Articles
Consistent activation differences versus differences in consistent activation: Evaluating meta-analytic contrasts
Imaging Neuroscience (2024) 2: 1–16.
Published: 08 November 2024
Abstract
Meta-analytic contrasts are a promising aspect of coordinate-based meta-analyses in neuroimaging research, as they facilitate the statistical comparison of two meta-analytic results. They have been used for a multitude of comparisons, such as task conditions, cognitive processes, and groups. However, it remains to be tested how the results of meta-analytic contrasts relate to those of classic meta-analyses and vice versa. Here, we present a comprehensive empirical investigation of this issue using four datasets from different cognitive domains: working memory, working memory load, cognitive interference processing, and emotional face processing. For all four datasets, we compared the results of a standard meta-analysis across prototypical contrasts (condition A > condition B) reported in individual experiments with those of a contrast between two individual meta-analyses of the same conditions (meta-analysis condition A > meta-analysis condition B). The meta-analytic contrasts identified brain regions similar to those of the standard meta-analysis, but with relatively distinct spatial activation patterns. Additionally, fewer regions were revealed in the meta-analytic contrasts, especially in areas where the conditions spatially overlapped. This can be ascribed to the loss of information on the strength of activations in meta-analytic contrasts, over which standard meta-analyses summarize. In one dataset, additional regions were found in the meta-analytic contrast, potentially due to task effects. Our results demonstrate that meta-analytic contrasts can yield results similar to those of standard meta-analyses, albeit sparser. This confirms their overall validity, but also their limited ability to capture all regions found in standard meta-analyses. Notable differences observed in some cases indicate that such contrasts cannot be taken as an easy substitute for classic meta-analyses of experiment-level contrasts, warranting further research into the boundary conditions for agreement.
Includes: Supplementary data
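The core statistical point of the abstract — that summarizing contrasts (A − B) across experiments need not agree with contrasting two separate summaries (summary of A − summary of B) when the summary operation is nonlinear — can be shown with a toy numpy sketch. This is not the ALE algorithm; the threshold-based `convergence` function below is an invented stand-in for ALE's nonlinear, convergence-based summarization, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: per-experiment activation strength at one voxel under two
# conditions, A and B, across 20 experiments (synthetic values).
a = rng.uniform(0.0, 2.0, size=20)
b = a - 0.3 + rng.normal(0, 0.5, size=20)  # B slightly weaker on average

def convergence(x, thresh=1.0):
    """Nonlinear per-voxel summary: fraction of experiments whose value
    exceeds a threshold (a stand-in for convergence-based summarization)."""
    return float(np.mean(x > thresh))

# "Standard meta-analysis across contrasts": summarize A - B directly.
meta_of_contrasts = convergence(a - b, thresh=0.0)

# "Meta-analytic contrast": summarize A and B separately, then subtract.
contrast_of_metas = convergence(a) - convergence(b)

print(meta_of_contrasts, contrast_of_metas)
```

Because the summary is nonlinear, the two quantities typically differ, which mirrors the abstract's finding that meta-analytic contrasts discard information about activation strength that a standard meta-analysis of experiment-level contrasts retains.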