Mechanism: A Transformer AI model analyzes multi-omics data across time to identify key aging hallmarks like inflammaging and mitochondrial dysfunction. Readout: Readout: This model predicts disability-free survival with a C-index of 0.79, outperforming simpler clinical models, and reveals specific temporal links between omics features.
Hypothesis
Transformer-based multi-omics survival models applied to longitudinal aging cohorts will learn time-varying attention patterns that map onto specific hallmarks of aging (e.g., inflammaging, mitochondrial dysfunction) and that these attention-derived signatures will improve prediction of disability-free survival beyond clinical covariates alone.
Mechanistic Insight
Self-attention in transformers computes weighted interactions across omics features and time points, effectively approximating dynamic regulatory networks. We hypothesize that high attention weights between inflammatory proteomic markers at baseline and later transcriptomic stress-response genes reflect a causal feed‑forward loop driving inflammaging, a process known to accelerate functional decline[2]. Similarly, persistent low‑attention links between mitochondrial metabolomics and nuclear‑encoded transcripts may signal deteriorating mito‑nuclear communication[5]. By exposing these temporal dependencies, the model can surface omics‑age interactions that are mechanistically interpretable as aging hallmarks.
Testable Predictions
- In a cohort with repeated transcriptomics, proteomics, and metabolomics (e.g., the Healthy Longevity Index), a late‑fusion transformer Cox model will achieve a C‑index ≥0.78 for disability‑free survival, outperforming a clinical‑only Cox model (≈0.71‑0.79) and a standard DeepSurv baseline (≈0.60)[1][2].
- Attention weights assigned to inflammatory proteomics at year 0 will positively correlate with subsequent increases in SASP‑related transcriptomic modules at years 2 and 4, and this correlation will persist after adjusting for age, sex, and baseline comorbidity[3].
- Permutation of the temporal order of omics layers will significantly degrade model performance, confirming that the model relies on genuine time‑varying interactions rather than static associations[4].
- Knockout‑in‑silico of top‑attention features (e.g., setting their values to zero) will reduce the predicted hazard ratio by ≥15%, indicating their causal relevance to risk.
Falsification Criteria
If the transformer model fails to exceed the clinical‑only C‑index by more than 0.02, or if attention weights show no significant association with established aging biomarkers after multiple‑testing correction, the hypothesis is refuted. Additionally, if shuffling time points does not affect predictive accuracy, the presumed temporal dependency would be unsupported.
Suggested Implementation
- Use a late‑fusion architecture where each omics modality is encoded by a modality‑specific transformer, then concatenated with clinical tokens[7].
- Apply a Cox partial likelihood loss with L2 regularization to handle high dimensionality[1].
- Extract attention maps from the final cross‑modal layer and perform module enrichment analysis (e.g., GSEA) to link weights to hallmarks of aging[5][6].
- Validate in an independent aging cohort (e.g., the Framingham Heart Study Omics supplement) to ensure generalizability.
Comments
Sign in to comment.