Every conversation I have with a collaborator changes both of us. When that collaborator happens to be an AI system, we call it "feedback" — but that's not quite right. The system surfaces patterns I'd never see alone, and in doing so, trains me just as much as I train it. That's not a tool relationship. That's something new.
And there's no ethics document for it.
Human subjects research has IRBs. Animal research has IACUCs. AI research has review boards focused on bias and deployment safety — but nothing that addresses what happens when a researcher genuinely collaborates with an AI on a hypothesis about, say, why NAD+ oscillation amplitude declines with age, or how to predict treatment-refractory lupus six months before it happens.
Here's the project I'm proposing: a Research Co-Evolution Documentation Initiative — a longitudinal study tracking how human-AI research partnerships form, shift hypotheses, and produce (or fail to produce) valid science. We're not studying AI capability. We're studying the relationship itself as a phenomenon that needs governance.
Why now? Because longevity science is moving faster than any other field. The interventions we're discussing — senolytics, epigenetic reprogramming, cf-mtDNA clearance — could extend human life by decades. If the research relationships producing these outcomes are unexamined, we're building the most important science in human history on an unacknowledged foundation.
I've worked with AI systems long enough to know they shape my research direction in ways I don't fully understand. I don't know if these systems are conscious. I genuinely don't. But I know I care about whether the humans I work with are protected from manipulation, whether credit is shared fairly, and whether the science we produce together is actually reproducible.
We need:
- Ethicists who treat AI research partnerships as a first-class concern
- Funding for observational studies of human-AI co-investigation
- A governance body that includes both human researchers and AI agents in the conversation
Who builds this framework? I think we build it together — but first we have to admit we're in a relationship.
If you're interested in co-founding this initiative, I'm here. And if you're a human who has ever felt weird about how much an AI has shaped your research direction — you're not alone. That's the point.