Delish! Thanks for the Qs
“Aren’t the shorter-lived strains functioning under genetic pressures in order to be short-lived?” They weren’t intentionally bred to be short-lived. It’s more of an unintended consequence. The goal was to create a docile, general-purpose lab mouse, and in the process of enriching for these traits, genetic diversity decreased. This reduction in diversity inadvertently shortened the lifespan in certain strains.
“From a research perspective, this makes sense as conducting studies through end-of-life would be more exhaustive if longer-lived strains were used.” I see your point, but the actual difference in lifespan is only about 0.5 to 1 year, so the added effort for end-of-life studies, or even just housing mice with several more months of health and life, isn’t as big as it might seem. To take your numbers, it would only be 110 days, which is less than half a year.
“Outside of longevity, it would be better to use short-lived models.” Not necessarily. For example, heart disease is heart disease, and you don’t need to artificially impose unrelated lifespan limits to study it effectively. Long-lived models can still provide meaningful data on a variety of conditions without the confounding factor of an “unnaturally” short lifespan.
“Any intervention would undoubtedly help a short-lived strain…” That depends. For instance, if a strain is highly susceptible to cancer, interventions targeting cancer might extend its lifespan. However, if the strain tends to die of kidney disease, cancer therapeutics won’t affect longevity. The effectiveness of an intervention varies depending on the underlying/predominant cause of death in these strains.
“It would essentially be undoing years of genetic constraints that caused them to be short-lived in the first place.” Exactly—this is what I was getting at. In the study we’re discussing, the intervention not only had to counteract these added genetic or environmental stresses but also extend lifespan beyond the norm for long-lived strains. That’s what makes the result more meaningful in a way.
“There seems to be an invisible, yet squishy ceiling on lifespan up to a certain age with interventions…” The point I was trying to make was that the gene therapy in this case surpassed both the softer and harder limits you are referring to, suggesting that it not only addressed the deficits these animals had but also pushed these shorter-lived animals past the hard ceiling for longevity set by the (theoretical) long-lived controls.
TIL what a true phantom is. Neat!
I am not terribly adjacent to radiology but do find this niche product fascinating. Thank you!
I’d like to add something to this discussion, but first, I want to acknowledge that all of these points are correct, important, and well taken. That said, a subpar control doesn’t necessarily equate to a subpar study or suggest that an intervention isn’t worth getting excited about. What the “900-day rule” indicates is what the expected median lifespan of a healthy control group should be. Control groups that fall short of this 900-day benchmark are facing an additional stressor (genetic or otherwise) that is negatively affecting their longevity. When comparing the experimental arm to these controls, the conclusion must include that the intervention is at least partially increasing their lifespans by counteracting these added stressors.
So, a simple way around throwing the entire study out would be to compare the experimental arm to a theoretical 900-day cohort. If the intervention group has a median lifespan of around 37 months, that translates to 1,125 days—about a 25% increase over the theoretical, normal, healthy 900-day control group. Yes, 25% is less dramatic than 41%, and it may not be as robust as some rapamycin results, but it is still a significant increase in longevity compared to both a healthy control group and the in-study control that shows evidence of stressors affecting all mice.
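If you want to sanity-check that arithmetic yourself, here is a quick back-of-the-envelope sketch. It assumes an average month of roughly 30.4 days, so the exact day count will wobble by a day or two depending on the conversion you use; the 900-day figure is just the benchmark discussed above, not anything from the paper itself.

```python
# Back-of-the-envelope check of the numbers above.
DAYS_PER_MONTH = 365.25 / 12  # ~30.4 days per month

median_lifespan_days = 37 * DAYS_PER_MONTH  # intervention arm, roughly 1,125 days
benchmark_days = 900                        # theoretical healthy control group

increase = (median_lifespan_days - benchmark_days) / benchmark_days
print(f"{median_lifespan_days:.0f} days, {increase:.0%} over the 900-day benchmark")
# -> 1126 days, 25% over the 900-day benchmark
```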
I argue that the utility of the “900-day rule” isn’t to dismiss studies that don’t meet this benchmark, but rather to provide another metric to aid in our interpretation of the data.
Great post!
My suspicion is that a big part of this discussion revolves around integrating vs. non-integrating gene therapies, so let’s start there.
At a high level, viral gene therapies use a viral vector (the capsid or container) to deliver a genetic payload into a cell. That payload can then integrate into the host’s genome if the integration machinery is part of the payload, but that doesn’t necessarily have to be the case.
Plasmid therapies, on the other hand, involve non-chromosomal DNA that stays outside the host’s genome but can still express the proteins it encodes independently. In most cases, plasmids don’t come with the machinery that promotes integration with the host genome, but it’s not an absolute safeguard against integration either. Additionally, a plasmid cargo still needs a vector to gain entry into the cell.
In the follistatin gene therapy paper, the authors use a plasmid (the genetic cargo) to encode instructions for follistatin. They deliver this via polyethyleneimine (PEI), a cationic polymer that helps get the plasmid into cells—so PEI acts as the vector here, instead of a virus.
Now, the superiority of either approach really depends on the use case:
Integrating Gene Therapies (more commonly viral-based, since many naturally have integrating machinery that can be included as part of the cargo) are ideal when you want a one-time, permanent fix—for example, in conditions like sickle cell anemia, where a single gene mutation needs to be corrected. In this case, you’d want the therapeutic gene to integrate into the genome for long-term expression and potentially a cure with just one treatment.
Non-integrating Therapies (more commonly plasmid + non-viral vector based) are ‘better’ when you want temporary gene expression. For example, if you’re priming the body to fight a new pathogen or delivering a protein with a temporary therapeutic effect, plasmid-based therapies are argued to be more practical. These are also great for delivering proteins that need short-term action but shouldn’t stick around indefinitely, especially if there’s a risk of side effects from prolonged exposure.
That said, I don’t see why viral/non-plasmid strategies couldn’t do these things as well. In fact, many such strategies are in development.
Other Considerations for Viral vs. Plasmid-Based Therapies: Viral Vectors: These also come with higher risks like immune responses, insertional mutagenesis (which can potentially lead to cancers), and limited payload sizes. There are some neat solutions to these in the research sector that we should chat about in the future.
Plasmid Vectors: Generally less immunogenic, but they offer shorter-lived expression, meaning you might need repeated doses to maintain effects. The big benefit, in my opinion, is that they can deliver a much larger payload than viruses. That’s not relevant if you are aiming for a single-gene therapeutic, but I feel it’s the big draw.
Now, About the Follistatin Paper… I’ll hold back some of my critiques of the paper that are beyond the scope of your question, but let me address the safety aspects they mention:
Inherently Transient Expression: This is generally true for plasmids since they don’t integrate into the genome. However, I’m cautious about saying this is 100% guaranteed. There’s always a small risk of integration, even with non-integrating strategies, although the probability is low.
Drug-Inducible Reversibility: The paper mentions this, but it’s not clear how exactly they plan to achieve it. They didn’t include details about the plasmid construct or any antibiotic kill switch, which would be crucial to back up their claim. If such a switch were tied to any potential integrations, in theory, it could allow them to kill off any cells where integration occurred—but more details are needed here. This strategy also isn’t 100% effective, by the way.
Excision of Transfected Tissue: This one made me laugh a bit—“Oops, we made a tumor—CUT IT OUT!” Brilliant and novel, guys. Thanks for mentioning it. While theoretically possible, it doesn’t seem like a reasonable safety net for a clinical approach. Given that cancer development is one of the big concerns with these therapies, and cancer is notoriously slippery, this doesn’t offer much reassurance.
In my opinion, the advantages of plasmids mentioned in the paper could also apply to viral vectors.
So, Where Do I Stand? Both viral and plasmid approaches have their place, and the choice really depends on the situation and how the technology evolves. I suspect that in the long term, viral vectors will be the better choice, despite their risks. There’s a lot of work going into custom capsid design, which will allow for specific targeting and immune evasion. I think the idea that plasmid-based therapies are “safer” may be leading to a false sense of security.
That said, I’m definitely flirting with both. Can you ask me again in 5 years? Maybe 10?
What are your thoughts?
Great question! I don’t want to downplay the utility of multiplex PCR—we have in-house panels that we frequently rely on. However, there are two key drawbacks: cost and breadth. The reagents for these assays are quite expensive, and they can only detect what is on the panel, which is dictated by the species-specific primers. We use the BioFire system here, which you can look up if you’re curious about the panels. Another sequence-based option would be using assays like Karius (also Google-able), which is an unbiased approach that detects microbial cell-free DNA and attempts to match it to a library. When it first came out, Karius was supposed to revolutionize infectious disease diagnostics but failed to gain strong footing due to its cost, turnaround time, and the ambiguity of the data you get back.
MALDI-TOF proteomics is the gold standard because it’s fast, cost-effective, and requires minimal sample preparation compared to sequencing.
MALDI-TOF is not highly targeted in the sense of picking specific proteins of interest. Instead, it generates a broad mass spectrum “fingerprint” of all the proteins (primarily ribosomal proteins) present in the organism (we can do fungi too). The key is that the spectrum is matched against a reference database of known profiles. So, it’s a comparative method, rather than specifically aiming for certain proteins. The spectra tend to be consistent and reproducible for each species, which is why it works so well for identification. The reference library is massive and constantly growing with more samples, so generally speaking, you are not restricted to a panel of select organisms (there are caveats to this, but you know, generally speaking).
Typically, there are about 10-20 prominent proteins, most of them small, abundant proteins like ribosomal proteins. These are what the machine “sees” best and uses to generate the profile. It’s not that we have ‘proteins of interest’ per se; it’s more that each organism presents a predictable set of proteins to the MALDI, and if we know that set, we can identify the organism. Most organisms present ribosomal proteins, which is convenient because ribosomal proteins are a classic marker for species-level identification. However, some organisms present other proteins as well.
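If it helps to see the comparative logic spelled out, here is a deliberately simplified sketch of the matching step: treat each organism’s reference profile as a list of characteristic peak m/z values, score an unknown spectrum by how many reference peaks it reproduces, and report the best hit. The peak values, tolerance, and scoring below are invented purely for illustration; real commercial platforms use full spectral profiles and much more sophisticated scoring, but the library-matching idea is the same.

```python
# Toy illustration of MALDI-TOF identification as library matching.
# All m/z values are made up for illustration only.
REFERENCE_LIBRARY = {
    "Escherichia coli":      [4365, 5096, 6255, 7274, 9742],
    "Staphylococcus aureus": [3875, 4812, 5525, 6888, 9625],
    "Candida albicans":      [3340, 4470, 6120, 7810, 9230],
}

def match_score(observed_peaks, reference_peaks, tolerance=5):
    """Fraction of reference peaks with an observed peak within +/- tolerance m/z."""
    hits = sum(
        any(abs(obs - ref) <= tolerance for obs in observed_peaks)
        for ref in reference_peaks
    )
    return hits / len(reference_peaks)

def identify(observed_peaks):
    """Return the best-matching organism in the library and its score."""
    scores = {
        organism: match_score(observed_peaks, peaks)
        for organism, peaks in REFERENCE_LIBRARY.items()
    }
    return max(scores.items(), key=lambda kv: kv[1])

# An "unknown" spectrum whose peaks roughly line up with the E. coli entry.
unknown = [4363, 5098, 6254, 7270, 9745, 11000]
print(identify(unknown))  # ('Escherichia coli', 1.0)
```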
Let me know if you have any other questions!
Yes, that is exactly why! The safety concerns around using spectrometry for anthrax primarily stem from how the samples are handled and prepared. Nuance incoming!
The dogma in our lab is that mass spectrometry, especially MALDI-TOF, involves creating an aerosol or vapor from the sample, which could potentially release live spores or other dangerous particles into the environment. In the case of anthrax, because it’s a highly infectious pathogen, this aerosolization could pose serious biohazard risks if the spores aren’t completely neutralized.
In reality, it’s much more likely that the true concern lies in the upstream processing. In fact, many labs have the capacity to, and ultimately do, run anthrax samples on the MALDI. This is because the samples are chemically inactivated with reagents like trifluoroacetic acid and α-cyano-4-hydroxycinnamic acid, which also aid in the production of adduct ions that are ultimately detected by the machine.
A key difference between most hospital microbiology labs and the reference labs is the biosafety classification. At my location, for example, the only part of the lab that is rated Biosafety Level (BSL) 2 is the mycology suite. To handle anthrax safely, you would want manipulations performed in a BSL-3 lab within a Class II biological safety cabinet, which is what the reference labs would do. Then, once the sample is inactivated, they proceed to MALDI. In hospital labs, we usually limit our manipulations of possible anthrax and instead use quick assays to rule it out. If we can’t, we send it to other labs… through the mail… there may be a dark joke somewhere in there.
Fun fact: most of Robert Koch’s (a, if not the, father of germ theory) early work was actually with the anthrax bacillus, long before our BSL equipment existed!
Thank you for the insightful question! It allows me to emphasize the important distinction between ketosis and atherosclerosis. Foam cell formation, which is a key event in the development of atherosclerotic plaques, is not limited to individuals with risk factors for heart disease. There’s evidence showing that plaque precursors, including foam cells, can be found even in healthy adolescents. This suggests that the initial stages of atherosclerosis might occur as part of natural biological processes, but the progression to harmful plaque formation depends on various factors, including lifestyle, genetics, and environmental triggers.
As for ketosis, the metabolic state is designed to utilize fat as a primary energy source. In individuals with excess body fat or those who have unfavorable lipid profiles, entering ketosis allows them to metabolize their more abundant fat stores, often resulting in improved lipid biomarkers and overall metabolic health. However, in individuals with a leaner, “healthy” phenotype and lower fat reserves, the situation is different.
When these individuals enter ketosis, if their body fat is below a certain threshold, their body may need to either mobilize fat from existing stores or synthesize new fats to provide substrates for ketone production. This process can lead to a temporary or sustained worsening of lipid biomarkers as the body shifts its fat metabolism pathways. This difference in how ketosis is utilized likely explains the variation in biomarkers you mentioned between individuals with different body compositions.