If you have noticed a sudden accumulation of wrinkles, aches and pains or a general sensation of having grown older almost overnight, there may be a scientific explanation. Research suggests that rather than being a slow and steady process, aging occurs in at least two accelerated bursts.
The study, which tracked thousands of different molecules in people aged 25 to 75, detected two major waves of age-related changes: one at around age 44 and another at around 60. The findings could explain why spikes in certain health issues, including musculoskeletal problems and cardiovascular disease, cluster at particular ages.
“We’re not just changing gradually over time. There are some really dramatic changes,” said Prof Michael Snyder, a geneticist and director of the Center for Genomics and Personalized Medicine at Stanford University and senior author of the study.
“It turns out the mid-40s is a time of dramatic change, as is the early 60s – and that’s true no matter what class of molecules you look at.”
…
The research tracked 108 volunteers
Not enough to actually mean anything.
My favorite part of science discourse will always be people self-reporting how little they understand the math behind statistics by complaining about sample sizes that have nothing wrong with them.
Statistics? Statistically speaking, they studied 0.00000135% of the population, all located in California.
Again, proving the point
I don’t have the time or energy to do a full statistics course, but there’s the whole concept of sampling: https://en.wikipedia.org/wiki/Sampling_(statistics)
For a very basic example, say you have 1 million people, 200,000 prefer burgers and 800,000 prefer pizza, and you pick people out of that group of 1 million at random.
How many do you need to pick to be 95% confident that the sample ratio falls within 5 percentage points of the true distribution in the population? The answer is: 246. 246 is a big enough sample for a 95% confidence level with a ±5% margin of error in this specific example.
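That 246 comes from the standard sample-size formula for a proportion, n = z²·p·(1−p)/e², where z ≈ 1.96 for 95% confidence and e is the margin of error. A minimal sketch (the function name is my own):

```python
import math

def sample_size(p, margin, z=1.96):
    """Minimum n so a random sample estimates a true proportion p
    within +/- margin, at the confidence level implied by z
    (z = 1.96 corresponds to ~95% confidence)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# 20% prefer burgers; we want the estimate within 5 percentage points:
print(sample_size(0.2, 0.05))  # 246
```

Note that the population size (1 million) never appears: for large populations, the required sample size depends only on the desired precision, which is why a study does not need to sample some fixed percentage of humanity.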
There’s a lot more to this, of course, but hopefully this is sufficient to showcase that you do not need large amounts of data to derive conclusive results
Usually in a scientific context you go more the route of null-hypothesis testing: you calculate the probability of seeing data at least as extreme as yours if the result were purely due to chance (the null hypothesis). That probability is the p-value, so a small p-value means the result is unlikely to be just random noise.
But, again, there’s so much more to statistics than this; this is just the very basics.
Read further in that paragraph:
Researchers assessed 135,000 different molecules (RNA, proteins and metabolites) and microbes (the bacteria, viruses and fungi living in the guts and on the skin of the participants).
Also, see the previous article in Nature linked in the article. That study looked at fewer proteins, but had over 4,000 participants.