Popular science websites and lifestyle bloggers often tout headlines along the lines of "Is X good for you? Science says YES!" These articles usually cite some new study without going into its details, rarely touching even the surface of its methodology or the significance of its results.
The study measures the impact of X on some output(s) Y, and just because it concludes there is a positive impact on Y (no matter the magnitude), the pop science article reports that X is objectively good. It will end by recommending that every reader do X, incorporate X into their daily routine, and never again feel guilty about doing X (since X is often something associated with bad habits, like drinking a particular kind of alcohol or playing video games).
Clickbaity headlines like that easily imprint a positive association with X in most readers' minds. But the fact that a study found a positive impact of X on Y does not warrant calling X objectively "good for you".
Even setting aside the general problems that afflict many studies (was the sample large enough? was the methodology sound? were biases introduced? was the population representative of the reader?), any study can test only a limited number of outputs Y. So even if the study was performed perfectly, and you agree with the authors on which changes in Y count as positive, it can make no claims about all the other aspects that X could potentially affect, including ones we wouldn't even expect to be affected. A study showing that red wine improves one cardiovascular marker, for example, says nothing about its effects on sleep, liver health, or anything else it did not measure.
You will most likely benefit more from sticking with "Lindy" things (per Nassim Taleb's terminology: things that have already survived a long time and are therefore likely to keep serving you well) than from jumping on any particular pop science trend.