<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ScienceBlog.com</title>
	<atom:link href="https://scienceblog.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://scienceblog.com/</link>
	<description>Original science news reporting in Plain English</description>
	<lastBuildDate>Wed, 06 May 2026 12:22:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://scienceblog.com/wp-content/uploads/2021/05/cropped-cropped-sb1000-32x32.png</url>
	<title>ScienceBlog.com</title>
	<link>https://scienceblog.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">243524309</site>	<item>
		<title>More Than a Third of Americans Have Lost Relationships Over Politics</title>
		<link>https://scienceblog.com/more-than-a-third-of-americans-have-lost-relationships-over-politics/</link>
					<comments>https://scienceblog.com/more-than-a-third-of-americans-have-lost-relationships-over-politics/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 12:22:17 +0000</pubDate>
				<category><![CDATA[Brain & Behavior]]></category>
		<category><![CDATA[Social Sciences]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575990</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Somewhere around 2016, something shifted in how Americans fight about politics. Not in the fights themselves, which have always been loud and sometimes vicious, but in what came after. People started walking away. From friends of twenty years, from family dinners, from colleagues they&#8217;d liked well enough before they learned how they voted. A new study puts a number on it: 37 percent of Americans now report having lost at least one relationship due to political differences, a proportion that researchers at the University of California, Irvine say appears to have grown substantially over the past decade.</p>
<p>The findings, published this month in <em>PNAS Nexus</em>, are drawn from four separate datasets totaling nearly 4,000 respondents, supplemented by data from the American National Election Studies. They represent, as far as anyone can tell, the most systematic attempt yet to measure what the researchers call &#8220;political breakups&#8221;: the endpoint of polarization, when animosity stops being something people feel and becomes something they act on.</p>
<p>What makes the phenomenon worth studying, and what gives the numbers their particular weight, is the cascade of consequences that seems to follow. People who lose relationships over politics don&#8217;t simply move on; they appear to become worse at understanding their opponents. Those who reported breakups in the study overestimated the extremity of opposing views by a striking margin. Democrats who had cut ties with Republicans estimated, on average, 12.6 percentage points more of them agreed with white nationalists than their fellow Democrats without breakups had estimated (and those estimates were already off by roughly thirty points). Republicans who had broken from Democrats thought 19.2 percentage points more of them didn&#8217;t love America. Broken contact, it turns out, doesn&#8217;t neutralize hostility. It seems to amplify it.</p>
<h2>Friendships at the Front Line</h2>
<p>The study found that friendships bear the brunt. Of those who reported political breakups, 62 percent lost a friend, versus 40 percent who lost family contact and only 10 percent who ended a romantic relationship. That asymmetry is probably not accidental. Romantic partnerships and family ties come bundled with financial entanglements, shared children, holiday obligations, and social pressures that make severance genuinely costly. Friendships, the researchers note, are uniquely exposed: close enough that political differences tend to surface, but without the structural scaffolding that keeps other relationships intact through disagreement.</p>
<p>The partisan skew is one of the more politically uncomfortable findings. Democrats were substantially more likely to report having had a political breakup across all four datasets, and substantially more likely to have initiated one. In the most recent survey, 47 percent of Democrats reported losing a relationship, compared with 29 percent of Republicans. Among Democrats who had experienced breakups, 66 percent said they were the ones who ended things; among Republicans with breakups, only 27 percent said the same. The researchers are careful not to read a simple moral into this. They note that recent research suggests Democrats perceive Republicans as posing a specific kind of harm to disadvantaged groups, which may heighten the felt cost of maintaining those friendships. Whether the asymmetry reflects something stable about liberal and conservative moral psychology, or something more contingent about the Trump era specifically, they say they can&#8217;t determine from existing data.</p>
<p>The trajectory matters, too. Measuring historical trends in relationship dissolution is genuinely difficult (people forget; they reclassify; time heals some rifts before surveys catch them). But the available indicators all point the same direction. Breakups attributable to the 2024 presidential election had already, within five and a half months of the vote, exceeded the rate reported after the 2016 election at a year&#8217;s remove. Panel participants surveyed after both the 2020 and 2024 elections showed a small but statistically significant increase in family relationships damaged by political differences.</p>
<h2>A Lonelier Democracy</h2>
<p>There&#8217;s a feedback loop embedded in these findings that the researchers clearly find troubling. The existing literature on intergroup contact suggests that exposure to people with different views (real, sustained exposure, the kind that comes from actual friendship) tends to reduce partisan hostility and build political tolerance. Political breakups close exactly that channel. Someone who severs a friendship with a person from the other party loses not just a friend but a corrective: a reason to think that the people over there aren&#8217;t quite as extreme or as malevolent as the media and their own social circle might imply. What the study finds is that the experience of a breakup is associated with precisely the distorted perceptions that more contact might have prevented. People who broke up with political opponents were 13 percent more likely to attribute selfish motives to opposing voters than co-partisans who hadn&#8217;t, and showed colder feelings toward those voters by nearly eight points on a hundred-point feeling thermometer. The hostility toward ordinary voters (as distinct from politicians) was notably sharper than the hostility toward candidates and elites, which is the inverse of how affective polarization usually runs.</p>
<p>The researchers acknowledge their data can&#8217;t establish causality with certainty. Cross-sectional surveys are limited in that way. It&#8217;s possible, maybe even likely, that people who were already more hostile toward the other side were more prone to cutting ties in the first place, rather than the breakup itself generating the subsequent hostility. Probably both processes are operating, the researchers suggest, feeding on each other. After a breakup, people may selectively attend to media coverage that confirms their decision was right, rationalizing the cost. They may generalize from a single unpleasant acquaintance to an entire half of the electorate.</p>
<p>There is also the loneliness question. The U.S. Surgeon General declared an epidemic of loneliness in 2023, identifying weakened social ties as a major public health concern. The irony the researchers flag, quietly but pointedly, is that polarization is simultaneously eroding the social connections that protect mental and physical health, and doing so partly through this mechanism: people actively cutting off relationships, believing it necessary, while incrementally isolating themselves.</p>
<p>The researchers stop short of prescriptions. They call for longitudinal work to disentangle cause from effect, and for comparative studies across multiparty systems where the dynamics might look different. What they do say plainly is that political breakups are not simply an individual choice or a symptom of healthy self-sorting. They are a social cost. And if the trends continue, the bill keeps rising on both ends: in the health of a democracy that depends on citizens actually encountering each other across difference, and in the well-being of a population that depends on relationships to stay sane and connected.</p>
<p>The question is whether that recognition changes anything, or whether the next election just adds a few more percentage points to the count.</p>
<p>Source: <a href="https://doi.org/10.1093/pnasnexus/pgag067">Güngör &amp; Ditto, <em>PNAS Nexus</em>, 2026. doi:10.1093/pnasnexus/pgag067</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why do political breakups seem to make people more hostile toward the other side, not less?</strong></p>
<p>Losing a relationship over politics tends to close off the very contact that helps people form accurate impressions of those they disagree with. Without that, people appear to rely more heavily on partisan media and in-group accounts, which tend to paint opponents as more extreme. The study also suggests a rationalization process: after the social and emotional cost of ending a friendship, people may seek out information that confirms they made the right call, deepening rather than moderating their views.</p>
<p><strong>Is it true that Democrats are more likely to end friendships over politics than Republicans?</strong></p>
<p>Yes. Across all four datasets, Democrats were consistently more likely to report political breakups and more likely to have been the ones who initiated them. The researchers flag this as a real asymmetry but resist simple explanations; one current line of research suggests Democrats perceive Republicans as posing particular harm to disadvantaged groups, which may raise the felt stakes of maintaining those friendships. Whether the difference is a stable feature of liberal versus conservative psychology or a response to specific political figures and events since 2016 remains an open question.</p>
<p><strong>Could repairing these lost relationships actually help reduce polarization?</strong></p>
<p>The research on intergroup contact suggests it might, though the effect is probably modest and depends heavily on the quality of contact rather than mere exposure. What the study underscores is that the inverse is also true: losing contact with people who think differently removes a natural corrective to distorted perceptions, leaving people more reliant on partisan sources and less likely to encounter views that challenge their assumptions. Rebuilding those connections, at scale, is a harder problem than identifying their absence.</p>
<p><strong>How does losing a friendship over politics affect your health?</strong></p>
<p>The paper doesn&#8217;t measure health outcomes directly, but it flags a significant overlap with the loneliness literature. Decades of research link weak social ties to elevated risks of mortality, depression, and physical illness. If political breakups are incrementally shrinking people&#8217;s social networks (and the study suggests they are, for a substantial share of the population), that has implications well beyond the political. The U.S. Surgeon General&#8217;s 2023 report on the loneliness epidemic identified exactly this kind of social erosion as a public health concern.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/more-than-a-third-of-americans-have-lost-relationships-over-politics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575990</post-id>	</item>
		<item>
		<title>Why You&#8217;re Losing Muscle on Weight Loss Drugs, and What a Gut Hormone Might Do About It</title>
		<link>https://scienceblog.com/why-youre-losing-muscle-on-weight-loss-drugs-and-what-a-gut-hormone-might-do-about-it/</link>
					<comments>https://scienceblog.com/why-youre-losing-muscle-on-weight-loss-drugs-and-what-a-gut-hormone-might-do-about-it/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 12:15:23 +0000</pubDate>
				<category><![CDATA[Health]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575987</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Every time you swallow something fatty, your small intestine quietly releases a hormone called FGF19. It travels to your liver, taps the brakes on bile acid production, and nudges the whole digestive apparatus toward equilibrium. A background signal, modest and chemical, doing its job without fanfare. Most people have never heard of it. Researchers at the University of Michigan now think it might be one of the most important variables in the entire weight loss equation.</p>
<p>Obesity affects roughly 40 percent of American adults, and the pharmaceutical response to that statistic has been, by any measure, dramatic. GLP-1 receptor agonists like semaglutide have generated the kind of clinical enthusiasm that borders on collective relief. The drugs work. Bodies get lighter. Blood sugar improves. But something else tends to happen too, something clinicians have been watching with quiet concern: lean mass goes with the fat.</p>
<h2>The Muscle Problem Nobody Is Talking About</h2>
<p>Lean mass, mostly skeletal muscle, is not simply the stuff that makes you look toned. It is metabolically active tissue; it burns calories at rest, it underpins physical function, and its loss is, to put it bluntly, one of the main reasons people regain weight after successful treatment. If you shed fat and muscle together during rapid weight loss, you come out the other side with a body that is smaller but in some ways metabolically worse off. The weight tends to come back. Usually as fat.</p>
<p>Bozadjieva-Kramer and her colleagues had been chasing a related question for years. Earlier work from the same group suggested FGF15 (the mouse equivalent of human FGF19) protects lean mass after sleeve gastrectomy. And a separate observation in humans hinted that baseline levels of FGF19 might predict how much muscle a person loses during a very-low-energy diet. Not every patient responded the same way. Some preserved muscle well. Others didn&#8217;t. &#8220;We were interested in understanding whether the levels of FGF15/19 could broadly predict weight loss outcomes,&#8221; said Nadejda Bozadjieva-Kramer, Assistant Professor of Surgery and member of the Caswell Diabetes Institute.</p>
<p>The new study, published in the journal <em>Diabetes</em>, tested that logic more rigorously than anything her team had done before. Two groups of mice, both fattened on high-fat diets for 22 weeks. Some had normal FGF15 function; others had been engineered to lack it entirely. Then the researchers split their approach. One set of animals simply had their diets switched back to standard chow. The other set stayed on the high-fat diet but started receiving daily semaglutide injections.</p>
<h2>Two Routes Down the Same Mountain</h2>
<p>What they found draws a line between two things that are often treated as equivalent: losing weight by eating less versus losing it with a drug. Diet was more effective at clearing fat from the liver and reducing overall body weight. Semaglutide, on the other hand, produced greater improvement in glucose tolerance, that measure of how gracefully the body handles blood sugar. Different paths; different metabolic consequences.</p>
<p>But the muscle story was the one that complicated everything. Mice lacking FGF15 lost significantly more lean mass when their diet changed. The hormone, it seems, is doing something protective during dietary weight loss that goes beyond its known role in bile acid regulation. Without it, muscle erodes faster. With it, the body holds on.</p>
<p>Semaglutide was less discriminating. It decreased lean mass in all the animals, regardless of whether they had FGF15. The drug simply didn&#8217;t care. Which is, in a way, a useful finding: it tells you that the gut hormone pathway relevant to dietary lean mass preservation is not the same pathway semaglutide is using. These are not redundant mechanisms. They are probably complementary, and right now we are mostly triggering one while ignoring the other.</p>
<p>&#8220;Weight loss is not a one-size-fits-all approach, and the specific treatment approach matters,&#8221; Bozadjieva-Kramer said. &#8220;It involves complex communication between the gut and liver, and understanding weight loss can help us tailor specific weight loss interventions for our patients.&#8221;</p>
<p>The bile acid data added another layer. Semaglutide modulated the composition of bile acids in ways that were particularly pronounced in mice lacking FGF15, suggesting the hormone is also involved in buffering whatever shifts the drug produces in gut chemistry. Remove FGF15 and the system responds more dramatically, less stably. It is, perhaps, a hint at why some patients on GLP-1 drugs experience gut side effects more intensely than others.</p>
<h2>Toward Combinations That Make More Sense</h2>
<p>There are real limits to what this study can tell us. The researchers acknowledge that clinical weight loss is most effective when GLP-1 drugs are combined with diet and exercise, a combination this study did not examine. Mouse models have their own particular metabolism; lean mass in a rodent is not a perfect proxy for lean mass in a person carrying two jobs and three kids. The leap from knockout mice to a clinical decision is still a long one.</p>
<p>But the underlying question is already in the room. If FGF19 levels in the blood can predict, before treatment even starts, how well a patient will hold on to muscle during weight loss, that is clinically useful. It is the kind of biomarker that could, eventually, help a physician say: this patient would benefit from a dietary approach; that one would do better with semaglutide; perhaps this third patient needs a combination we have not yet optimized. The team is now working to understand how to combine dietary and pharmacological strategies to maximize metabolic benefits while minimizing lean mass loss. The gut, it turns out, is not just digesting your food. It is negotiating your future.</p>
<p>DOI: <a href="https://doi.org/10.2337/db25-0466">10.2337/db25-0466</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why do weight loss drugs like semaglutide cause muscle loss?</strong></p>
<p>Semaglutide reduces appetite and body weight, but the body doesn&#8217;t distinguish cleanly between fat and muscle when shedding mass rapidly. Skeletal muscle is metabolically costly to maintain, so the body tends to cannibalize it alongside fat stores during aggressive caloric restriction. The Michigan study found this happened regardless of whether the gut hormone FGF15 was present, suggesting semaglutide acts through a different biological pathway than the ones that normally protect muscle during dietary weight loss.</p>
<p><strong>Does losing muscle during weight loss make it harder to keep the weight off?</strong></p>
<p>Probably, yes. Skeletal muscle burns calories at rest, so losing it reduces your resting metabolic rate, meaning your body needs fewer calories to function. If you then return to your previous eating habits, or even a modest diet, you are more likely to regain weight as fat rather than muscle. This is one reason why preserving lean mass is increasingly seen as a central goal of obesity treatment, not a cosmetic afterthought.</p>
<p><strong>Could measuring FGF19 levels before treatment help doctors choose the right approach?</strong></p>
<p>That is the working hypothesis this research is building toward. Earlier work from the same Michigan group showed that baseline FGF19 levels in humans can predict how much lean mass a person loses during a very-low-energy diet. If that predictive relationship holds up in larger clinical studies, a simple blood test before starting a weight loss program could in principle guide treatment choices, though significant validation work remains.</p>
<p><strong>Is diet still better than semaglutide for weight loss?</strong></p>
<p>The study found that switching back to a standard diet was more effective at reducing liver fat and overall body weight, while semaglutide produced greater improvement in blood sugar control. Neither approach is straightforwardly better; they produce different metabolic outcomes and likely suit different patients. The real clinical question, which this study did not address, is how the two approaches work in combination with exercise, which remains the gold standard recommendation.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/why-youre-losing-muscle-on-weight-loss-drugs-and-what-a-gut-hormone-might-do-about-it/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575987</post-id>	</item>
		<item>
		<title>Running, Lifting, or Both: Why Combining Exercise Types Cuts Heart Risk by Nearly Half</title>
		<link>https://scienceblog.com/running-lifting-or-both-why-combining-exercise-types-cuts-heart-risk-by-nearly-half/</link>
					<comments>https://scienceblog.com/running-lifting-or-both-why-combining-exercise-types-cuts-heart-risk-by-nearly-half/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 12:10:12 +0000</pubDate>
				<category><![CDATA[Brain & Behavior]]></category>
		<category><![CDATA[Health]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575984</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Two curves. One shaped like the letter L, the other like the letter J. They look almost identical at first glance, but for anyone trying to understand how exercise protects the heart, the difference between them is rather significant. The L-curve describes what aerobic exercise does to cardiovascular risk: dramatic early gains for people who go from nothing to something, then a long, shallow tail of diminishing returns as activity increases. The J-curve describes what happens with weight training. Risk drops sharply at first, then, beyond roughly an hour a week, starts creeping back up. Maybe. The evidence is thinner there, and researchers are careful to say so.</p>
<p>This, in rough outline, is one of the central findings from a comprehensive review by Fangchao Liu and colleagues at the National Center for Cardiovascular Diseases in Beijing, published this spring in <em>Medicine Plus</em>. The team set out to do something the field had been slow to do properly: compare aerobic exercise, muscle-strengthening exercise, and combinations of both, not just for their effects on heart disease outcomes overall, but on each of the major risk factors along the way.</p>
<p>The aerobic side of the picture is well-established, if still producing surprises. The dose-response relationship follows that L-shape, which has a rather hopeful implication: the people who gain most from exercise are those who are currently doing nothing at all. Going from 2,000 daily steps to 5,000 is, by some estimates, associated with a 45 percent reduction in cardiovascular mortality risk. Climbing to 10,000 steps delivers no additional benefit on top of that. The marginal return on your tenth kilometer of running is considerably smaller than on your first. What is perhaps less intuitive is that even extremely brief bouts of activity seem to count. One study cited in the review found that just five to ten minutes of moderate-to-vigorous aerobic activity per day was associated with a 41 percent reduction in major adverse cardiovascular events, compared to people who were sedentary. The biology, it turns out, doesn&#8217;t demand hour-long sessions.</p>
<h2>When You Exercise May Matter As Much As How Much</h2>
<p>Timing has emerged as an unexpectedly interesting variable. An analysis of accelerometer data from nearly 30,000 UK Biobank participants found that people who exercised in the evening showed a 36 percent lower risk of cardiovascular death compared to their inactive counterparts, versus 17 percent for morning exercisers and 16 percent for afternoon exercisers. Whether that reflects something real about circadian biology or simply a quirk of who exercises in the evening (younger, perhaps, or less sedentary overall) is not yet settled. Still, the finding is striking enough that Liu&#8217;s team highlighted it specifically. There&#8217;s also encouraging news for the so-called &#8220;weekend warrior&#8221; pattern, where people cram a week&#8217;s worth of activity into one or two days. A separate study found this approach delivers roughly the same cardiovascular protection as evenly distributed activity, which is, practically speaking, quite useful to know.</p>
<p>The muscle-strengthening findings are where things get more complicated, and more contested. The J-shape of the dose-response curve suggests an optimal zone somewhere between 40 and 60 minutes of resistance training per week, with a putative safe upper limit around 130 to 140 minutes. Go beyond that, and the data hint that cardiovascular risk may begin to climb back toward levels seen in inactive people. The review is candid about how preliminary this is. Most of the evidence relies on self-reported questionnaires, which are notoriously unreliable for capturing resistance exercise (how hard did you actually lift? for how long? how often?). Accelerometers, which have transformed aerobic exercise research, still struggle to distinguish a squat from a slow walk. The J-curve, in other words, might be an artifact of measurement problems rather than real biology.</p>
<p>What&#8217;s less contested is that even small amounts of resistance training provide meaningful benefit. A large cohort study found that one to 59 minutes of weight training per week, the lowest dosage category examined, was associated with a 40 to 70 percent lower risk of total cardiovascular events compared to doing none, independent of how much aerobic exercise people were also doing. That independence matters. The two types of exercise appear to be protecting the heart through at least partly different mechanisms.</p>
<h2>The Case for Doing Both</h2>
<p>This brings the review to its most practically significant finding. When researchers compared inactive people to those doing only aerobic exercise, only resistance training, or both, the combined group came out substantially ahead. Aerobic-only participants showed a 29 percent lower risk of cardiovascular mortality. Resistance-only, 18 percent. People doing both: 46 percent. The synergy isn&#8217;t just additive; the two modalities appear to amplify each other&#8217;s effects. Aerobic exercise increases stroke volume and improves how efficiently vessels dilate; resistance training strengthens the muscle architecture of the heart itself and improves the microcirculation in peripheral muscle tissue. At the molecular level, both types of exercise trigger cascades of signaling molecules that reduce chronic inflammation, improve mitochondrial function, and help regulate blood pressure through different pathways. Running and lifting, in this sense, are not interchangeable. They are complementary tools.</p>
<p>For blood pressure specifically, the review synthesizes data from a remarkable meta-analysis of 270 randomized controlled trials. Aerobic exercise alone reduced resting systolic blood pressure by around 4.5 mmHg. Resistance training alone achieved similar reductions. But combined activity brought the systolic figure down by roughly 6 mmHg. For people with hypertension, those numbers translate to meaningful reductions in stroke risk. The review notes, with some understatement, that aerobic exercise&#8217;s antihypertensive effect is sometimes comparable to pharmacological therapy, and occasionally superior. Which is a fairly remarkable thing to say about going for a walk.</p>
<p>The picture for high-risk populations is more nuanced. People with existing cardiovascular risk factors, including diabetes, hypertension, and obesity, tend to see more pronounced benefits from physical activity than healthy populations. The biology is working harder to compensate for existing dysfunction, and exercise provides more leverage. But the flip side is that these same people face greater risk from exercise-related adverse events if they push too hard too fast. The review is explicit that personalized prescription matters here, and that blanket guidance is unlikely to serve high-risk individuals well.</p>
<p>Where the field is heading is fairly clear. Wearable technology is generating objective data on activity patterns at a scale that self-report questionnaires could never match, and machine learning is beginning to extract meaningful signal from the noise. The ambition is personalized exercise regimens that adjust in real time based on individual physiology and health trajectory. Whether that vision survives contact with real human lives is another question. For now, the two curves offer a starting point. Get moving. Then pick up something heavy.</p>
<p>Source: <a href="https://doi.org/10.1016/j.medp.2026.100137">Zhou T, et al. &#8220;Cardiovascular health benefits of physical activity: aerobic, muscle-strengthening, and combined exercise modalities.&#8221; <em>Medicine Plus</em> (2026). DOI: 10.1016/j.medp.2026.100137</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Is it true that too much weightlifting can be bad for your heart?</strong></p>
<p>Possibly, though the evidence is preliminary and comes with important caveats. Research suggests a J-shaped relationship between resistance training and cardiovascular risk, with an apparent sweet spot around 40 to 60 minutes per week and a possible upper safety limit near 130 to 140 minutes. Beyond that threshold, some data hint that risk may begin to rise. But most of this evidence relies on self-reported questionnaires, which are unreliable for capturing resistance exercise intensity and volume, so the J-curve may partly reflect measurement error rather than true biological risk.</p>
<p><strong>Why does combining cardio and weights protect the heart more than either one alone?</strong></p>
<p>The two types of exercise protect the heart through genuinely different mechanisms, and those mechanisms appear to complement rather than simply add to each other. Aerobic exercise improves how efficiently the heart pumps blood and how well blood vessels dilate; resistance training strengthens the heart muscle&#8217;s structural architecture and improves blood flow in peripheral muscles. At the molecular level, both trigger different signaling pathways that reduce inflammation, regulate blood pressure, and improve mitochondrial function. Doing both, in effect, covers more biological ground.</p>
<p><strong>Does it actually matter what time of day you exercise?</strong></p>
<p>Emerging data suggest it might, though researchers are cautious about drawing firm conclusions. An analysis of nearly 30,000 UK participants found that evening exercisers showed substantially greater reductions in cardiovascular mortality risk than morning or afternoon exercisers. Whether this reflects circadian biology, differences in exercise intensity, or characteristics of who tends to exercise in the evening is not yet clear. The finding is intriguing enough to warrant further study, but it shouldn&#8217;t override the more fundamental point: any exercise, at any time, is better than none.</p>
<p><strong>What if I can only exercise on weekends?</strong></p>
<p>The evidence suggests that concentrating your weekly activity into one or two days delivers cardiovascular protection comparable to spreading it evenly across the week. This so-called &#8220;weekend warrior&#8221; pattern has been specifically examined in large cohort studies and found to significantly reduce cardiovascular risk relative to being inactive. So if weekday constraints make regular exercise impossible, banking your minutes at the weekend still counts.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/running-lifting-or-both-why-combining-exercise-types-cuts-heart-risk-by-nearly-half/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575984</post-id>	</item>
		<item>
		<title>Super Shoes Make Runners Faster. They May Also Be Slowly Breaking Their Bones.</title>
		<link>https://scienceblog.com/super-shoes-make-runners-faster-they-may-also-be-slowly-breaking-their-bones/</link>
					<comments>https://scienceblog.com/super-shoes-make-runners-faster-they-may-also-be-slowly-breaking-their-bones/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 12:08:39 +0000</pubDate>
				<category><![CDATA[Health]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575981</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>The carbon fiber plate sits roughly 30 millimeters deep inside the sole, sandwiched in a foam so springy it seems almost alive. Slip on a pair of advanced footwear technology shoes, lace them up, and something odd happens: you feel faster before you&#8217;ve taken a single step.</p>
<p>Since Nike&#8217;s Vaporfly first crossed a finish line in 2016, the running world has been transfixed by these so-called super shoes, and for obvious reasons. They shave real minutes off real times. At the elite level, where careers turn on seconds, that is not a trivial thing. But a study published this spring in <em>PM&amp;R</em> raises a question the sport has been mostly reluctant to ask: what, exactly, are these shoes doing to the bodies inside them?</p>
<p>The answer is complicated, probably inconvenient, and may matter quite a lot for the millions of recreational runners who have adopted the technology alongside the professionals who made it famous.</p>
<p>Researchers at Mass General Brigham recruited 23 healthy elite distance runners (11 women, 12 men, average age around 25) and put them through a biomechanical assessment across three shoe types: a standard neutral shoe, a lightweight responsive foam model, and a full advanced footwear technology shoe fitted with both highly cushioned foam and the now-ubiquitous stiff embedded plate. Each runner wore all three pairs, in randomized order, at three different paces: an easy training trot, a tempo effort, and something approaching 5-kilometer race speed. The researchers measured seven biomechanical variables known to predict bone stress injuries, those overuse insults that start as swelling in the bone and can deepen into stress fractures serious enough to end a season.</p>
<p>What came back was, at first glance, a mixed picture. In some respects the super shoe behaved itself.</p>
<p>In others, it did not. Cadence, the number of steps per minute, dropped significantly in the advanced footwear technology shoe compared to both alternatives. Fewer steps per minute means each stride covers more ground, which sounds efficient, but biomechanists tend to wince at it: overstriding loads the lower limb differently, and not in ways bones generally appreciate. Meanwhile the foot&#8217;s arch collapsed inward more in the super shoe than in the neutral model, a motion called rearfoot eversion excursion that has its own unflattering relationship with bone stress injury. Both of those changes were small, the researchers are careful to note. Small, though, is relative. When you log a hundred miles a week, small cumulative stresses have a way of arriving as big cumulative problems.</p>
<p>&#8220;AFT improves performance, but runners should balance this benefit with the possibility of subtle changes in loading on the body,&#8221; said Michelle Bruneau, the study&#8217;s lead author and a postdoctoral research fellow at Spaulding Rehabilitation. &#8220;Rotating shoes and gradually adapting to AFT may help reduce potential injury risk while optimizing running performance.&#8221;</p>
<p>There was, interestingly, one genuinely protective-looking signal buried in the data. The super shoe substantially reduced the demand placed on the ankle during push-off (the plantarflexion moment, in the jargon), which is noteworthy because the Achilles tendon and its neighboring structures take a considerable beating during that phase of the gait cycle. So the shoe is not simply a villain. It is, perhaps, more like a complicated character: redistributing load rather than eliminating it, moving stress away from some tissues and, possibly, concentrating it elsewhere. Whether that redistribution is a net benefit depends on which particular tissues you happen to be asking about and, presumably, on which ones you can least afford to damage.</p>
<h2>What the Body Is Actually Doing Inside a Super Shoe</h2>
<p>Bone stress injuries are not really bone injuries in the traditional sense, or at least they don&#8217;t start that way. They emerge from a mismatch between mechanical loading and the bone&#8217;s capacity to remodel; push the bone faster than it can adapt and you get microdamage accumulating ahead of repair. The biomechanical variables the Mass General team chose to measure were all selected precisely because they have shown up, in prior research, as flags for that kind of accumulated stress. Rearfoot eversion, for instance, alters the twisting forces on the tibia. Lower cadence shifts peak force distribution. These are not exotic laboratory abstractions; they are the mechanical signatures of injuries that sideline runners at every level of the sport.</p>
<p>What makes the super shoe findings harder to parse is that the shoes were not designed with any of this in mind. They were engineered for performance, full stop. The carbon plate&#8217;s job is to store and return energy; the cushioning foam&#8217;s job is to smooth impact. The fact that they appear to alter cadence and eversion in ways that might accumulate into injury risk is almost certainly an unintended side effect, the biomechanical equivalent of a drug with a useful primary action and a list of small print worth reading.</p>
<h2>Should Elite Runners Be Worried?</h2>
<p>The study had real limitations. Twenty-three runners is a small cohort, the assessment was cross-sectional (meaning each participant was measured once, not tracked over months of training), and the researchers could observe changed mechanics without being able to say whether those mechanics had actually injured anyone. That last gap is significant. A changed gait pattern is a risk factor, not a diagnosis.</p>
<p>Still, there is something worth sitting with here. The advanced footwear market has expanded with remarkable speed; these shoes are no longer worn only by Olympic hopefuls. Amateur runners shell out two, three, sometimes four hundred dollars for them, train in them daily, and have access to rather less physiological guidance than the elite athletes who were, in some sense, the original test population. Adam Tenforde, the study&#8217;s senior author and director of Running Medicine at Mass General Brigham, put it carefully: &#8220;Our study highlights the need for careful integration of AFT into training and underscores the importance of further research to better understand long-term strategies to modify risk for injury while recognizing the exciting gains related to this footwear on performance.&#8221;</p>
<p>Careful integration. It&#8217;s a phrase that probably deserves more attention than it&#8217;s getting in the general frenzy around these shoes. The biomechanical changes the researchers documented were small enough that they might, for some runners, matter very little. For others, particularly those already close to their bone stress injury threshold through high training loads or dietary factors or simply bad luck with geometry, the cumulative effect could tip the balance in an uncomfortable direction. Running medicine specialists increasingly recommend rotating between shoe types rather than training exclusively in advanced footwear, giving the body time to adapt across different loading patterns instead of settling into whatever the carbon plate is quietly teaching it to do.</p>
<p>The super shoe is not going anywhere. It is too fast, too lucrative, and (for many runners) too genuinely pleasurable to abandon on the basis of small biomechanical shifts that may or may not translate into injury. But sports science is in the early stages of understanding exactly what these shoes are doing beneath the surface, and the answers are likely to be more complicated than the marketing suggests. The next decade of running medicine research may well look back at the super shoe era the way cardiology now looks at certain miracle drugs from the 1990s: grateful for the gains, clearer-eyed about what was silently accumulating all along.</p>
<hr />
<p>Source: Bruneau MM et al. &#8220;Biomechanics associated with bone stress injuries while using advanced footwear technology in elite distance runners&#8221; <em>PM&amp;R</em> (2026). <a href="https://doi.org/10.1002/pmrj.70153">https://doi.org/10.1002/pmrj.70153</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Do super shoes actually cause bone stress fractures?</strong></p>
<p>The research doesn&#8217;t show that directly, at least not yet. What the Mass General Brigham study found is that running in advanced footwear technology shoes produces subtle changes in gait mechanics, specifically lower cadence and greater inward arch collapse, that are known risk factors for bone stress injuries. Whether those small biomechanical shifts translate into actual fractures over months of training is the question the next wave of research will need to answer.</p>
<p><strong>Why would a shoe designed to protect runners actually stress their bones?</strong></p>
<p>The advanced cushioning foam and carbon fiber plate in super shoes were engineered to return energy and smooth impact, not to preserve any particular gait pattern. The biomechanical changes appear to be unintended side effects of that energy-return design: the shoe subtly encourages longer strides and alters how the foot lands, which shifts mechanical loading across the lower limb in ways that bone doesn&#8217;t always welcome over high training volumes.</p>
<p><strong>Is it safer to train in regular shoes and only race in super shoes?</strong></p>
<p>That&#8217;s essentially what the researchers are suggesting, though they stop short of a firm prescription. Rotating between shoe types means the body doesn&#8217;t fully adapt to the gait mechanics any single shoe promotes, which may spread the mechanical load more broadly. Elite runners who train in neutral shoes and reserve advanced footwear for race day are probably getting most of the performance benefit with less accumulated biomechanical risk, though the data to confirm this definitively don&#8217;t yet exist.</p>
<p><strong>Could the protective ankle effect in super shoes offset the other injury risks?</strong></p>
<p>Possibly, for some runners. The study found that the super shoe significantly reduced the load on the ankle during push-off, which is good news for the Achilles tendon and surrounding structures. The problem is that the shoe appears to redistribute stress rather than eliminate it, moving demand away from the ankle while potentially concentrating it elsewhere in the lower limb. Whether that trade is favorable depends on the individual runner&#8217;s injury history and structural vulnerabilities.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/super-shoes-make-runners-faster-they-may-also-be-slowly-breaking-their-bones/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575981</post-id>	</item>
		<item>
		<title>A Single Dose of Psilocybin Leaves Lasting Marks on the Human Brain</title>
		<link>https://scienceblog.com/a-single-dose-of-psilocybin-leaves-lasting-marks-on-the-human-brain/</link>
					<comments>https://scienceblog.com/a-single-dose-of-psilocybin-leaves-lasting-marks-on-the-human-brain/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 12:03:29 +0000</pubDate>
				<category><![CDATA[Brain & Behavior]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575979</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Inside a brain on psilocybin, something unusual happens to the signal. The neurons keep firing, more or less as before, but the statistical pattern of that firing changes in a way that information theorists would recognise instantly: the data stream becomes harder to compress. Less predictable. Richer, in a technical sense, with information. This quality, which researchers call brain entropy, turns out to matter quite a lot, not just for what happens during the six or so hours a mushroom compound spends in the bloodstream, but for what happens to a person&#8217;s mind and white matter in the weeks that follow.</p>
<p>A study published this week in <em>Nature Communications</em> offers the most detailed look yet at what a single high dose of psilocybin does to the human brain, from the first hour of the experience through to a month afterward. The results are striking enough that even the researchers appear slightly surprised by some of them.</p>
<p>The work, led by Robin Carhart-Harris at UC San Francisco alongside colleagues at Imperial College London, recruited 28 healthy adults who had never taken a psychedelic in their lives. Each participant received two doses of psilocybin, a month apart: first 1 milligram, a sub-threshold amount that works as a functional placebo, and then 25 milligrams, a dose capable of producing a full psychedelic experience. Brain imaging came in three waves: before any dosing, a month after the placebo, and a month after the high dose. During each of the two dosing sessions, electrodes on the scalp recorded brain activity in real time.</p>
<h2>The Brain Becomes Harder to Read</h2>
<p>Within 60 minutes of swallowing the 25 milligram capsule, something measurable had already begun. The EEG traces showed a sharp rise in what the team measures as Lempel-Ziv complexity, essentially a calculation of how hard it is to summarise a stretch of neural signal without losing information. Alpha waves, which normally keep a kind of rhythmic order across the cortex, dropped substantially. Gamma activity climbed. The brain, in the language of information theory, had become noisier. More complex. Possibly more open.</p>
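<p>For readers curious about the measure itself, the idea behind Lempel-Ziv complexity is simple enough to sketch. The toy function below (an illustrative simplification, not the study&#8217;s analysis code) scans a binarized signal left to right and counts how many genuinely new &#8220;phrases&#8221; it meets; a rhythmic, predictable signal yields few, an irregular one many.</p>

```python
def lz_complexity(seq):
    """Count the distinct phrases met while scanning seq left to right,
    a simplified variant of Lempel-Ziv (1976) complexity."""
    i, phrases, n = 0, 0, len(seq)
    while i < n:
        length = 1
        # grow the current phrase while it already occurs earlier in the signal
        while i + length <= n and seq[i:i + length] in seq[:i + length - 1]:
            length += 1
        phrases += 1          # a genuinely new phrase has been found
        i += length
    return phrases

print(lz_complexity("01" * 10))   # rigidly periodic signal -> 3 phrases
print(lz_complexity("00110100"))  # shorter but irregular   -> 4 phrases
```

<p>The EEG analysis rests on the same principle: the higher the count for a stretch of signal, the harder that stretch is to summarise without loss, which is exactly what the researchers mean by &#8220;harder to compress.&#8221;</p>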
<p>This entropic quality peaked at around two hours post-dose, coinciding with the most intense phase of the experience. Twenty-seven of the 28 participants rated what they went through as the single most unusual state of consciousness of their entire lives; the remaining person placed it in their top five. No one who received the 1 milligram dose reported anything remotely similar.</p>
<p>What the team then found was a chain of prediction running forward through time. The degree to which a participant&#8217;s brain entropy had spiked during the session correlated with how much psychological insight they reported the following day, a quality assessed via a validated scale probing self-awareness and behavioural shifts. That insight, in turn, predicted improvements in well-being scores at the one-month mark. &#8220;Psychedelic means &#8216;psyche-revealing,&#8217; or making the psyche visible,&#8221; said Carhart-Harris. &#8220;Our data shows that such experiences of psychological insight relate to an entropic quality of brain activity and how both are involved in causing subsequent improvements in mental health. It suggests that the trip, and its correlates in the brain, is a key component of how psychedelic therapy works.&#8221;</p>
<h2>White Matter, Unexpectedly Changed</h2>
<p>The stranger finding, and the one the authors are most cautious about, comes from diffusion tensor imaging, a technique that tracks how water molecules move along the brain&#8217;s white matter tracts. One month after the 25 milligram dose, participants showed decreased axial diffusivity in two bilateral pathways connecting the prefrontal cortex to subcortical structures: one running to the striatum, one to the thalamus. The same measurement taken a month after the placebo showed nothing of the sort.</p>
<p>Axial diffusivity measures how freely water diffuses along the principal axis of a nerve fibre. When it decreases, the tract has in some sense become denser, more organised, or structurally altered in a way that constrains that diffusion. Similar decreases have been observed after intensive meditation practice and, interestingly, after rapid learning. What they signify at the cellular level remains genuinely unclear; the change could reflect dendritic growth, altered myelination, shifts in axon density, or changes in extracellular fluid, among other possibilities. The authors flag all of this explicitly. Without replication in a larger study using more advanced imaging sequences, drawing firm conclusions about microstructural neuroplasticity would be premature.</p>
<p>Still, the finding rhymes with preclinical work. Studies in mice have shown psilocybin promotes rapid growth of dendritic spines in frontal cortex; pig studies have found increased synaptic density after a single dose. A possible anatomical correlate in living humans, however provisional, is new.</p>
<p>The DTI changes also correlated with shifts in brain network modularity, the degree to which the brain&#8217;s functional regions sort themselves into distinct, segregated clusters. After the high dose, modularity tended to decrease, meaning the brain&#8217;s networks became somewhat more globally integrated, less siloed. Participants whose modularity fell the most tended to show the greatest improvements in well-being. This same relationship has appeared in previous psilocybin trials targeting depression, though in those studies the effect was more robust; the researchers suggest that healthy brains, lacking the extreme network rigidity seen in depression, may simply have less room to shift.</p>
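<p>Modularity has a precise definition, Newman&#8217;s Q: roughly, the fraction of connections falling within communities minus the fraction expected by chance. A minimal sketch on a made-up six-node graph (a toy of ours, not the study&#8217;s connectivity data) shows why a siloed arrangement scores high and a fully merged one scores zero:</p>

```python
def modularity(adj, communities):
    """Newman's Q for an undirected graph given as {node: set(neighbours)}
    and a partition given as a list of node sets."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2   # number of edges
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    label = {v: k for k, com in enumerate(communities) for v in com}
    q = 0.0
    for i in adj:
        for j in adj:
            if label[i] == label[j]:
                # actual edge minus the chance expectation for this pair
                q += (1 if j in adj[i] else 0) - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single edge: a highly "siloed" toy network.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
siloed = modularity(adj, [{0, 1, 2}, {3, 4, 5}])   # ~0.357
merged = modularity(adj, [{0, 1, 2, 3, 4, 5}])     # exactly 0.0
```

<p>In the study&#8217;s terms, psilocybin nudged brains away from the first, high-Q kind of arrangement and toward the second, more globally integrated one.</p>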
<h2>What the Trip Is Actually For</h2>
<p>Perhaps the most clinically consequential part of the analysis is the mediation model. The team tested whether the link between acute brain entropy and one-month well-being ran directly, or whether it was carried through psychological insight. Inserting insight as a mediating variable strengthened the model considerably. In plain terms: the psychedelic experience produces a specific quality of brain activity; that activity fosters self-reflection; and the self-reflection drives lasting change. The experience itself, not merely the pharmacology, appears to matter.</p>
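<p>The structure of that mediation test can be illustrated with simulated numbers (ours alone; the study fit its own formal mediation models to real data). Regress well-being on entropy for the total effect, then add insight as a second predictor: the effect routed through insight is the product of the two legs of the chain, and for ordinary least squares the pieces add up exactly.</p>

```python
import numpy as np

rng = np.random.default_rng(7)
n = 28  # same order as the study's sample; the numbers are simulated

entropy = rng.normal(size=n)                                 # acute EEG entropy
insight = 0.8 * entropy + rng.normal(scale=0.3, size=n)      # next-day insight
wellbeing = 0.7 * insight + rng.normal(scale=0.3, size=n)    # one-month well-being

total = np.polyfit(entropy, wellbeing, 1)[0]       # c: entropy -> well-being
a = np.polyfit(entropy, insight, 1)[0]             # a: entropy -> insight
X = np.column_stack([entropy, insight, np.ones(n)])
direct, b = np.linalg.lstsq(X, wellbeing, rcond=None)[0][:2]
indirect = a * b                                   # effect carried via insight
# OLS identity: total == direct + indirect, so the split can be read off
```

<p>The larger the indirect term relative to the direct one, the more the data favour a story in which the insight, not the molecule alone, carries the benefit.</p>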
<p>&#8220;Psilocybin seems to loosen up stereotyped patterns of brain activity and give people the ability to revise entrenched patterns of thought,&#8221; said Taylor Lyons, the paper&#8217;s first author. &#8220;The fact that these changes track with insight and improved well-being is especially exciting.&#8221;</p>
<p>For the emerging field of psychedelic medicine, this is arguably reassuring. It means that the therapeutic effect is not simply a molecular one that could be replicated by a drug lacking the subjective experience; it&#8217;s bound up with what happens to a person&#8217;s mind during the hours the compound is active. The implication, which the team is careful not to overstate, is that optimising the conditions for insight during a session could improve outcomes, and that the EEG signature of brain entropy might eventually serve as a real-time indicator of whether the dose and setting are producing the kind of neural state associated with therapeutic benefit. There is, of course, still a long way to go. But the chain of evidence, from entropy to insight to well-being to possible anatomical change, is getting harder to dismiss as coincidence.</p>
<p>Source: <a href="https://doi.org/10.1038/s41467-026-71962-3">Lyons et al., <em>Nature Communications</em> (2026). doi:10.1038/s41467-026-71962-3</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Does the psychedelic experience itself matter for psilocybin&#8217;s therapeutic effects, or is it just the drug?</strong></p>
<p>The experience appears to matter considerably. This study found that the degree of brain entropy during the trip predicted how much psychological insight people felt the next day, and that insight, in turn, predicted well-being improvements a month later. Stripping out the subjective experience from the pharmacology may therefore reduce therapeutic benefit, which has significant implications for how psilocybin-based treatments are designed and administered.</p>
<p><strong>Is psilocybin actually changing the physical structure of the brain?</strong></p>
<p>Possibly, though with important caveats. One month after a high dose, participants showed changes in white matter tracts connecting the prefrontal cortex to deeper brain structures, a pattern not seen after the placebo. The researchers caution that the specific biological cause of this change is unclear and that the finding needs replication in larger studies before any firm conclusions can be drawn. Similar patterns have appeared after meditation and intensive learning, so the change, if real, is not necessarily alarming.</p>
<p><strong>Could measuring brain entropy during a session help doctors predict who will benefit?</strong></p>
<p>That is one of the study&#8217;s more intriguing implications. Participants with the largest spikes in brain entropy were also the most likely to report insight and improved well-being weeks later. If that relationship holds in clinical populations, EEG-measured entropy could eventually function as a real-time indicator during therapy sessions, helping clinicians assess whether conditions are right for a beneficial experience. The idea remains speculative for now, but it points toward a more personalised approach to psychedelic medicine.</p>
<p><strong>Why did the researchers use healthy volunteers rather than people with depression or anxiety?</strong></p>
<p>Working with healthy participants gave the team greater freedom to conduct intensive imaging and testing without the ethical constraints that come with treating a vulnerable clinical population. It also allowed them to establish a cleaner baseline and rule out effects driven by the illness itself rather than the drug. The tradeoff is that brain changes tended to be more modest than in previous psilocybin-for-depression trials, which the researchers suggest may reflect the fact that healthy brains have less rigid network organisation to begin with.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/a-single-dose-of-psilocybin-leaves-lasting-marks-on-the-human-brain/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575979</post-id>	</item>
		<item>
		<title>Sensitive Skin Syndrome and Rosacea Are Not the Same Condition, and the Difference Is Written in Your Proteins</title>
		<link>https://scienceblog.com/sensitive-skin-syndrome-and-rosacea-are-not-the-same-condition-and-the-difference-is-written-in-your-proteins/</link>
					<comments>https://scienceblog.com/sensitive-skin-syndrome-and-rosacea-are-not-the-same-condition-and-the-difference-is-written-in-your-proteins/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 12:01:00 +0000</pubDate>
				<category><![CDATA[Health]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575976</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Your skin burns when you splash cold water on your face. It stings after the first sip of wine. It flushes in the sun, tightens in the wind, aches inexplicably after stress. For years, doctors have looked at that constellation of symptoms and reached for a familiar diagnosis: rosacea, or something near enough to it that the distinction barely seemed worth making.</p>
<p>That assumption, it turns out, has probably sent a lot of patients down the wrong treatment path.</p>
<p>A new study from George Washington University, published in the Journal of the American Academy of Dermatology, has done something surprisingly rare in dermatology: it has looked directly at the biology of so-called sensitive skin syndrome and found that it operates through entirely different mechanisms than rosacea. Not a milder version of the same disease. Not a precursor. Something genuinely separate, with a molecular signature all its own: defined not by an excess of inflammatory proteins but by a shortage of the very ones rosacea has in abundance.</p>
<p>The overlap in symptoms is real. Both conditions produce facial redness, burning, stinging, and reactivity to external triggers. Both flare with UV exposure, with temperature shifts, with stress.</p>
<h2>What Rosacea Actually Does to the Skin</h2>
<p>Rosacea&#8217;s biology, though, is well-mapped. The condition is associated with overgrowth of <em>Demodex folliculorum</em>, a microscopic mite that lives in the hair follicles of most human faces in small numbers. In rosacea, Demodex populations swell, and the skin&#8217;s immune response follows: antimicrobial peptides flood the tissue, specifically cathelicidin and dermcidin, proteins that drive inflammation, promote blood vessel growth, and keep the immune system in a state of chronic low-level alarm. The working assumption had long been that sensitive skin syndrome involved some version of the same process, perhaps just dialled down slightly.</p>
<p>The GW team decided to test that directly. Thirty women between the ages of 30 and 50 were recruited, half with sensitive skin syndrome and half without. The researchers used reflectance confocal microscopy, an imaging technique that produces high-resolution cross-sections of living skin without a biopsy, to look for Demodex mites in the follicles of each participant&#8217;s cheek. They then swabbed the skin and used mass spectrometry to measure the actual concentrations of cathelicidin and dermcidin circulating at the skin surface.</p>
<p>The mite counts came back identical. Twenty percent of participants in both the sensitive skin group and the control group had Demodex present, a proportion statistically indistinguishable between the two groups. The peptide findings were more striking still: cathelicidin levels were significantly lower in the sensitive skin group than in controls, not elevated. Dermcidin showed the same pattern, running at roughly half the concentration seen in ordinary skin. The inflammatory machinery so central to rosacea was, in these patients, quieter than normal.</p>
<p>Not overactive. Quieter.</p>
<p>&#8220;These findings further support our ongoing work that sensitive skin syndrome is a unique skin condition, not simply a milder form of rosacea,&#8221; said Adam Friedman, professor and chair of dermatology at GW and senior author of the study.</p>
<h2>A Different Kind of Skin Problem Entirely</h2>
<p>The suppressed peptide levels hint at a different underlying problem. Sensitive skin syndrome is increasingly understood to involve a compromised skin barrier and dysregulated neurosensory signalling: nerves that fire too easily, a stratum corneum that fails to buffer properly against the world outside. It is a condition of hypersensitivity without hyperinflammation. The skin feels too much, but the proteins typically marshalled to fight infection and regulate immune tone are paradoxically depleted. Why they should be reduced remains an open question. It may be that the barrier dysfunction characteristic of the condition changes the local protein environment, or that chronic low-grade neural signalling suppresses certain immune pathways rather than activating them. The mechanisms are still being unpicked.</p>
<p>What is clearer is what this means in the clinic. Many rosacea treatments target precisely the peptide pathways and mite burden that this study suggests are not relevant to sensitive skin syndrome. Patients who have been prescribed those treatments without improvement may simply have been diagnosed with the wrong condition entirely. &#8220;This distinction matters because it can help clinicians avoid treatments that may not benefit sensitive skin patients and instead focus on over the counter and prescription therapies better aligned with the biology of the condition,&#8221; Friedman said.</p>
<p>The study is small, thirty participants, and the authors are candid about its limitations: participants were not standardised for skincare product use, and pilot data of this kind requires replication at scale before it can change clinical guidelines. But the direction of travel is significant. For the millions of people who have spent years being misunderstood by their own skin, the prospect of a diagnosis with its own biological logic, and eventually its own targeted treatments, is rather a different kind of news.</p>
<p>Source: <a href="https://doi.org/10.1016/j.jaad.2026.04.1986">Menta et al., <em>Journal of the American Academy of Dermatology</em> (2026). doi:10.1016/j.jaad.2026.04.1986</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>If sensitive skin syndrome and rosacea feel the same, why does the biological difference matter?</strong></p>
<p>Because the biology determines what treatments actually work. Rosacea is driven by mite overgrowth and overactive immune proteins, so drugs targeting those mechanisms make sense for rosacea patients. For sensitive skin syndrome, the same proteins are actually lower than normal, which means those treatments are unlikely to help and could potentially worsen symptoms by targeting pathways that are already suppressed.</p>
<p><strong>Is sensitive skin syndrome a real medical condition or just a description of symptoms?</strong></p>
<p>That question is precisely what researchers are trying to settle. For a long time, sensitive skin was treated as a vague symptom cluster rather than a diagnosis in its own right. Growing evidence, including this study&#8217;s finding of a distinct molecular profile, is pushing the field toward recognising it as a standalone condition with its own pathophysiology, though larger studies are still needed to cement that classification.</p>
<p><strong>Could someone have both sensitive skin syndrome and rosacea at the same time?</strong></p>
<p>The study does not directly address co-occurrence, but the findings suggest the two conditions have largely separate biological drivers. Whether they can overlap in a single patient, and how clinicians would distinguish or treat such a case, is an open question that this research does not yet answer. It is one reason the authors call for larger studies with more detailed patient profiling.</p>
<p><strong>Why were Demodex mites found on some people with sensitive skin if mites are not driving the condition?</strong></p>
<p>Demodex mites live on virtually everyone&#8217;s skin in low numbers and are generally considered harmless commensals. The key finding was not that sensitive skin patients had zero mites, but that their mite levels were no higher than those of people with ordinary skin. In rosacea, mite density is markedly elevated; in sensitive skin syndrome, it appears to be entirely typical, which rules out mite overgrowth as a causal factor.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/sensitive-skin-syndrome-and-rosacea-are-not-the-same-condition-and-the-difference-is-written-in-your-proteins/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575976</post-id>	</item>
		<item>
		<title>Sticky Tape Stores Memories Like a Combination Lock, Without Electricity</title>
		<link>https://scienceblog.com/sticky-tape-stores-memories-like-a-combination-lock-without-electricity/</link>
					<comments>https://scienceblog.com/sticky-tape-stores-memories-like-a-combination-lock-without-electricity/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:55:46 +0000</pubDate>
				<category><![CDATA[Physics & Mathematics]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575973</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Pull a strip of ordinary Scotch tape partway off a surface and set it back down. Nothing remarkable seems to happen. The tape lies flat, a little crinkled perhaps, the adhesive re-bonding to whatever substrate you peeled it from. But look closely, at the molecular level of the adhesive layer, and something has been written there: a record. A memory of exactly how far you pulled.</p>
<p>Peel it again, a shorter distance this time, and a second memory joins the first. Do it a third time, shorter still. The tape now holds three distinct records, nested inside each other like Russian dolls, readable in sequence by peeling the tape back past each stored stopping point and measuring the small spikes in the force required. Nathan Keim&#8217;s lab at Penn State has spent the last few years working out why.</p>
<h2>What the Tape Remembers</h2>
<p>The physics of what&#8217;s happening is, in principle, simple enough. &#8220;Ordinary tape is pressure sensitive,&#8221; says Sebanti Chattopadhyay, a postdoctoral scholar in physics and first author on the paper, published in the New Journal of Physics. &#8220;The harder you press it down, the more firmly it adheres to a surface.&#8221; When you peel the tape partway, the mechanics of peeling itself create a zone of intense compression just ahead of the peeling front, a consequence of the elastic stiffness of the tape&#8217;s plastic backing applying a torque to the adhesive layer as it lifts away. That compressed zone fuses the tape to the substrate more strongly than anywhere else. When you lay the tape back down, the line persists. It&#8217;s a physical imprint of the turning point, the place where the peel reversed.</p>
<p>Reading the memories back requires nothing more than peeling past each line in turn. &#8220;We found that peeling the tape partway results in a line of strong adhesion at the stopping point that remains when you lay the tape back down,&#8221; Chattopadhyay says. Each line, encountered during readout, registers as a spike that roughly doubles the force needed to continue peeling. The lines show up in reverse order: last written, first read, an architecture that turns out to matter quite a lot for what the tape can actually compute.</p>
<p>But the feature that makes this discovery genuinely unusual, at least within the physics of material memory, is the direction of the driving. &#8220;Many materials or systems have a property called return-point memory that allows them to remember a sequence of events,&#8221; says Keim, an associate professor of physics. Combination locks work this way; so do ferromagnets, sandstone, and a surprisingly wide range of disordered materials. In all of them, memory formation relies on the input alternating back and forth, the dial turning clockwise then counterclockwise, the magnetic field cycling between poles. &#8220;We were interested if there was a system that could demonstrate this ability to remember a series of events without alternating the input,&#8221; Keim says. Tape can. The peeling is unidirectional; the tape only lifts, never pushes. That means tape operates on a fundamentally different principle from every other well-studied memory-forming material, and Keim&#8217;s team has proposed a new theoretical framework (they call it &#8220;latching&#8221;) to describe it.</p>
<h2>Tunable, Erasable, Possibly Useful</h2>
<p>&#8220;We found that we could store the sequence of multiple memories with a single-directional input in ordinary adhesive tape,&#8221; Keim says. &#8220;And not only that, but that the strength of the memories is tunable,&#8221; he says, &#8220;meaning we can adjust how strong the memories are, and they can be erased to reset the system.&#8221;</p>
<p>The tuning is worth dwelling on, because most memory-forming systems in physics don&#8217;t offer it. Hold the tape taut at its turning point for around a hundred seconds before laying it back down, and the memory becomes significantly stronger (the compressed adhesive zone has more time to bond). Strong enough, in some cases, to survive being read out, leaving a ghost of itself in the tape&#8217;s adhesive layer that persists across multiple peeling cycles. Change the substrate from the tape&#8217;s own backing to smooth acrylic, and the memory is stronger still. &#8220;Peeling past the lines erases them and resets the system,&#8221; Chattopadhyay says, &#8220;but we can also tune the strength of the memories, making them require different amounts of force to peel past, which means that each line could represent different information. We can even make some strong enough to persist after resetting the system.&#8221;</p>
<p>This is, more or less, the definition of a writable, readable, and partially erasable data storage medium. A mechanical one, requiring no power source, no silicon, no lithography. Keim is at pains to be realistic about the implications. &#8220;We don&#8217;t expect that these devices will be made with adhesive tape,&#8221; he says, &#8220;but we are driven by a desire to understand the fundamental science underlying the various types of memories that materials can form and how they might apply in future systems.&#8221; The motivation, as he frames it, is foundational: work out what physical principles allow materials to store information at all, and the applications may eventually suggest themselves.</p>
<h2>Computing Without a Circuit</h2>
<p>There&#8217;s already a hint of what those applications might look like. Because the last memory formed in the tape is always the first one encountered during readout, the tape can, in effect, compare any new input to the one immediately preceding it. If you peel to a greater distance than last time, the tape reads out that previous memory during the encoding of the new one, providing a kind of instant comparison. If the new input is smaller, the tape simply writes a new memory without disturbing the old one. &#8220;This fact allows a simple type of mechanical computation,&#8221; Keim says. &#8220;It&#8217;s similar to a test used for working memory in neuroscience, called a one-back comparison.&#8221; In that test, subjects are shown a stream of stimuli and asked to compare each one to the previous item in the sequence, a task considered a basic index of working memory function. Tape does something functionally equivalent. Mechanically. Passively.</p>
<p>The analogy is more than just decorative. A motile organism searching for the direction of a chemical gradient, for instance, could in principle use this kind of one-back comparison to determine whether the concentration it&#8217;s currently sensing is higher or lower than the last sample it took, without needing a continuous record of time. Whether soft-matter systems in nature already exploit something like this principle is an open question.</p>
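<p>The tape&#8217;s last-written, first-read ordering behaves like a stack, and the one-back comparison falls out of that structure. The sketch below is a toy illustration only, not the authors&#8217; &#8220;latching&#8221; formalism; the class name, method, and peel distances are all invented:</p>

```python
class TapeMemory:
    """Toy model: adhesion lines stack up; peeling past a line reads and erases it."""

    def __init__(self):
        self._lines = []  # stored stopping points, deepest (oldest) peel first

    def peel_to(self, distance):
        """Peel out to `distance`: report the stored lines crossed (the force
        spikes), then latch a new adhesion line at the turning point."""
        read_out = []
        # Last written is first encountered, so pop from the end of the stack.
        while self._lines and self._lines[-1] < distance:
            read_out.append(self._lines.pop())  # crossing a line erases it
        self._lines.append(distance)
        return read_out

tape = TapeMemory()
tape.peel_to(30)         # write first memory (longest peel)
tape.peel_to(20)         # shorter peel nests a second memory inside it
tape.peel_to(10)         # shorter still: three nested records
print(tape.peel_to(25))  # prints [10, 20]: peeling past the 10 and 20 lines
                         # reads them out newest-first; the 30 line survives
```

Peeling to a distance greater than the previous stopping point reads that previous memory out during the new write, which is the one-back comparison; peeling to a shorter distance writes a new record without disturbing the old ones.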
<p>&#8220;There has long been an interest in developing devices that don&#8217;t need electricity and don&#8217;t have the same vulnerabilities as electronic computers,&#8221; Keim says. Mechanical computers predate their electronic successors by centuries, and the idea of building computation into materials rather than circuits has seen renewed interest as researchers explore robotics, soft actuators, and sensing in environments where conventional electronics are impractical. What tape offers, for now, is a model system: unusually simple, unusually transparent in its physics, and amenable to the kind of systematic study that lets you vary one parameter at a time and watch what changes. The physics of pressure-sensitive adhesives has been studied for decades in the context of fracture mechanics. The memory behavior, it turns out, was hiding in plain sight on laboratory benches worldwide. &#8220;As this understanding grows,&#8221; Keim says, &#8220;we may find ways to use it that we can&#8217;t yet imagine.&#8221;</p>
<p>Source: <a href="https://doi.org/10.1088/1367-2630/ae4acc">Chattopadhyay et al., <em>New Journal of Physics</em>, 2026. doi:10.1088/1367-2630/ae4acc</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Is this actually a new discovery, or have people known tape has memory for a while?</strong></p>
<p>The adhesive mechanics of tape have been studied for decades, but the specific memory behavior described here (multiple storable, tunable, erasable records encoded by unidirectional peeling) was previously overlooked. What&#8217;s new isn&#8217;t the tape; it&#8217;s the recognition that its peeling physics constitute a distinct class of material memory, operating on a different principle from everything else studied in the field.</p>
<p><strong>Could materials like this eventually replace electronic memory in some applications?</strong></p>
<p>Not tape itself, but the underlying principle could inform the design of mechanical memory systems that function without electricity. There&#8217;s genuine long-term interest in computation that doesn&#8217;t rely on silicon, particularly for soft robotics or sensing in environments where conventional electronics fail. The tape work establishes that purely mechanical systems can store and compare sequences of inputs, which is a foundational capability.</p>
<p><strong>How does the tape actually &#8220;read&#8221; its own memories?</strong></p>
<p>By measuring the force required to continue peeling. Each stored memory corresponds to a line of unusually strong adhesion in the tape, and when the peeling front crosses that line during readout, the force spikes noticeably, roughly doubling compared to baseline. The memories are read in reverse order of how they were written, which turns out to be what enables the one-back comparison computation.</p>
<p><strong>What&#8217;s the limit on how many memories tape can store at once?</strong></p>
<p>In principle, the capacity scales with the physical length of the tape, since each memory occupies a spatial region roughly a millimeter wide. The memories need to be separated enough that neighboring adhesion lines don&#8217;t interfere with each other. The researchers demonstrated reliable storage and retrieval of multiple memories in their experiments, but the theoretical upper limit for a given tape length remains a question for future work.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/sticky-tape-stores-memories-like-a-combination-lock-without-electricity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575973</post-id>	</item>
		<item>
		<title>Life Built Its Biochemistry on a Metal That Was Almost Nowhere to Be Found</title>
		<link>https://scienceblog.com/life-built-its-biochemistry-on-a-metal-that-was-almost-nowhere-to-be-found/</link>
					<comments>https://scienceblog.com/life-built-its-biochemistry-on-a-metal-that-was-almost-nowhere-to-be-found/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:54:00 +0000</pubDate>
				<category><![CDATA[Life & Non-humans]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575971</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Molybdenum is a hard, silvery metal most people encounter only as an additive in steel alloys. Biologically, it does something remarkable: slotted into the active sites of enzymes, it bends the rules of what chemistry can do at body temperature, enabling reactions that fix nitrogen from the air, cycle sulfur through ocean water, and shuffle carbon between molecules at rates that, without it, would simply be too slow to sustain anything alive. Nearly every organism on Earth depends on it. Which makes the central puzzle of a new study in <em>Nature Communications</em> all the more striking: for most of life&#8217;s history, the metal was almost completely absent from the oceans.</p>
<p>Researchers at the University of Wisconsin-Madison have now traced molybdenum&#8217;s biological footprint back 3.4 billion years, to a time when the world&#8217;s seas held perhaps one-twenty-thousandth as much of it as they do today. The finding upends a tidy assumption in origins-of-life research, namely that early biochemistry was shaped primarily by what was plentiful, and opens a more unsettling possibility: that life does not merely adapt to its environment but latches on to whatever works, regardless of cost.</p>
<h2>A Paradox Written in Ancient Rock</h2>
<p>The case against molybdenum being a foundational element is, at first glance, compelling. The geochemical record preserved in black shales tells us that before about 2.45 billion years ago, there was essentially no oxygen in Earth&#8217;s atmosphere. That matters because molybdenum gets into seawater mainly through the oxidative weathering of sulfide minerals on land. No oxygen means almost no weathering, which means almost no dissolved molybdenum, which means ocean concentrations that modern biochemists would consider vanishingly trace. Aya Klos, a PhD student in bacteriology at UW-Madison and lead author of the paper, puts the paradox plainly. &#8220;What is kind of counterintuitive is that, according to the geochemical record, molybdenum abundance on the early Earth seems to have been a lot lower billions of years ago, particularly before the advent of oxygenic photosynthesis.&#8221; Yet for some reason, the enzymes built around it were already proliferating.</p>
<p>To pin down when, the team did something ambitious: they reconstructed the evolutionary history of more than 100 protein families involved in molybdenum and tungsten uptake, transport, cofactor biosynthesis, and catalysis across 1,609 genomes spanning the full sweep of known life. They then reconciled those family trees against independently dated species trees, using three different molecular clock models to bracket the uncertainty. The picture that emerged pushed the origin of molybdenum-dependent biochemistry back to somewhere between 3.7 and 3.1 billion years ago (the Eoarchean and Mesoarchean), geologically ancient territory.</p>
<p>Some of the oldest signals belong to the enzyme families that perform the widest range of jobs. The DMSOR and XO families, which together shuffle electrons through carbon, nitrogen, and sulfur chemistry, show gene events dating to that same deep window. The biosynthetic pathway that builds the molybdenum cofactor (the molecular cradle that holds the metal inside an enzyme) appears fully assembled by the Mesoarchean, around 3.1 billion years ago. A complete biochemical toolkit for molybdenum, operational well before oxygen changed everything.</p>
<h2>Tungsten as a Parallel Experiment</h2>
<p>Running alongside that story is a quieter one about tungsten, a metal that shares just enough chemistry with molybdenum to substitute for it in some enzymes, yet tends today to appear mainly in organisms living at extreme temperatures. The study suggests that early life was not betting solely on molybdenum; it was, in effect, running parallel experiments. Tungsten-specific transport systems appear to be at least as ancient as their molybdenum equivalents, and in strictly anaerobic environments, where oxygen never reaches, tungsten-based enzymes still dominate. The two metals may have carved out complementary niches from very early on, with tungsten handling lower-potential redox reactions under hot, anoxic conditions and molybdenum taking on the broader catalytic territory.</p>
<p>That the two metals were already being distinguished, transported, and incorporated into separate enzyme architectures billions of years before the atmosphere oxygenated suggests something about early biochemistry&#8217;s sophistication. These were not accidental associations, metal ions blundering into proteins by chance. They required dedicated uptake machinery, elaborate cofactor assembly lines, and enzymes tuned to exploit each metal&#8217;s particular electronic properties. Building all that under conditions of severe metal scarcity implies a selective pressure strong enough to sustain the investment.</p>
<p>One candidate source for what little molybdenum did exist: submarine hydrothermal vents, which can release dissolved molybdenum and molybdenum sorbed onto iron sulfide particles into surrounding seawater. The concentrations would have been local and probably patchy, but for microbial communities clustered around those vents, enough to make the biochemistry worth pursuing. After the Great Oxidation Event, when riverine input took over from vents as the dominant molybdenum delivery mechanism, the metal&#8217;s ocean concentration climbed orders of magnitude and the molecular record shows a corresponding burst of new molybdoenzyme diversity, as if the biochemical infrastructure that had been waiting was finally running at full capacity.</p>
<h2>Rethinking What Life Requires</h2>
<p>There is a longstanding and not unreasonable assumption in astrobiology that the elements life depends on should be abundant where life arose. Finding that something as central as molybdenum was essentially a trace contaminant during the critical window of early biochemical evolution complicates that picture considerably. Betül Kaçar, professor of bacteriology at UW-Madison and the paper&#8217;s senior author, draws out the implication directly. &#8220;This study shows that just because an element is scarce in the environment doesn&#8217;t mean life will not find a way to use it and even build an empire with it&#8230; Life works in surprising ways. Discoveries like this remind us that the search for life beyond Earth may require us to imagine possibilities we haven&#8217;t yet considered.&#8221;</p>
<p>The team notes some important caveats. Gene-tree-species-tree reconciliation methods carry inherent uncertainty; ancient gene events can be placed anywhere along a long branch, which means dates come with substantial error bars. New lineages and sequences, particularly from undercharacterized archaeal branches, may shift the picture. And there is always the possibility that ancient molybdoenzymes had structures or functions that no longer exist, making them difficult to identify through comparisons with modern proteins alone.</p>
<p>The researchers plan to continue examining how molybdenum actually moves through cells, tracking its intracellular trafficking to understand why life keeps reinvesting in metal-dependent chemistry even when metal supply is uncertain. Separately, they are interested in the peculiar late appearance of molybdenum storage proteins, which only show up in the fossil-gene record after the Great Oxidation Event. If the metal was scarce before oxygenation, why did organisms only evolve dedicated storage mechanisms once it became more available? Competition, perhaps: as more lineages gained access to higher molybdenum concentrations, the selective advantage of hoarding it may have intensified.</p>
<p>What all of this points toward is a picture of early biochemistry that was, in some ways, more audacious than we had assumed. Life did not simply make do with what was lying around in abundance. It reached for metals that were hard to find, built elaborate molecular infrastructure to capture and use them, and passed that infrastructure down through billions of years of subsequent evolution. The molybdenum in the enzyme that fixed the nitrogen in your breakfast this morning has an ancestry stretching back to anoxic oceans on a world without breathable air, built on scarcity, sustained by utility.</p>
<p><strong>Source:</strong> Klos AS, Sobol MS, Boden JS, et al. Biological use of molybdenum and tungsten stems back to 3.4 billion years ago. <em>Nature Communications</em> 17, 3943 (2026). <a href="https://doi.org/10.1038/s41467-026-72133-0">https://doi.org/10.1038/s41467-026-72133-0</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why would early life evolve to depend on a metal that was almost nonexistent in the oceans?</strong></p>
<p>Molybdenum&#8217;s chemical versatility appears to have made it worth the difficulty of obtaining it. When incorporated into enzymes, it can catalyze reactions spanning an unusually wide range of conditions and substrates, particularly the kinds of carbon, nitrogen, and sulfur cycling reactions that early life urgently needed. The researchers suggest that molybdenum&#8217;s redox flexibility gave organisms a selective advantage significant enough to justify building elaborate molecular machinery to capture even trace amounts of it, possibly from hydrothermal vents. The rest of life&#8217;s biochemistry then inherited that investment.</p>
<p><strong>Is tungsten just a backup for when molybdenum runs out?</strong></p>
<p>Not quite — the two metals appear to have had distinct, complementary roles from the very beginning. Tungsten-dependent enzymes tend to operate at lower redox potentials and perform better at high temperatures, which is why they still dominate in thermophilic archaea living in extreme environments today. The new study suggests that ancient life was essentially running parallel experiments with both metals, not simply treating one as a substitute for the other. Whether tungsten preceded molybdenum in the earliest organisms, or whether the two were adopted simultaneously, remains an open question.</p>
<p><strong>Does this change how scientists think about finding life on other planets?</strong></p>
<p>It should prompt some recalibration. A common assumption in astrobiology is that life on other worlds would require abundances of the same elements life here uses. This study suggests that life can build sophisticated, long-lasting biochemical dependencies on elements that are nearly absent from its environment. A planet with very little molybdenum in its oceans is not, on that basis alone, ruled out as a potential home for complex biochemistry. The search for life elsewhere may require broader thinking about which elements count as prerequisites and which are merely convenient.</p>
<p><strong>How do researchers actually trace which metals life was using 3.4 billion years ago?</strong></p>
<p>The team used a technique called gene-tree-species-tree reconciliation. By mapping the evolutionary relationships of more than 100 protein families involved in molybdenum and tungsten metabolism across 1,609 modern genomes, then matching those family trees against independently dated species trees, they could estimate when various genes first appeared. Multiple molecular clock models were used to bracket the uncertainty, since there is no single agreed method for deep-time dating. The results consistently pointed to the Eoarchean and Mesoarchean, between roughly 3.7 and 3.1 billion years ago, as when the core molybdenum biochemical toolkit was assembled.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/life-built-its-biochemistry-on-a-metal-that-was-almost-nowhere-to-be-found/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575971</post-id>	</item>
		<item>
		<title>Rooftop Solar Is Overwhelming Local Grids. Community Batteries Could Fix That</title>
		<link>https://scienceblog.com/rooftop-solar-is-overwhelming-local-grids-community-batteries-could-fix-that/</link>
					<comments>https://scienceblog.com/rooftop-solar-is-overwhelming-local-grids-community-batteries-could-fix-that/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:49:00 +0000</pubDate>
				<category><![CDATA[Earth, Energy & Environment]]></category>
		<category><![CDATA[Social Sciences]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575969</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Somewhere in Victoria, Australia, a neighborhood wakes up, plugs in its electric vehicles, and the local grid quietly buckles. No blackout, nothing dramatic. Just a slow voltage sag that engineers call undervoltage, the kind of thing that stresses transformers and shortens equipment lifetimes rather than making headlines. By mid-morning the same street&#8217;s solar panels are pumping excess power back into the grid, and the problem flips: now there&#8217;s too much voltage. The grid that was supposed to benefit from all this clean energy is, in a technical sense, struggling to cope with it.</p>
<p>This is the paradox sitting at the heart of the renewable transition. The technologies we&#8217;ve deployed to fix the electricity system are, in certain respects, making parts of it more fragile.</p>
<p>A new study from Deakin University, published this month in <em>IET Renewable Power Generation</em>, has tried to quantify exactly what high concentrations of rooftop solar and electric vehicles do to the low-voltage distribution networks that serve homes and businesses. These are the final-stage lines, the ones running from neighborhood transformers to your meter box, that most grid planning has historically treated as an afterthought. They weren&#8217;t designed for two-way power flows. They weren&#8217;t designed for fleets of EVs all charging at midnight. And they weren&#8217;t, frankly, designed for the pace at which households have been adopting both.</p>
<h2>A Grid Built for One Direction</h2>
<p>The researchers modeled a real distribution network in Victoria and ran it through scenarios of increasing solar and EV penetration, watching what happened to voltages across the day. The pattern that emerged was almost clockwork in its regularity. Midday: solar generation peaks, more power flows back up the line than the network was engineered to handle, and voltages rise beyond acceptable limits. Midnight: the solar has switched off, but the EVs are charging, demand spikes, and voltages drop. Rinse and repeat.</p>
<p>The study assessed several ways to address this. You can curtail the solar output when the grid gets stressed, essentially telling panels to stop generating even when the sun is shining. You can install smart inverters capable of managing reactive power, which helps regulate voltage without actually storing anything. Neither is particularly satisfying. Curtailment wastes clean energy. Smart inverters help at the margins but can&#8217;t compensate for the scale of the mismatch.</p>
<p>Which leaves battery storage. The team looked at batteries at two levels: individual household installations (the kind of thing you might buy to pair with your rooftop solar) and community-scale systems that serve multiple homes from a single, larger unit. Both worked. But community-scale storage turned out to be roughly 52% more cost-effective than the household approach, a gap significant enough to carry real policy weight. &#8220;Cleaner energy brings new grid challenges, making coordinated storage essential for voltage stability,&#8221; said Khalil Gholami, who led the research.</p>
<p>The reason for that cost gap isn&#8217;t hard to follow. A battery serving a single house has to be sized for that house&#8217;s worst case: the night it&#8217;s coldest, the EV is charging, and no solar has been generated for two days. A community battery serving 50 houses doesn&#8217;t need to prepare for all 50 worst cases simultaneously; it can smooth across the variation of its users, storing and releasing as the aggregate demand requires. It&#8217;s the insurance-pool logic applied to kilowatt-hours.</p>
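<p>That pooling argument can be checked with a back-of-envelope simulation. The sketch below uses made-up nightly demand figures, not the study&#8217;s network model; the 52% figure comes from the paper, not from this toy:</p>

```python
import random

random.seed(1)
N_HOMES, N_NIGHTS = 50, 365

# Hypothetical nightly shortfall (kWh) each home would need a battery to cover.
demand = [[random.uniform(0, 10) for _ in range(N_NIGHTS)] for _ in range(N_HOMES)]

# Individual sizing: every home buys enough capacity for its own worst night.
individual_total = sum(max(nights) for nights in demand)

# Community sizing: one shared battery covers the worst *aggregate* night.
aggregate = [sum(demand[h][n] for h in range(N_HOMES)) for n in range(N_NIGHTS)]
community_total = max(aggregate)

print(f"individual: {individual_total:.0f} kWh, community: {community_total:.0f} kWh")
```

Because the fifty simulated homes almost never all hit their worst night at once, the aggregate peak comes in well below the sum of the individual peaks, which is the insurance-pool logic in miniature.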
<h2>The Aggregation Advantage</h2>
<p>There&#8217;s something worth sitting with here. Batteries have generally been sold to households as individual purchases, a kind of personal energy resilience product you bolt to your garage wall. The Deakin study suggests that framing might be economically backward, at least when you&#8217;re thinking about grid health rather than individual energy bills. The community-scale approach requires coordination, probably some kind of shared ownership or utility involvement, and those things are administratively harder than simply selling people a box. But the numbers are fairly unambiguous about where the efficiency lies.</p>
<p>The study doesn&#8217;t resolve questions about who pays for community batteries, how they&#8217;re governed, or whether existing energy market rules even accommodate shared storage assets of this kind. Most regulatory frameworks weren&#8217;t written with this model in mind. In parts of Australia, the UK, and the US, rules around feed-in tariffs, network charges, and revenue stacking are still catching up to the reality of what distributed storage could do.</p>
<p>There are also limits to what any study of a single Victorian network can tell you. Different climates mean different solar profiles. Different car ownership patterns mean different overnight charging loads. A suburb with high apartment density and fewer rooftop panels faces a different version of this problem than a sprawling low-density area where almost everyone has solar. The broad finding, that community storage outperforms household storage on cost-effectiveness, is likely to hold across many settings, but the exact numbers will vary.</p>
<h2>What the Grid Actually Needs</h2>
<p>What the research does do is shift attention from generation to management. For most of the renewable transition, the central question has been how to build enough solar, wind, and other clean generation capacity to replace fossil fuels. That&#8217;s still crucial. But as penetration rises, a second question is becoming unavoidable: how do you operate a grid full of devices that all respond to the same weather, charge at the same time of night, and weren&#8217;t designed to cooperate? The answer, the Deakin team argues, involves storage systems sized and positioned at the community level, operating not for any single household&#8217;s benefit but for the grid&#8217;s collective stability.</p>
<p>In Victoria&#8217;s suburbs, the daily voltage cycle continues. Solar panels wake with the sun. EVs charge through the night. And somewhere between those two rhythms, there&#8217;s a gap that a well-placed battery, or perhaps a thousand of them, might eventually fill.</p>
<p>Source: <a href="https://doi.org/10.1049/rpg2.70244">Gholami et al., IET Renewable Power Generation, 2026. doi:10.1049/rpg2.70244</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why do solar panels cause voltage problems on the grid?</strong></p>
<p>When a large number of homes generate solar power simultaneously, the electricity flows back up distribution lines that were designed to carry power in only one direction. This reverse flow pushes voltages above safe operating limits, a condition called overvoltage. The more solar panels on a street or suburb, the more pronounced the effect during peak sunshine hours.</p>
<p><strong>Why is community-scale battery storage so much cheaper than individual household batteries?</strong></p>
<p>A community battery serves many homes at once and can smooth out variation in their energy use. Rather than sizing a battery for each household&#8217;s worst-case demand, a shared system handles the average across dozens or hundreds of users, who rarely all hit their peak demand at the same moment. This pooling effect means less total battery capacity is needed to deliver the same level of grid protection, cutting costs by around 52% compared with equivalent household installations, according to the Deakin University study.</p>
<p><strong>Can&#8217;t smart inverters solve the voltage problem without batteries?</strong></p>
<p>Smart inverters can help by managing reactive power, which provides some voltage regulation, but they can&#8217;t compensate fully for the scale of the mismatch between generation and demand in heavily electrified neighborhoods. They work best as a complement to storage, not a replacement for it.</p>
<p><strong>Does this research apply outside Australia?</strong></p>
<p>The specific findings come from a real distribution network in Victoria, so the exact figures won&#8217;t translate directly to every grid. However, the underlying dynamics (midday overvoltage from solar, overnight undervoltage from EV charging) are common to any low-voltage network with high renewable penetration. The conclusion that community-scale storage is more cost-effective than household storage is likely to hold broadly, though the size of the advantage will vary by location.</p>
<p><strong>What is stopping wider adoption of community battery systems?</strong></p>
<p>The main barriers are regulatory and commercial rather than technical. Energy market rules in many countries were written before shared storage assets existed, and questions about ownership, revenue sharing, and network charges haven&#8217;t been fully resolved. The technology works; the governance frameworks to deploy it at scale are still catching up.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/rooftop-solar-is-overwhelming-local-grids-community-batteries-could-fix-that/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575969</post-id>	</item>
		<item>
		<title>Honor, Shame, and a New Diagnosis for an Ancient Dread</title>
		<link>https://scienceblog.com/honor-shame-and-a-new-diagnosis-for-an-ancient-dread/</link>
					<comments>https://scienceblog.com/honor-shame-and-a-new-diagnosis-for-an-ancient-dread/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:47:31 +0000</pubDate>
				<category><![CDATA[Brain & Behavior]]></category>
		<category><![CDATA[Social Sciences]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575966</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Shame is something most of us think we understand. You do something wrong, or embarrassing, and the feeling arrives: a hot flush of self-consciousness, the impulse to look away. What psychology has been slower to reckon with is a different creature entirely. Not shame itself, but the anticipatory terror of it. The paralysing, chronic dread of being seen as shameless, of having your family&#8217;s honor shredded in the eyes of a community that will not forget. For hundreds of millions of people living inside collectivist cultures where honor functions as a kind of social currency, this fear is perhaps the most powerful force shaping daily life. And until now, there has been no clinical name for it, no validated tool to measure it, no way to study it systematically.</p>
<p>Waqar Husain, a psychologist at COMSATS University Islamabad, wants to change that. He and his colleagues have spent the past several years developing a concept they call atimiaphobia: an intense, fear-based psychological condition rooted in honor cultures and shame societies. The word is built from the Greek atimia, meaning dishonor or disgrace, and the Atimiaphobia Scale they have now validated in <em>PsyCh Journal</em> is the first instrument designed to measure this specific cluster of fears at the individual level.</p>
<h2>Honor as Architecture, Not Sentiment</h2>
<p>To understand what atimiaphobia actually is, it helps to understand what honor actually does in the cultures where it operates most powerfully. In much of South Asia, the Middle East, and parts of Africa, honor is not really a feeling. It is closer to a credit rating, held jointly by a family, visible to an entire community, and difficult to rebuild once damaged. This is not sentimentalism; it is the structural logic of societies where, historically, state protection was weak and reputation was the primary guarantor of safety and cooperation. The rules for maintaining honor differ sharply by gender. Women, in these frameworks, carry a heavier burden; their conduct, choices, and visibility are more tightly regulated, their lapses more publicly consequential.</p>
<p>What Husain&#8217;s team set out to capture was not honor itself, but the psychological weight of fearing its loss. That is a subtler thing. Atimiaphobia, as they conceptualize it, has four distinct dimensions: fear of being labeled shameless, fear of violating social norms, fear of public judgment, and fear of losing self-respect and honor. These are not interchangeable. Through higher-order factor analysis of responses from 1,232 participants in Islamabad, the researchers found that fear of losing self-respect and honor was the most psychologically central of the four, the one that seems to anchor the whole construct.</p>
<p>The scale they developed, the Atimiaphobia Scale (AtiPhoS), has 15 items rated on a five-point scale. Sample statements give a sense of the phenomenology: &#8220;I have an intense fear of being labeled shameless.&#8221; &#8220;I constantly worry that public opinion will determine my worth.&#8221; &#8220;I live in fear of losing my self-respect in front of others.&#8221; Taken together, these items form a portrait of a mind in a particular kind of chronic vigilance, scanning its social environment not for danger in the physical sense but for reputational threat.</p>
<h2>Who Carries This Fear Most</h2>
<p>The demographic patterns that emerged from the validation data are striking, and perhaps unsurprising. Women reported substantially higher atimiaphobia than men across every subscale, with effect sizes large enough to be clinically meaningful rather than merely statistically significant. Married individuals scored higher than unmarried ones. Atimiaphobia also increased with age, which the researchers interpret as consistent with the tendency for people to align more closely with traditional cultural values as they grow older.</p>
<p>&#8220;The distinctiveness of atimiaphobia warrants recognition as a discrete mental health condition within clinical diagnostic frameworks,&#8221; Husain said.</p>
<p>Whether or not that clinical recognition comes, the correlational data give the concept considerable heft. Higher atimiaphobia scores were associated with more anxiety, more shame, and, interestingly, lower social intelligence. That last finding cuts against a naive expectation. You might reckon that someone acutely tuned in to social reputation would be more socially adept. But the data suggest the opposite: hypervigilance about judgment appears to interfere with the flexible, adaptive social reasoning that constitutes genuine social intelligence. The fear, in other words, becomes its own obstacle.</p>
<p>There is a harder implication too. People high in atimiaphobia may avoid seeking mental health support precisely because doing so could be read as a kind of shame in itself, as evidence of weakness or disorder that others might learn about. The social architecture that generates the fear is the same architecture that blocks its treatment.</p>
<h2>The Gap This Fills</h2>
<p>Cross-cultural psychology has long had tools for measuring shame, anxiety, and fear of negative evaluation. What it has lacked is an instrument specific to the honor-shame nexus, one that captures the way these fears extend beyond the self to encompass family and community standing. Existing shame scales treat shame as an individual emotion. Atimiaphobia, by contrast, is structurally relational: the fear is not just of feeling bad about yourself but of what your conduct will mean for everyone associated with you. The AtiPhoS tries to hold that distinction.</p>
<p>The study was conducted exclusively in Pakistan, which the authors acknowledge as a limitation. Whether the construct translates cleanly to other honor cultures, or whether it means something different in, say, a Japanese or Turkish or Yemeni context, remains an open question. The scale is in English, which adds another layer of restriction. And the researchers are careful to stress that atimiaphobia is not yet a clinical diagnosis, only a well-validated psychological construct that may, over time, earn that status.</p>
<p>What seems harder to dispute is the basic contention: that hundreds of millions of people live with a culturally specific form of fear that Western psychological frameworks have, so far, not found a language for. If atimiaphobia eventually enters the clinical lexicon, it will not be because researchers discovered something new so much as because they finally built a precise enough lens to look at something that was already there.</p>
<hr />
<p>DOI: <a href="https://doi.org/10.1002/pchj.70095">https://doi.org/10.1002/pchj.70095</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Is atimiaphobia just another word for social anxiety?</strong></p>
<p>Not quite, and the distinction matters. Social anxiety is typically about fear of embarrassment or poor performance in social situations. Atimiaphobia is specifically tied to honor, moral reputation, and family standing within collectivist cultures, where a person&#8217;s conduct is understood to reflect on their entire family rather than themselves alone. The fear extends beyond individual humiliation to something closer to collective disgrace.</p>
<p><strong>Why do women score so much higher on atimiaphobia than men?</strong></p>
<p>The research points to gendered socialization within honor cultures, where women are typically held to stricter behavioral and moral standards than men, and where lapses in female conduct are more publicly scrutinized and more consequential for the family&#8217;s reputation. This creates a disproportionate burden that appears to translate directly into higher levels of this specific fear. The effect size found in the study was large enough to be clinically meaningful, not just a statistical footnote.</p>
<p><strong>Could atimiaphobia explain why people in some cultures avoid mental health treatment?</strong></p>
<p>That is one of the more significant implications the researchers raise. Seeking psychological help might itself be interpreted as evidence of weakness or disorder, potentially visible to others and damaging to social standing. If the fear of dishonor is strong enough, it may actively prevent people from accessing support that could reduce their distress, creating a kind of trap where the cultural mechanism generating the fear also blocks its relief.</p>
<p><strong>Is this a new condition, or just a newly named one?</strong></p>
<p>Almost certainly the latter. The psychological experience Husain and colleagues are describing has presumably existed for as long as honor cultures have. What is new is the attempt to operationalize it as a measurable construct, give it a name, and develop a validated scale so researchers can study its intensity and consequences systematically. The researchers explicitly do not claim it as a diagnostic category yet, though they think it may eventually warrant that recognition.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/honor-shame-and-a-new-diagnosis-for-an-ancient-dread/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575966</post-id>	</item>
		<item>
		<title>How Scientists Are Teaching Microbes to Make Our Medicines From Scratch</title>
		<link>https://scienceblog.com/how-scientists-are-teaching-microbes-to-make-our-medicines-from-scratch/</link>
					<comments>https://scienceblog.com/how-scientists-are-teaching-microbes-to-make-our-medicines-from-scratch/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:42:48 +0000</pubDate>
				<category><![CDATA[Health]]></category>
		<category><![CDATA[Life & Non-humans]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575964</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Yeast has been making things for humans for thousands of years. Bread, beer, wine, the slow alchemy of fermentation that civilisations were built around. What it couldn&#8217;t do, until recently, was read a genetic instruction manual several hundred thousand letters long and use it to synthesise a cancer drug from scratch. That&#8217;s changing. And the reason it&#8217;s changing comes down to a deceptively simple engineering problem: how do you write enough DNA, quickly enough, without making mistakes, to give a microbe an entirely new job?</p>
<p>The answer, it turns out, involves lessons borrowed from chip manufacturing, a yeast cell&#8217;s extraordinary talent for self-repair, and the occasional synthetic chromosome that doesn&#8217;t exist in nature.</p>
<p>A review published this week in <em>Quantitative Biology</em> maps out just how far the field has come. The authors, led by Yue Shen of BGI Research in China, describe a technology that has scaled from assembling individual genes to stitching together entire chromosomes spanning millions of genetic letters. The goal is what researchers call a microbial cell factory: a bacterium or yeast strain redesigned at the genomic level to produce pharmaceuticals, biofuels, or industrial chemicals more efficiently than conventional synthesis allows. The appeal is obvious. Fermentation tanks don&#8217;t require oil wells, and a well-designed microbe can manufacture complex molecules that no chemist could realistically synthesise by hand.</p>
<p>Getting there, though, requires inserting very large amounts of new genetic material into cells without breaking them.</p>
<p>The first generation of genetic engineering worked with relatively small DNA fragments, maybe a gene or two at a time. The biosynthetic pathways that produce useful compounds are rarely that tidy. Taxol, the cancer drug originally derived from Pacific yew bark, involves a cascade of enzymatic reactions encoded across dozens of genes drawn from plants, bacteria, and fungi. Assembling that kind of pathway means joining DNA fragments tens of thousands of base pairs long, and doing it without introducing errors that would derail the whole metabolic sequence. Until recently, that was genuinely hard.</p>
<h2>Stitching Genomes Together, Piece by Piece</h2>
<p>The breakthrough came from several directions at once. Gibson assembly, developed in 2009, offered a way to join multiple overlapping DNA fragments in a single reaction, at a constant temperature, without needing to worry about restriction sites getting in the way. It sounds almost too straightforward, but the method scaled surprisingly well, enabling constructs exceeding 900 kilobases. Around the same time, researchers realised that yeast cells, with their unusually aggressive DNA repair machinery, could be persuaded to assemble large fragments on their own just by delivering overlapping pieces and letting the cell do the joining. The transformation-associated recombination method, exploiting this property, eventually enabled assembly of DNA sequences up to 600 kilobases long.</p>
<p>These weren&#8217;t just incremental improvements. They opened up entire classes of compound that had been effectively off-limits to biotechnology.</p>
<p>What&#8217;s happened since is harder to summarise neatly, because the field has branched in several directions simultaneously. One branch has focused on inserting whole biosynthetic gene clusters (sometimes called BGCs) into microbes that would never naturally produce the compound in question. A team working on the antibiotic corbomycin, for instance, managed to capture a 76-kilobase gene cluster from its original bacterial host and transfer it into yeast for stable expression, achieving a 19-fold increase in final yield over previous methods. The key innovation was a more robust capture system that could grab the relevant stretch of DNA from a messy genomic background without losing it. These clusters can run to hundreds of kilobases; capturing them intact, then getting them functioning in a foreign host, is roughly analogous to transplanting not just an organ but the entire metabolic context it needs to work.</p>
<p>A second branch has gone further still, into what the field calls synthetic genome assembly. The idea here isn&#8217;t merely to add new DNA but to rewrite the organism&#8217;s existing genome from scratch, codon by codon, and in doing so create properties that couldn&#8217;t exist in any wild type. The Synthetic Yeast Genome Project (Sc2.0) is perhaps the most ambitious example: a global consortium working toward a fully synthetic <em>Saccharomyces cerevisiae</em> genome, with each chromosome replaced by a designed alternative containing thousands of embedded recombination sites. Activate those sites with the right enzyme, and the genome reshuffles itself, generating a population of variant strains at a speed no conventional mutagenesis programme could match. In one experiment, five cycles of this shuffling produced a yeast strain with 38.8 times the carotenoid output of the parent. In another, strains emerged capable of tolerating 8 percent ethanol concentrations, a property with obvious relevance to industrial fermentation.</p>
<h2>Chromosomes You Can Design From Scratch</h2>
<p>The most recent development, and perhaps the strangest, is the neochromosome: an artificial chromosome assembled de novo, with no natural template, that sits alongside the organism&#8217;s existing genome as a kind of biological expansion slot. The appeal is that you can load a neochromosome with large metabolic pathway modules without disrupting the native chromosomal architecture. A team at BGI recently used a method called HAnDy to assemble a neochromosome just over a megabase long, containing 542 exogenous genes, with an assembly efficiency of around 60 percent. When introduced into six phylogenetically distinct yeast strains, it improved their tolerance to temperature stress, osmotic pressure, and heavy metals, and expanded the range of carbon sources they could metabolise. Nine putative novel metabolites appeared that hadn&#8217;t been detectable before. The neochromosome, in effect, gave ordinary lab yeast a set of capabilities it would have taken millions of years of evolution to acquire naturally, if it ever did.</p>
<p>The limitation, for now, is that all of this works most reliably in a small number of model organisms. <em>E. coli</em> and <em>S. cerevisiae</em> between them have attracted the bulk of the methodological innovation, partly because their biology is so well characterised and partly because that&#8217;s where the tools were developed first. Extending the same approaches to other industrially interesting hosts, like <em>Yarrowia lipolytica</em> or <em>Trichoderma reesei</em>, remains genuinely difficult. Interspecies transfer of large DNA assemblies is still inefficient; the biology of how cells accept, maintain, and express foreign chromosomes isn&#8217;t fully understood even in yeast.</p>
<p>Shen is sanguine about the trajectory, even so. &#8220;As large DNA assembly technologies increasingly integrate with automated platforms and AI-driven design,&#8221; he says, &#8220;the development cycle of microbial cell factories is poised to accelerate dramatically.&#8221; The specific promise he&#8217;s pointing to is a design-build-test-learn cycle in which AI proposes pathway designs, robotic platforms assemble them, and sequencing confirms the result, all without the rate-limiting step of human hands at the bench. &#8220;This technological leap is unlocking their true potential as practical, sustainable platforms for global biomanufacturing.&#8221;</p>
<p>Whether the leap arrives on the timescale the field is hoping for depends on how quickly the bottlenecks outside the laboratory resolve themselves. The cost of DNA synthesis has dropped precipitously over the past decade and continues to fall; microchip-based parallel synthesis is beginning to push the economics further. Regulatory frameworks for releasing redesigned microbes into industrial settings are still catching up with what the science can do. And there&#8217;s a subtler question sitting underneath all of it: how much of the biological complexity that makes these organisms useful can actually be designed in advance, and how much will always need to be discovered through iteration? Yeast has been surprising us for millennia. Probably it isn&#8217;t finished yet.</p>
<hr />
<p>Source: Zhang, Y., et al. (2026). Advances in large DNA fragment assembly for microbial cell factory engineering. <em>Quantitative Biology</em>. <a href="https://doi.org/10.1002/qub2.70039">https://doi.org/10.1002/qub2.70039</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Could engineered microbes actually replace conventional chemical manufacturing for things like medicines and fuels?</strong></p>
<p>For some compounds, they already do. Microbial fermentation now produces a significant portion of the world&#8217;s insulin, and biosynthetic routes to complex molecules like artemisinin (a malaria drug) have replaced plant extraction at commercial scale. The challenge is that each compound requires its own tailored pathway, and building those pathways reliably in microbes is still technically demanding, though advances in large DNA assembly are narrowing that gap substantially.</p>
<p><strong>What exactly is a biosynthetic gene cluster, and why is it so hard to transplant?</strong></p>
<p>A biosynthetic gene cluster is a stretch of DNA, often spanning tens to hundreds of thousands of base pairs, that encodes all the enzymes needed to produce a particular compound, usually an antibiotic or other natural product. The difficulty in transplanting it isn&#8217;t just length; it&#8217;s that the cluster evolved to work within a specific cellular context, with particular regulatory signals and metabolic precursors. Capturing the whole thing intact and getting it to function in a foreign host requires both precise DNA assembly and substantial rewiring of the host&#8217;s own metabolism.</p>
<p><strong>Is rewriting an organism&#8217;s entire genome actually safe?</strong></p>
<p>One unexpected benefit of genome recoding is that it can make organisms safer, not more dangerous. Strains with reassigned genetic codons become unable to read viruses written in the standard code, conferring broad phage resistance and reducing the risk of genetic information leaking into wild populations. Researchers have framed this as a &#8220;genetic firewall,&#8221; though the regulatory and biosafety frameworks around deliberate genome-scale rewriting are still catching up with what the science can now do.</p>
<p><strong>What&#8217;s a neochromosome, and how is it different from just adding genes to an existing chromosome?</strong></p>
<p>A neochromosome is an entirely artificial chromosome assembled from scratch and introduced into a cell as a supernumerary addition, sitting alongside the organism&#8217;s existing genome rather than replacing any part of it. This matters because inserting large gene clusters into native chromosomes can disrupt neighbouring genes and destabilise replication. A neochromosome sidesteps that problem by providing a dedicated, modular platform for expressing new pathways, and in principle it can be swapped in or out like a biological expansion module.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/how-scientists-are-teaching-microbes-to-make-our-medicines-from-scratch/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575964</post-id>	</item>
		<item>
		<title>When the Rains Fail, the Big Gangs Win: How Climate Chaos Reshapes Capuchin Society</title>
		<link>https://scienceblog.com/when-the-rains-fail-the-big-gangs-win-how-climate-chaos-reshapes-capuchin-society/</link>
					<comments>https://scienceblog.com/when-the-rains-fail-the-big-gangs-win-how-climate-chaos-reshapes-capuchin-society/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:36:38 +0000</pubDate>
				<category><![CDATA[Earth, Energy & Environment]]></category>
		<category><![CDATA[Life & Non-humans]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575961</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>By January, the rivers in Costa Rica&#8217;s Guanacaste province have shrunk to threads. The tropical dry forest, which looked improbably lush just weeks before, starts shedding leaves in fistfuls, and the shade that made it navigable in the wet months begins disappearing overhead. For the white-faced capuchin monkeys picking their way through this landscape, the timing matters enormously. A researcher with a GPS unit clipped to her backpack follows at close range, logging sleep sites, counting fruit bites, tracking which direction the group moves when it runs into its neighbors. She has been doing this, more or less, since 1990.</p>
<p>Susan Perry, an anthropologist at UCLA who has led the Lomas Barbudal Monkey Project in Costa Rica for 35 years, has accumulated something genuinely rare: a longitudinal record of 12 neighboring capuchin groups spanning three decades, complete with demographic censuses, satellite imagery and enough behavioral data to start asking questions that shorter studies simply can&#8217;t reach.</p>
<p>The question she and her colleagues at Germany&#8217;s Max Planck Institute of Animal Behavior set out to answer, published this week in <em>Nature Ecology and Evolution</em>, is deceptively simple. Is it better to live in a big group or a small one? The answer, it turns out, depends almost entirely on what the weather has been doing lately.</p>
<h2>The Arithmetic of Group Living</h2>
<p>Every capuchin group faces a version of the same trade-off. More individuals means more allies in territorial disputes, more eyes scanning for predators, more muscle when a neighboring group wanders too close. But more individuals also means more mouths eating from the same patch of fruiting trees, which depletes those patches faster and forces everyone to range further to find food. Ecologists call the first problem scramble competition: not direct fighting, just the collective exhaustion of shared resources. For decades, the standard prediction has been that larger groups should travel further each day to compensate. Perry&#8217;s data says otherwise.</p>
<p>&#8220;It seems that larger groups compensate for the larger number of mouths to feed not by traveling further each day, but by having a larger variety of resources they can visit, which allows them to visit less depleted food patches,&#8221; Perry said.</p>
<p>The mechanism, when you look at the data, is territorial expansion rather than extended daily marches. Bigger groups gradually push their home ranges outward, claiming ground from smaller neighbors. In the dataset, which tracked 335 individually identified monkeys across more than 900 dyadic group comparisons, the pattern was stark: when a neighboring group grew larger relative to a focal group, it encroached on the focal group&#8217;s range. In 84% of cases where overlap increased substantially, the group that had become relatively bigger was the one doing the encroaching. Smaller groups, roughly, got squeezed.</p>
<h2>Dry Season: The Pressure Builds</h2>
<p>The dry season is where things get politically complicated. As water and food concentrate along the rivers, every group gets funneled toward the same strips of evergreen riparian forest. Home range overlap between groups actually decreases in the dry months (they&#8217;re fighting over the good patches rather than casually sharing them), but encounter rates go up. Groups are running into each other more often per unit of shared space than during the wet season. Larger groups tend to end up occupying the highest-quality riverside areas; smaller groups get pushed to the scrappier parts of the forest. Direct resource defense, it seems, becomes the dominant game precisely when there&#8217;s something worth defending.</p>
<p>What the 33 years of data captured that shorter studies couldn&#8217;t is how this seasonal arithmetic gets scrambled by El Niño and La Niña. Both climate cycles, which periodically push Guanacaste&#8217;s weather toward extremes, amplified within-group competition for larger groups. A drier-than-usual dry season, or an unusually waterlogged wet one, made the foraging disadvantage of being large significantly worse. &#8220;Long-term data sets such as this one are so valuable scientifically that they make the hardships seem worthwhile,&#8221; Perry said. The hardships are literal: a 12 or 13-hour day following monkeys through difficult terrain, year after year, to capture conditions that might happen once a decade.</p>
<p>Intermediate anomalies, curiously, told a different story. When climatic conditions partially counterbalanced the typical seasonal pattern (a wet season that ran drier than average, or a dry season with unexpected rainfall), the foraging penalty for large groups largely disappeared and their territorial advantage over smaller neighbors seemed to sharpen. The researchers speculate that intermediate climate fluctuations may increase patchiness in habitat quality in ways that large groups, with their numerical muscle, can exploit more effectively than small ones can.</p>
<h2>A Buffer That Has Limits</h2>
<p>The findings sketch a picture of capuchin social structure as something more dynamic than a fixed optimum. Large groups endure the costs of internal competition partly by bullying smaller neighbors out of better foraging grounds. Small groups persist by reducing their internal competition load, staying close to their core areas and exploiting the gaps that larger groups leave between each other (the territorial equivalent of buffer zones, not unlike the underused ground between rival wolf pack territories). Neither strategy is unconditionally superior.</p>
<p>&#8220;But under climatic extremes, that buffer reaches its limits, and monkeys may adjust by making changes to group size, for example, by dispersing to other groups,&#8221; Perry noted. The nine permanent group fissions recorded over the study period probably represent precisely those moments: when conditions pushed a large group past the point where its territorial advantages could offset the metabolic costs of all those extra mouths.</p>
<p>El Niño and La Niña are not new; capuchins have presumably been navigating their rhythms for a long time. What is new is the expectation that these cycles will intensify as the climate warms, compressing or eliminating the intermediate conditions that seem to buffer large groups against their own size. Whether that tilts the evolutionary balance toward smaller groups, triggers more frequent fissions or reshapes social structures in ways we can&#8217;t yet predict is a question that, Perry would probably tell you, requires another few decades of following monkeys into the dry season with a GPS unit clipped to your backpack.</p>
<p><strong>DOI:</strong> <a href="https://doi.org/10.1038/s41559-026-03048-8">https://doi.org/10.1038/s41559-026-03048-8</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why don&#8217;t larger capuchin groups just travel further each day to find more food?</strong></p>
<p>The intuitive prediction is that bigger groups, which deplete patches faster, should cover more ground daily. But the Lomas Barbudal data shows they don&#8217;t. Instead, larger groups expand their total home range over time, visiting a broader variety of foraging sites over longer timescales rather than extending individual days. This strategy probably minimises the physiological costs of locomotion while still giving groups access to fresher, less-depleted patches.</p>
<p><strong>How do smaller capuchin groups survive being surrounded by larger, more dominant neighbors?</strong></p>
<p>Smaller groups benefit from two things the data highlight. First, they face less within-group competition for food, since there are fewer mouths sharing each patch. Second, large groups tend to avoid each other&#8217;s territories, creating underused buffer zones that smaller groups can exploit without direct confrontation. It&#8217;s a niche carved out by the mutual wariness of the bigger players rather than any particular strength of the small group itself.</p>
<p><strong>Is El Niño the same thing as climate change?</strong></p>
<p>No. El Niño and La Niña are natural, cyclical fluctuations in Pacific sea surface temperatures that have been occurring for millennia. What climate change is expected to do is make these cycles more intense and perhaps more frequent, extending the most extreme conditions and potentially shrinking the intermediate phases that the Lomas Barbudal study suggests are relatively benign for large capuchin groups.</p>
<p><strong>Why does this research require 33 years of data rather than a shorter study?</strong></p>
<p>Many of the patterns the team identified only become visible across multiple climate cycles. A three- or five-year study would almost certainly miss an El Niño event, let alone the interaction between ENSO extremes and group size. The 33-year record also captured nine permanent group fissions, rare events that represent the breaking point for oversized groups under ecological pressure. Without that longitudinal depth, such events would look like statistical noise rather than an ecologically meaningful threshold.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/when-the-rains-fail-the-big-gangs-win-how-climate-chaos-reshapes-capuchin-society/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575961</post-id>	</item>
		<item>
		<title>Weight Loss Operation Outperforms Wonder Drug</title>
		<link>https://scienceblog.com/weight-loss-operation-outperforms-wonder-drug/</link>
					<comments>https://scienceblog.com/weight-loss-operation-outperforms-wonder-drug/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:34:29 +0000</pubDate>
				<category><![CDATA[Brain & Behavior]]></category>
		<category><![CDATA[Health]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575959</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>The meds arrived with considerable fanfare. Semaglutide, tirzepatide, the whole GLP-1 family: drugs that could, apparently, do something medicine had never managed well before. Shrink the body. Tame hunger. Dissolve, at least partially, decades of metabolic damage. Doctors prescribed them in their millions. Patients lost weight. Pharma stocks soared. It seemed, for a moment, like the obesity problem might finally have a pharmaceutical answer.</p>
<p>Two studies presented this week at the American Society for Metabolic and Bariatric Surgery annual meeting complicate that story considerably. Taken together, they represent one of the largest and most direct comparisons of GLP-1 drugs and bariatric surgery ever assembled, and the surgery wins. Comprehensively.</p>
<p>The first analysis, led by researchers from Yale School of Medicine and colleagues at Coreva-Scientific, Vanderbilt and UT Health San Antonio, pooled data from 30 clinical studies covering more than 430,000 patients. At the 12-month mark, people who had undergone metabolic and bariatric surgery had lost more than 20% more weight than those taking GLP-1 drugs. That gap alone would be noteworthy. But the remission rates for obesity-related disease were, if anything, more striking: type 2 diabetes went into remission 42% more often in surgical patients, hypertension remission was nearly 13 percentage points higher, and elevated cholesterol remitted at rates more than 20 points higher than in the drug-treated group.</p>
<p>Then there is what happens when patients stop taking the drugs. Which, eventually, many do.</p>
<p>&#8220;Once the medications are discontinued, whether due to side effects, cost or other factors, their benefits often diminish or disappear, whereas the benefits of surgery endure,&#8221; said John Morton, professor of surgery at Yale and a co-author of the analysis. The cost issue is not trivial: GLP-1 drugs can run to over $1,000 a month without insurance coverage, and the requirement is effectively lifelong. The surgery is a one-time intervention. Morton was careful not to dismiss the drugs entirely. &#8220;While GLP-1 medications are an important advance, they do not match the magnitude or durability of outcomes achieved with metabolic and bariatric surgery, which remains one of the most underutilized treatments in medicine.&#8221; Less than 1% of the Americans eligible for bariatric surgery actually get it in any given year.</p>
<h2>The Cardiovascular Case</h2>
<p>The second study, from researchers at UVA Health, looked at a specific population that might seem an unlikely surgical candidate: adults aged 65 and over with both obesity and diabetes. The team drew from Epic&#8217;s Cosmos database, a nationwide repository covering more than 200,000 older patients treated between 2017 and 2025. After carefully matching surgical and non-surgical patients for age, health status and other confounders, they compared five-year outcomes across the two groups. The results were, frankly, not close. Older adults who had bariatric surgery were roughly 16% less likely to suffer a major adverse cardiovascular event (heart attack, stroke, cardiovascular death) compared to those on GLP-1 drugs. Severe kidney disease was about 25% less common. Diabetic retinopathy, the vision-threatening complication of uncontrolled blood sugar, occurred 35% less often.</p>
<p>What made these findings particularly interesting to the investigators was a detail buried in the data: blood sugar control improved similarly in both groups. &#8220;While GLP-1 agonists have transformed the treatment landscape for obesity and diabetes, our findings show metabolic and bariatric surgery delivers even greater protection against serious complications including heart attacks, kidney failure and vision loss,&#8221; said Thomas Shin, the study&#8217;s lead author and an assistant professor of surgery at UVA Health. &#8220;What&#8217;s more, this study showed advanced age alone should not exclude patients from surgery. In fact, older adults may have the most to gain.&#8221;</p>
<p>The glycaemic equivalence matters, because it suggests surgery&#8217;s advantages aren&#8217;t simply about controlling blood sugar better. Something else is happening. The precise mechanisms aren&#8217;t fully understood, but metabolic surgery alters gut anatomy in ways that affect hormones, bile acids, the microbiome and inflammatory pathways, a cascade of effects that seems to go well beyond whatever GLP-1 drugs can replicate pharmacologically. The weight difference in the first year (surgical patients lost 17.3% of body weight; GLP-1 patients lost 4.2%) likely accounts for much of the divergence. But perhaps not all of it.</p>
<h2>Why Aren&#8217;t More People Having the Operation?</h2>
<p>Here is where the data becomes slightly uncomfortable. Bariatric surgery has been available for decades. Its safety profile, according to ASMBS, is comparable to gallbladder removal or knee replacement. Its outcomes, at least by these measures, are extraordinary. And yet fewer than 1 in 100 eligible patients undergoes the procedure each year. Meanwhile, the GLP-1 drugs (newer, less proven over the long term, and dependent on continued use) have captured the cultural moment in a way surgery never quite managed.</p>
<p>Part of this is stigma. Surgery feels, to many patients and some clinicians, like a drastic measure, a last resort, a failure of willpower somehow made surgical. The drugs feel gentler, more reversible, more modern. There is something psychologically easier about swallowing a pill than consenting to a procedure that permanently reshapes your digestive tract. John Scott, a clinical professor at the University of South Carolina who was not involved in the Yale analysis, put it plainly: &#8220;GLP-1s have expanded evidence-based treatment options, but they should not be seen as a replacement for surgery, especially for patients who require the level of outcomes that only metabolic and bariatric surgery can provide.&#8221; Richard Peterson, the president of the ASMBS, went further: &#8220;This study reinforces the notion that metabolic and bariatric surgery is not just about weight loss. It&#8217;s a powerful metabolic intervention that can meaningfully change the trajectory of chronic disease in ways no other intervention currently can.&#8221;</p>
<p>Neither study constitutes a randomized controlled trial (the gold standard for medical evidence) and the researchers acknowledge that. Patients who opt for surgery are, in some respects, different from those who take drugs, and observational data can only be adjusted for so many variables. But the consistency of the effect across 30 studies and 430,000-plus patients is hard to dismiss.</p>
<p>There is, perhaps, a version of the future in which the two approaches work together rather than compete: GLP-1 drugs as a bridge for patients not yet ready for surgery, or as a maintenance tool afterward. Several research groups are already exploring that. But the evidence as it stands suggests that for patients who need the most significant and durable change, the operation that has been quietly available for decades may still be medicine&#8217;s best answer to one of its hardest problems.</p>
<p>The wonder drug is good. The surgery, it turns out, is better.</p>
<p>Source: American Society for Metabolic and Bariatric Surgery Annual Meeting, San Antonio, May 2026. Study abstracts: <a href="https://doi.org/10.1016/j.soard.2026.asmbs.4223">Abstract ID 4223</a> (Yale/Coreva analysis) and <a href="https://doi.org/10.1016/j.soard.2026.asmbs.4719">Abstract ID 4719</a> (UVA Health analysis).</p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>If bariatric surgery works so much better, why do so few people have it?</strong></p>
<p>Less than 1% of Americans eligible for bariatric surgery undergo the procedure each year, despite its safety profile being comparable to common operations like gallbladder removal. Stigma plays a significant role: surgery is often perceived as drastic or as a last resort, while GLP-1 drugs feel more accessible and reversible. Cost and access are also factors, though the drugs themselves can exceed $1,000 a month without insurance.</p>
<p><strong>Does stopping GLP-1 drugs mean the weight comes back?</strong></p>
<p>The evidence strongly suggests so. Studies have consistently found that when patients discontinue GLP-1 drugs (due to cost, side effects or supply issues), much of the lost weight tends to return and the metabolic improvements fade with it. Bariatric surgery, by contrast, produces changes that appear to be durable long-term, because it alters the anatomy and physiology of digestion rather than relying on a drug that must be continuously present.</p>
<p><strong>Why would surgery protect the heart and kidneys even when blood sugar control is the same?</strong></p>
<p>This is one of the more intriguing findings: in the UVA Health study, HbA1c (a measure of long-term blood sugar) improved similarly in surgical and drug-treated patients, yet surgery still produced far better cardiovascular and kidney outcomes. Researchers believe surgery&#8217;s effects on gut hormones, bile acids, the microbiome, and inflammatory pathways may explain the difference: effects that go well beyond glycaemic control alone.</p>
<p><strong>Is it safe to have bariatric surgery when you&#8217;re older?</strong></p>
<p>The UVA Health study specifically examined adults aged 65 and older, a population often assumed to be poorer surgical candidates. The findings suggest the opposite may be true: older adults saw 16% lower cardiovascular event rates, 25% fewer cases of severe kidney disease, and 35% less diabetic retinopathy compared to peers on GLP-1 drugs. The lead researcher concluded that advanced age alone should not rule out surgery and that older patients may, in fact, have the most to gain.</p>
<p><strong>Could GLP-1 drugs and bariatric surgery ever be used together?</strong></p>
<p>Potentially, and several research groups are already investigating this. One plausible model involves using GLP-1 drugs as a preparatory bridge for patients not yet ready for surgery, or as a maintenance tool in the years after an operation. The two approaches target overlapping but distinct biological pathways, which means combining them could, in theory, produce additive benefits, though the evidence for this is still early.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/weight-loss-operation-outperforms-wonder-drug/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575959</post-id>	</item>
		<item>
		<title>Tiny World in Outer Solar System Has an Atmosphere. It Shouldn&#8217;t.</title>
		<link>https://scienceblog.com/tiny-world-in-outer-solar-system-has-an-atmosphere-it-shouldnt/</link>
					<comments>https://scienceblog.com/tiny-world-in-outer-solar-system-has-an-atmosphere-it-shouldnt/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 10:13:42 +0000</pubDate>
				<category><![CDATA[Space]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575954</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>On January 10, 2024, a dim star in the constellation Gemini began to disappear. Not dramatically, not all at once, but gradually, its light thinning by degrees as something cold and dark slid in front of it. From three stations across Japan, astronomers watched the light curves on their monitors and saw what they hadn&#8217;t expected to see: a gradual fade where physics said there should be an abrupt wink-out. A fading that could only mean one thing. Whatever had passed in front of that star had an atmosphere.</p>
<p>The object responsible was (612533) 2002 XV93, a lump of ice and rock roughly 500 kilometres across, orbiting the sun out beyond Neptune in a region so frigid that the temperature hovers around 47 degrees above absolute zero. By every standard model of planetary science, something that small and that cold should be utterly airless. Its gravity is too weak, its surface too frozen. And yet.</p>
<h2>A Lucky Experiment in the Dark</h2>
<p>Stellar occultations, as astronomers call them, are natural gifts. When a solar system body passes directly in front of a star, the star&#8217;s light acts as a probe, feeling its way through whatever atmosphere might surround the occulting object. The technique has revealed Pluto&#8217;s nitrogen blanket, discovered rings around distant Centaur objects, and, now, done something nobody anticipated. Ko Arimatsu at the National Astronomical Observatory of Japan&#8217;s Ishigakijima station had organized the observation campaign under the name TABASCO (Trans-Neptunian Atmospheres and Belts Analysis through Stellar-occultation Coordinated Observations, a backronym that suggests perhaps astronomers have earned a little levity). The team operated three stations: a 1.05-metre Schmidt telescope at Kiso Observatory, a compact 20-centimetre setup at Kyoto University&#8217;s rooftop, and, crucially, a 25-centimetre backyard telescope operated by citizen astronomer Katsumasa Hosoi in Fukushima prefecture. High-sensitivity CMOS cameras, the same underlying technology used in smartphone sensors, made all three stations capable of detecting the subtle refractive dimming that a thin atmosphere would cause.</p>
<p>At Kiso, the light curve showed something textbook-troubling: rather than an instantaneous drop as the star slipped behind solid rock, the flux fell gradually over about 1.5 seconds at both entry and exit. Diffraction effects at 37 astronomical units could account for roughly 0.05 seconds of blurring. The star&#8217;s own angular diameter added perhaps 0.004 seconds more. Neither comes close to explaining 1.5 seconds of smoothing. Rings or dust shells were considered and essentially ruled out: the geometry was all wrong, the opacity too high, the dynamical situation too unstable. An atmosphere was the only interpretation that held together.</p>
<p>The derived surface pressure is somewhere between 100 and 200 nanobars depending on what you assume the atmosphere is made of, whether nitrogen, methane, or carbon monoxide. To give that number some context: Pluto&#8217;s own thin atmosphere runs around 10,000 nanobars, about 50 to 100 times denser. Mars, for comparison, sits at around five million nanobars. So this is an extremely tenuous wisp of a thing. But it&#8217;s real. And it&#8217;s roughly 100 times denser than any upper limit previously established for comparable trans-Neptunian objects of similar or even larger size. That&#8217;s not a small discrepancy. That&#8217;s a qualitative surprise.</p>
<h2>The Problem of Where It Came From</h2>
<p>Here&#8217;s the difficulty. The atmosphere can&#8217;t have been there long. At the surface pressure detected, with a Jeans parameter close to 1, meaning the thermal energy of gas molecules nearly matches the gravitational energy holding them down, the hypervolatile gases involved would hydrodynamically stream away into space on a timescale of perhaps 100 to 1,000 years. The Solar System is roughly 4.5 billion years old. Any primordial atmosphere 2002 XV93 might once have held is long gone. Whatever is there now was put there recently, at least in the cosmological sense, and possibly quite recently in the literal sense.</p>
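<p>To make the escape argument concrete, here is a minimal back-of-envelope sketch of that Jeans parameter. The bulk density of 1,000 kilograms per cubic metre and the pure-nitrogen atmosphere are illustrative assumptions, not values reported in the paper:</p>

```python
import math

# Back-of-envelope Jeans parameter for (612533) 2002 XV93.
# Assumed (not from the paper): spherical body, radius 250 km,
# bulk density 1,000 kg/m^3, pure N2 atmosphere at 47 K.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K

radius = 250e3                                   # m (about 500 km across)
density = 1000.0                                 # kg/m^3, assumed
mass = (4 / 3) * math.pi * radius**3 * density   # kg

m_N2 = 28 * 1.661e-27   # mass of one N2 molecule, kg
T = 47.0                # surface temperature, K

# Jeans parameter: gravitational binding energy of a molecule at the
# surface divided by its thermal energy. Near 1, the gas is barely
# bound and can stream away hydrodynamically.
jeans = G * mass * m_N2 / (k_B * T * radius)
print(f"Jeans parameter ~ {jeans:.2f}")
```

<p>Under these assumptions the ratio lands close to 1, consistent with an atmosphere that is barely gravitationally bound.</p>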
<p>James Webb Space Telescope observations of the object&#8217;s surface show no sign of frozen methane, nitrogen, or carbon monoxide sitting on the surface ready to sublimate. So the replenishment isn&#8217;t coming from a straightforward surface reservoir baking in what passes for sunlight at 38 astronomical units. Something else is going on.</p>
<p>Two candidate explanations survive scrutiny, though both are, to put it charitably, speculative. The first involves cryovolcanism: the idea that some internal heat source, perhaps radiogenic decay, residual formation energy, or the antifreeze effect of ammonia in a subsurface brine, is pushing volatiles up through the icy shell and into the surrounding space. Larger trans-Neptunian objects like Sedna and Gonggong show suggestive evidence of internal geochemical activity. JWST isotopic analyses of methane ice on the dwarf planets Eris and Makemake indicate the methane isn&#8217;t purely primordial but may have been processed by warm interiors. A 500-kilometre body has a smaller heat budget and a thicker cold lithosphere, making sustained cryovolcanism harder to sustain, but perhaps not impossible under special conditions.</p>
<p>The second possibility is stranger and, in some ways, more appealing precisely because of its strangeness. A comet hit it. A comet-like impactor of just 100 metres or so in radius, carrying sufficient frozen CO, methane, or nitrogen, could, on impact, have delivered enough gas to account for the observed pressure. The low relative velocities typical among plutinos, objects in the same 2:3 orbital resonance with Neptune that Pluto occupies, would have helped retain the released gas rather than blasting it clean away. The probability of such an event over a century is tiny, around one in 100,000 by conservative estimates from Pluto&#8217;s crater record. But there are roughly 100 TNO occultation measurements in the literature, and the population of sub-kilometre impactors could be considerably larger than current models allow.</p>
<h2>What Comes Next</h2>
<p>The two scenarios make different predictions, which is the part of science that separates interesting puzzles from permanently mysterious ones. A comet-generated atmosphere should be steadily declining. Monitor 2002 XV93 over the next several years with the same occultation technique and, if the pressure is measurably lower, you have your answer. An endogenous, cryovolcanic source would show no monotonic decline, but possibly seasonal fluctuations tied to the object&#8217;s 248-year orbit. Citizen-professional networks like TABASCO are uniquely positioned to run exactly this kind of long-term monitoring, and the participation of Hosoi at Fukushima demonstrates the technique can work with modest equipment in amateur hands.</p>
<p>JWST spectroscopy of the atmosphere itself, if achievable, would give direct molecular composition data. Mid-infrared observations have already worked for Pluto. Whether the telescope&#8217;s scheduling can accommodate such a faint and time-sensitive target is another matter.</p>
<p>What the discovery already does, without waiting for any of that, is break a consensus. The received view held that objects smaller than about 500 kilometres simply couldn&#8217;t host atmospheres over any meaningful timescale. 2002 XV93, at around 500 kilometres diameter, sits right at that nominal limit and blows through it regardless. If a few-hundred-kilometre body can transiently sport a nanobar-scale atmosphere, so perhaps can others in the Kuiper belt&#8217;s population of millions of icy objects. The outer solar system, it turns out, is livelier than we thought. The light curve dipping over Japan in January 2024 was, in a sense, those distant worlds announcing themselves.</p>
<p>Source: <a href="https://doi.org/10.1038/s41550-026-02846-1">Arimatsu et al., <em>Nature Astronomy</em> (2026). doi:10.1038/s41550-026-02846-1</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>How do astronomers detect an atmosphere on something so far away?</strong></p>
<p>When a solar system object passes in front of a background star, any atmosphere bends and dims the starlight before the solid body blocks it completely. By timing how gradually the star fades rather than winking out instantly, researchers can calculate the pressure and even constrain what the atmosphere is made of. It&#8217;s a technique that works even with relatively modest telescopes, which is why citizen astronomer Katsumasa Hosoi was able to contribute usable data from a 25-centimetre backyard telescope in Fukushima.</p>
<p><strong>Why can&#8217;t a small object this far from the sun keep an atmosphere for long?</strong></p>
<p>It comes down to gravity versus heat. At the surface temperature of around 47 degrees above absolute zero, even the most easily vaporized ices, such as nitrogen, methane, and carbon monoxide, have molecules moving fast enough to escape the weak gravitational pull of a 500-kilometre body. The relevant quantity, called the Jeans parameter, comes out close to 1 for 2002 XV93, meaning the atmosphere is barely gravitationally bound and should bleed away into space within 100 to 1,000 years. On a solar-system timescale, that&#8217;s essentially overnight.</p>
<p><strong>Could this mean other small icy bodies in the outer solar system have temporary atmospheres too?</strong></p>
<p>That&#8217;s precisely the implication the researchers flag. The Kuiper belt contains millions of icy objects, many of them in the 100-to-500-kilometre size range. If occasional impacts or bursts of internal activity can briefly supply enough gas to create a measurable atmosphere, then transient atmospheric events could be a fairly regular occurrence across the outer solar system, even if no individual atmosphere lasts long. Whether any of this connects to habitability questions is a much longer conversation, but it does suggest these bodies are geochemically more dynamic than the standard frozen-wasteland picture allows.</p>
<p><strong>Is cryovolcanism really plausible on something this small?</strong></p>
<p>It&#8217;s the harder of the two explanations to make work. Cryovolcanism on larger dwarf planets like Quaoar or Sedna is plausible because they retain enough internal heat and possibly harbor subsurface liquid layers kept fluid by ammonia acting as an antifreeze. A 500-kilometre body has a smaller heat budget, cools faster, and develops a thicker cold shell. The researchers don&#8217;t rule it out, but they note it would require unusual circumstances, perhaps unusually high concentrations of antifreeze compounds or tidal forcing from an unseen satellite. A comet impact is perhaps the cleaner explanation, however low the probability.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/tiny-world-in-outer-solar-system-has-an-atmosphere-it-shouldnt/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575954</post-id>	</item>
		<item>
		<title>Nonprofit Hospitals Have Spent $7.8 Billion on Management Consultants. Nobody Can Find Any Benefit.</title>
		<link>https://scienceblog.com/nonprofit-hospitals-have-spent-7-8-billion-on-management-consultants-nobody-can-find-any-benefit/</link>
					<comments>https://scienceblog.com/nonprofit-hospitals-have-spent-7-8-billion-on-management-consultants-nobody-can-find-any-benefit/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 10:10:04 +0000</pubDate>
				<category><![CDATA[Health]]></category>
		<category><![CDATA[Social Sciences]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575952</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>For any hospital chief executive watching the balance sheet erode, the pitch from a management consulting firm must be hard to resist. Here, the partners promise, are people who have seen every operational failure, every cost spiral, every revenue leak in the industry, people who can fix things (in a matter of months, they promise) what internal staff have struggled with for years. The consultants arrive with laptops and frameworks. They run workshops. They deliver decks. They leave with very large cheques. And according to the most rigorous study ever conducted on the practice, they leave hospitals almost exactly as they found them.</p>
<p>A paper published this week in JAMA has done what a remarkable number of people apparently did not think to do: it actually looked at the numbers.</p>
<p>Joseph Dov Bruch, a health policy researcher at the University of Chicago, had a practical reason to want an answer. Students coming through his programme kept asking him whether a career in healthcare management consulting was a meaningful way to improve the system. He found, to his frustration, that he had no good evidence either way. So he and his colleagues went looking. They combed through IRS Form 990 filings, the detailed financial disclosures nonprofits are required to submit each year, and used machine learning to identify hospital contracts with management consulting firms across a twelve-year period. What they found was, depending on your perspective, either entirely unsurprising or quite extraordinary.</p>
<h2>A $7.8 Billion Shrug</h2>
<p>More than one in five American nonprofit hospitals hired a management consultant at some point between 2010 and 2022. Across the sector, the total bill came to at least $7.8 billion over the study period, with the average hospital paying $15.7 million for its engagement. That is money that might otherwise have gone toward patient care, capital improvements, or the community health programmes that nonprofit status is nominally supposed to encourage.</p>
<p>The researchers compared 306 hospitals that initiated their first consulting contract during the study period against 513 carefully matched hospitals that did not, then tracked both groups across a battery of financial, operational, and clinical metrics. Net patient revenue. Operating margins. Days of cash on hand. Inpatient length of stay. Staffing levels. Executive compensation. Thirty-day mortality and readmission rates for heart attacks, pneumonia, and stroke.</p>
<p>Across virtually every measure, the result was the same: nothing. No statistically significant, systematic improvement attributable to the consulting engagement. Operating margins didn&#8217;t meaningfully shift. Revenue didn&#8217;t climb. Hospitals didn&#8217;t become leaner or more efficient. &#8220;It&#8217;s not necessarily a waste,&#8221; Bruch said, &#8220;but we don&#8217;t have evidence of meaningful improvements.&#8221;</p>
<p>The one exception was small and unwelcome: a modest increase in thirty-day readmissions among stroke patients. It was statistically significant, just barely, but the researchers note it was not robust when they tested alternative model specifications, so it is probably noise. Still. Not the direction you would hope for.</p>
<h2>Why the Evidence Gap Lasted This Long</h2>
<p>What makes the study genuinely odd, in retrospect, is how long it took for someone to do it. Management consultants have been a fixture of American healthcare for decades, wielding influence that Bruch&#8217;s paper notes is higher than in almost any other sector of the economy. Hospitals have been handing over billions in tax-subsidized dollars to these firms throughout a period when American healthcare was under intense political scrutiny for its costs and outcomes. And yet, as Bruch&#8217;s team documents, there was no prior large-scale empirical attempt to measure what hospitals actually got in return.</p>
<p>Part of the explanation is practical: the data didn&#8217;t exist in a usable form until someone went digging through IRS filings with machine learning. Part of it may be something a bit more awkward. Management consulting is a diffuse, relationship-dependent industry where firms rarely publicize detailed records of what they recommended or why, and hospitals are disinclined to trumpet the cases where advice didn&#8217;t pan out. The whole arrangement runs on reputation and trust rather than documented outcomes, which is an unusual posture for an industry advising organizations whose core mission is evidence-based medicine.</p>
<p>Bruch is measured in his conclusions. &#8220;This initial analysis suggests that consultants may deliver neither the dramatic efficiencies they promise nor the harms that critics sometimes fear,&#8221; he said. The framing matters: he is not claiming consultants are useless, only that the evidence of usefulness is, so far, absent. It is possible, he acknowledges, that consulting engagements affect things the study couldn&#8217;t capture, or that benefits take longer to materialize than the study window allows, or that some hospitals gain and others lose in ways that cancel out in aggregate. But the null result, across this many hospitals and this many metrics, is hard to dismiss.</p>
<h2>What the Numbers Don&#8217;t Capture</h2>
<p>The paper&#8217;s scope is also deliberately narrow: it looked only at management consultants, defined specifically, not the broader ecosystem of external expertise that hospitals buy. When the researchers widened the definition to include HR and IT consultants, total spending by nonprofit hospitals over the study period climbed past $25 billion. That figure raises the obvious question of whether similar analyses of those adjacent industries would yield similar shrugs, and Bruch thinks someone ought to find out.</p>
<p>There is also the question of what &#8220;meaningful improvement&#8221; would even look like in a system as complex as an American hospital. Consulting firms typically frame their value in terms of strategic alignment, organizational culture change, and positioning for future growth: outcomes almost definitionally resistant to measurement in Bruch&#8217;s study timeframe. Whether that resistance to measurement is a genuine feature of complex organizational change, or a convenient property for an industry whose outputs are hard to audit, is a question the data cannot answer.</p>
<p>What the data can say is that $7.8 billion bought no detectable improvement in any metric that hospital administrators, policymakers, and patients would normally care about. For Bruch, the more immediate hope is that the finding changes some individual calculations. His students, the ones considering consulting careers, kept asking whether the work could actually move the needle on healthcare&#8217;s deep inefficiencies. &#8220;Answering those questions has been difficult because the evidence has been so limited,&#8221; he said. Now the evidence exists. The answer is a little more specific, and a little less comfortable.</p>
<p>DOI / Source: <a href="https://doi.org/10.1001/jama.2026.5027">10.1001/jama.2026.5027</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why haven&#8217;t hospitals been tracking whether consultants actually help?</strong></p>
<p>Partly because the data required to do so wasn&#8217;t easily accessible until researchers started systematically mining IRS financial filings with machine learning. But there&#8217;s also a structural problem: consulting firms don&#8217;t publish outcome data, and hospitals that receive poor advice have little incentive to advertise that fact. The entire industry has operated on reputation rather than documented results, which is a curious arrangement for organizations meant to practice evidence-based medicine.</p>
<p><strong>Is it possible the benefits just take longer to show up?</strong></p>
<p>That&#8217;s one of the study&#8217;s acknowledged limitations. The researchers tracked hospitals for several years after their consulting engagement began, but some organizational changes take a decade or more to feed through into measurable outcomes. What the study can say is that no detectable benefit appeared across a wide range of financial, operational, and clinical metrics in the timeframe examined. Whether patience would eventually be rewarded remains an open question.</p>
<p><strong>Could some hospitals benefit while others lose, and the effects cancel out?</strong></p>
<p>Possibly, yes. The study measures average effects across hundreds of hospitals, and it&#8217;s conceivable that some engagements are highly effective while others are counterproductive, leaving aggregate results near zero. Breaking down which hospital types or which consulting firms might produce better outcomes is exactly the kind of follow-on research the authors say is needed, though it would require the consulting industry to share far more data than it currently does.</p>
<p><strong>Does this mean consultants are a bad idea for all nonprofits, not just hospitals?</strong></p>
<p>This study looked specifically at hospitals, which operate in a particularly regulated and outcome-tracked environment. Generalizing to other nonprofit sectors would require separate research. What the findings do suggest is that the default assumption that external expertise reliably translates into measurable improvement may deserve more scrutiny across sectors than it currently receives.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/nonprofit-hospitals-have-spent-7-8-billion-on-management-consultants-nobody-can-find-any-benefit/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575952</post-id>	</item>
		<item>
		<title>Seaweed Could Solve One of Aquaculture&#8217;s Biggest Pollution Problems</title>
		<link>https://scienceblog.com/seaweed-could-solve-one-of-aquacultures-biggest-pollution-problems/</link>
					<comments>https://scienceblog.com/seaweed-could-solve-one-of-aquacultures-biggest-pollution-problems/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 10:02:48 +0000</pubDate>
				<category><![CDATA[Earth, Energy & Environment]]></category>
		<category><![CDATA[Life & Non-humans]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575949</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Every yellowtail snapper excretes ammonia. It cannot help doing so; it is a metabolic inevitability, the nitrogen-rich byproduct of a fish eating, breathing, living at commercial density in a tank on Virginia Key, Florida. In a conventional aquaculture operation, that ammonia accumulates in the effluent water until it becomes a problem, sometimes a serious one, for the marine environment downstream. What researchers at the University of Miami have spent the past several years working out is whether the right seaweed, given the right conditions, can simply eat the problem before it leaves the building.</p>
<p>The answer, it turns out, is yes, though which seaweed you choose matters quite a lot, and the reasons why are more interesting than a simple efficiency metric might suggest.</p>
<p>The concept behind integrated multi-trophic aquaculture, or IMTA, is straightforward enough: instead of treating fish effluent as waste, you pipe it through tanks of seaweed that can use the nutrients for growth. What makes the approach compelling is what it produces on the other end. Haley Lasco, then a marine biology graduate student at the Rosenstiel School, ran two-week trials monitoring four native seaweed species as they processed effluent from snapper held at commercial stocking density, 26 kilograms per cubic metre. Three of the four species reduced ammonia nitrogen in the outflowing water to below detectable limits. One of them, a red alga called Agardhiella subulata, achieved complete ammonia removal in eight days, once its biomass reached roughly 6.7 kilograms per cubic metre.</p>
<p>That is not just a pollution statistic. It is also a harvestable crop, enriched in protein by the very nutrients it has extracted from the fish upstream.</p>
<h2>Each Seaweed Does Something Different</h2>
<p>The four species tested were chosen partly for regional relevance (all are native to the Southeast US and Caribbean) and partly because earlier market research had flagged their commercial promise. Agardhiella subulata and the green alga Ulva lactuca both cleared ammonia to undetectable levels and grew substantially over the trial period. Agardhiella finished with nearly 11.5 kilograms per cubic metre; Ulva underwent what the paper describes as exponential growth, increasing its wet weight by more than 700 percent in three days during the second week. A second red alga, Gracilaria caudata, reduced ammonia by 82 percent before plateauing. Caulerpa racemosa, the fourth candidate, struggled, apparently because wild-caught specimens hadn&#8217;t acclimated long enough to their new system before the trial began.</p>
<p>The nutritional profiles of each harvested crop diverge in ways that arguably matter as much as the cleanup performance. Caulerpa came out with the highest protein content (around 25 percent of dry weight) and the most omega-3 fatty acids of any species tested. Ulva, despite its explosive growth, had the lowest protein but the highest carbohydrates and the strongest affinity for carbon uptake, a property with potential relevance to carbon sequestration rather than just food production. Agardhiella landed somewhere between the two in most categories, while also harbouring the highest combined omega-3 and omega-6 profile: the researchers calculate that roughly 45 grams dry weight provides the equivalent of a standard fish oil supplement capsule.</p>
<h2>The Waste That Seaweed Couldn&#8217;t Touch</h2>
<p>Not everything dissolved in fish effluent water yields so neatly to seaweed remediation. Phosphate, it turns out, is a more stubborn problem. None of the four species drove phosphate concentrations below detectable limits, and the statistical analyses found no significant phosphate reduction relative to the incoming effluent in any tank. The likely explanation is that once the seaweed had stripped available nitrogen from the water, growth became nitrogen-limited, which meant phosphate uptake stalled too. Caulerpa actually showed the lowest outflow phosphate concentrations, possibly reflecting its relatively high phosphate requirements when growing at optimal rates, though since it wasn&#8217;t growing optimally in this trial, that interpretation is tentative. The phosphate gap represents an open engineering problem for IMTA systems: clearing nitrogen while leaving phosphate behind is progress, not a solution.</p>
<p>There were some quieter findings that deserve attention on their own terms. All four species raised pH and lowered dissolved carbon dioxide in the water passing through their tanks, a result of photosynthesis consuming CO2 alongside ammonia. Ulva and Gracilaria both showed carbon-to-nitrogen ratios in their harvested tissue suggesting they incorporate carbon efficiently relative to their nitrogen uptake, which has prompted speculation about whether either species could be cultivated specifically for carbon sequestration, perhaps by sinking the biomass into deep water as some researchers have proposed. Whether that proves commercially viable is another question.</p>
<p>&#8220;This work shows how integrating macroalgae into marine finfish aquaculture systems can reduce waste while producing a valuable secondary crop,&#8221; said John Stieglitz, who led the project as principal investigator. &#8220;It provides a practical framework for selecting species based on specific production goals, improving environmental performance while creating opportunities for better production economics and more diversified products using an IMTA approach.&#8221;</p>
<p>The metal content data introduces some nuance. Caulerpa accumulated the highest levels of arsenic, lead, and iron of any species tested, with lead at 0.7 parts per million dry weight, above the FDA&#8217;s 0.05 ppm guideline for food. The researchers note that Caulerpa in this study was wild-caught and may not reflect the profile of seaweed cultured in captivity over longer periods. Mercury and cadmium across all four species remained well within safety thresholds.</p>
<p>&#8220;With the significant interest in the development of marine aquaculture throughout the Southeast U.S. and Caribbean, these findings can be used to guide the selection of extractive macroalgae species in operations culturing marine finfish,&#8221; said Lasco, who is now a scientist at the South Carolina Department of Natural Resources. The guidance amounts to a decision tree, essentially: if your priority is nitrogen removal, go with Agardhiella or Ulva; if you want the highest-protein secondary crop, Caulerpa wins on that metric but needs longer acclimation and more careful light management; if carbon sequestration interests you alongside bioremediation, Ulva and Gracilaria both merit closer attention.</p>
<p>What the study doesn&#8217;t resolve, and makes no claim to, is whether this scales. The pilot system at the Rosenstiel School&#8217;s Experimental Hatchery is just that, a pilot, and the regulatory landscape for marine aquaculture in US waters remains genuinely complicated. But the basic case, that seaweed can eat what fish produce and emerge from the process as something worth selling, has now been made with enough detail that producers in the region have a species-by-species guide to act on.</p>
<p>Source: <a href="https://doi.org/10.1007/s10499-026-02441-1">Lasco et al., <em>Aquaculture International</em>, 2026. DOI: 10.1007/s10499-026-02441-1</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Can seaweed actually eliminate fish farm pollution, or just reduce it?</strong></p>
<p>For ammonia nitrogen, three of the four seaweed species tested in this study reduced concentrations in fish effluent water to below detectable limits within 8 to 11 days of reaching sufficient biomass. Phosphate is a different story: none of the species tested made a statistically significant dent in phosphate levels, likely because nitrogen ran out before phosphate could be fully consumed. So the honest answer is: it depends on which pollutant you&#8217;re measuring.</p>
<p><strong>Is seaweed grown in fish effluent actually safe to eat?</strong></p>
<p>Largely yes, though with caveats. All four species tested had mercury and cadmium levels well within FDA safety thresholds. The concern is arsenic and lead, particularly in Caulerpa racemosa, which showed lead levels above FDA food guidelines. The researchers note this may reflect the wild-caught origin of that particular batch rather than the species in general, and that extended captive culture could change the picture. Decisions about food use would need species-specific and site-specific metal testing.</p>
<p><strong>Why would you grow seaweed alongside fish instead of just treating the water chemically?</strong></p>
<p>Chemical treatment removes waste but produces nothing. Growing seaweed alongside fish turns that waste into a harvestable crop, whether for human food, animal feed, or nutritional supplements. In this study, seaweed cultured in fish effluent had notably higher protein content than wild counterparts of the same species, suggesting the nutrient-rich effluent actively fortifies the crop. The economic logic is that you&#8217;re not just managing a pollution problem; you&#8217;re farming a second product from inputs you&#8217;d otherwise be paying to neutralise.</p>
<p><strong>Could this type of fish-seaweed farming help with climate change?</strong></p>
<p>Potentially, in a few ways. The seaweed in this study consistently lowered dissolved carbon dioxide in water passing through the tanks, a direct result of photosynthesis. Two species, Ulva lactuca and Gracilaria caudata, showed particularly high ratios of carbon to nitrogen in their harvested tissue, meaning they incorporated carbon efficiently relative to other nutrients. Some researchers have proposed sinking seaweed biomass into the deep ocean as a long-term carbon storage strategy, and the findings here suggest both species could be worth investigating for that purpose.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/seaweed-could-solve-one-of-aquacultures-biggest-pollution-problems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575949</post-id>	</item>
		<item>
		<title>Ocean Current That Regulates Our Climate Has Been Weakening for Two Decades</title>
		<link>https://scienceblog.com/ocean-current-that-regulates-our-climate-has-been-weakening-for-two-decades/</link>
					<comments>https://scienceblog.com/ocean-current-that-regulates-our-climate-has-been-weakening-for-two-decades/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 09:58:39 +0000</pubDate>
				<category><![CDATA[Earth, Energy & Environment]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575947</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Sitting on the ocean floor, anchored to the continental slope of North America, a cluster of instruments has been recording something that nobody particularly wanted to see. Pressure sensors and current meters, deployed at depths where no light reaches, have logged the slow behavior of water moving in the dark for two decades. The data they&#8217;ve accumulated now points, with unusual consistency, to a change in one of the most consequential circulatory systems on Earth.</p>
<p>New research from the University of Miami Rosenstiel School of Marine, Atmospheric and Earth Science offers what scientists are calling some of the clearest direct observational evidence yet that the Atlantic Meridional Overturning Circulation, the vast conveyor-belt current that helps regulate climate across the entire North Atlantic basin, has been weakening since roughly the turn of the millennium.</p>
<p>The AMOC is, to put it plainly, not a single current. It&#8217;s an overarching system of ocean movement: warm surface water flows northward from the tropics toward Greenland and Europe, releases heat into the atmosphere (giving Northwestern Europe its unusually mild winters), cools and densifies, then sinks into the deep ocean and flows back southward as cold, dense water thousands of meters below the surface. The whole cycle drives heat redistribution on a planetary scale. Slow it down, and the consequences ripple outward in ways that climate scientists have spent decades trying to model with accuracy.</p>
<p>What&#8217;s made this particularly hard to pin down is the observational challenge. Measuring something as vast and diffuse as the AMOC is genuinely difficult, and until recently, the monitoring arrays scattered across the Atlantic had been doing it slightly differently from each other, making direct comparison something of a methodological headache.</p>
<p>The new study, published in <em>Science Advances</em>, took a different approach. Lead researcher Qianjiang Xing and physical oceanographer Shane Elipot applied the same analytical method consistently across data from four mooring arrays positioned along the western boundary of the North Atlantic, spanning from 16.5 degrees north (near the Caribbean) up to 42.5 degrees north (around Nova Scotia). Each array uses seafloor-anchored instruments to measure pressure, temperature, density and currents. By focusing specifically on changes in bottom pressure to estimate deep ocean flow below around 1,000 meters, the team could finally compare like with like across latitudes.</p>
<h2>Reading the Signal</h2>
<p>The result was a meridionally consistent decline. Across all four sites, across the better part of two decades, the deep western overturning transport showed the same direction of change. The strongest signal came from the southernmost array, the MOVE array at 16.5 degrees north, where the transport decline ran at a statistically significant rate of 0.67 Sverdrups per year between 2000 and 2022. (A Sverdrup, for reference, is one million cubic meters of water per second, so these are not trivial numbers.) The RAPID-MOCHA array at 26.5 degrees north showed a significant declining trend of 0.26 Sverdrups per year over roughly the same period. The trend at the third site, Line W near 39.5 degrees north, was also significant. Only the northernmost array fell short of statistical significance, though its trend pointed the same direction.</p>
<p>The geographic breadth matters. A slowdown detected at a single latitude could be local, temporary, noise. But finding the same signal consistently from the subtropics all the way to the subpolar region, using the same methodology, suggests something is happening basin-wide. &#8220;A weaker AMOC can shift weather patterns, potentially leading to more extreme storms, changes in rainfall, or colder winters in some regions,&#8221; said Elipot. &#8220;It can also influence sea-level rise along coastlines, affecting communities and infrastructure.&#8221;</p>
<p>One subtlety the paper takes care to acknowledge: the western boundary measurements capture only part of the picture. The full AMOC strength also depends on what&#8217;s happening along the eastern boundary, near Europe and Africa, and the team found that the eastern side appears to be showing a partial compensating strengthening. The signal from the west is decline; the signal from the east is, to some degree, offsetting it. The total AMOC, as measured by the long-running RAPID program, has been declining more slowly than the western boundary measurements alone would suggest. This isn&#8217;t a contradiction, exactly. It&#8217;s more that the western boundary, where dynamical changes tend to show up first, is being read as an early warning instrument while the system&#8217;s full trajectory remains somewhat unresolved.</p>
<h2>Canary in the Current</h2>
<p>Which brings the researchers to a proposal that has a certain practical elegance. Western boundary measurements, they argue, could serve as a cost-efficient, real-time early warning signal for AMOC behavior across the entire basin. The canary in the coal mine, Elipot&#8217;s team calls it, quite explicitly. If the western boundary is where anomalies from high-latitude forcing arrive first, propagated southward by coastally trapped waves, then keeping close watch on the western slope might give scientists their earliest read on what the full circulation is doing months or years before other signals emerge.</p>
<p>There&#8217;s an obvious context here that the paper doesn&#8217;t shy away from. For decades, climate models have predicted that the AMOC will weaken as greenhouse gas concentrations rise. The multimodel ensemble from the Coupled Model Intercomparison Project Phase 6 suggests a decline of roughly 7.6 Sverdrups per century since 1985. The observed rates in this new study are, if anything, faster than those model predictions, at least at the western boundary. Whether that discrepancy reflects genuine underestimation by models, the partial nature of western-boundary-only measurements, or natural variability superimposed on a trend is not yet settled.</p>
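How far apart the observed and modeled rates sit is easiest to see once both are in the same units. The sketch below does only that conversion, using the rates quoted in this article; note this is a comparison of rates, not a century-long forecast extrapolated from a 22-year local trend:

```python
# Put the observed western-boundary decline rates (Sverdrups per year)
# on the same per-century footing as the CMIP6 multimodel projection.
SV_PER_CENTURY_CMIP6 = 7.6          # CMIP6 ensemble rate, since 1985

observed_sv_per_year = {
    "MOVE (16.5N)": 0.67,           # 2000-2022
    "RAPID-MOCHA (26.5N)": 0.26,    # roughly the same period
}

for site, rate in observed_sv_per_year.items():
    per_century = rate * 100        # Sv/yr -> Sv/century
    ratio = per_century / SV_PER_CENTURY_CMIP6
    print(f"{site}: {per_century:.0f} Sv/century, "
          f"{ratio:.1f}x the CMIP6 ensemble rate")
```

Even the more modest RAPID-MOCHA trend runs several times faster than the ensemble projection, which is the sense in which the observations outpace the models.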
<p>What does seem clearer than before is that the signal is real, it&#8217;s coherent across latitudes, and it&#8217;s been accumulating for roughly twenty years. &#8220;This research helps scientists better predict how the climate may change in the coming decades,&#8221; Elipot said, &#8220;information that governments, businesses, and communities use to prepare for future environmental conditions.&#8221; The instruments on the seafloor will keep recording. Whether the trend they&#8217;re logging turns out to be the opening chapter of something more consequential, or a chapter with a more complicated resolution, depends partly on what happens to those eastern boundary dynamics, and partly on questions the ocean has not yet answered.</p>
<p>Source: <a href="https://doi.org/10.1126/sciadv.adz7738">Xing et al., &#8220;Meridionally consistent decline in the observed western boundary contribution to the Atlantic Meridional Overturning Circulation,&#8221; <em>Science Advances</em>, 8 April 2026</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Is the AMOC actually collapsing, or is this just a slowdown?</strong></p>
<p>The new evidence shows a consistent weakening trend over two decades, not a collapse. The current is still operating, but the deep overturning transport along the western boundary has been declining at rates faster than most climate models predicted. Whether this represents the beginning of a more dramatic shift or a trend that stabilizes depends on dynamics that scientists are still working to resolve, particularly how changes at the ocean&#8217;s eastern boundary interact with what&#8217;s happening in the west.</p>
<p><strong>Why does the AMOC matter for weather in Europe and North America?</strong></p>
<p>The AMOC acts as a massive heat pump, carrying warm tropical water northward and releasing that warmth into the atmosphere over the North Atlantic. Northwestern Europe&#8217;s relatively mild winters exist largely because of this heat transfer. A weaker circulation means less heat delivered northward, which can shift storm tracks, alter rainfall patterns, and push toward colder winters in some regions. The effects aren&#8217;t limited to Europe: changes in the current also influence hurricane activity and sea-level rise along North American coastlines.</p>
<p><strong>How confident can scientists be in these measurements?</strong></p>
<p>More confident than before. One of the longstanding problems with AMOC monitoring has been that the various ocean arrays used different methods, making comparison difficult. This study applied the same analytical approach across four separate monitoring sites spanning 26 degrees of latitude, and found the same direction of change at all of them. Three of the four sites showed statistically significant declining trends. The consistency across methods and locations is what distinguishes this from earlier, noisier observational records.</p>
<p><strong>Could the eastern boundary changes cancel out the slowdown in the west?</strong></p>
<p>Partially, but not entirely. The study found that the eastern boundary of the Atlantic appears to be showing a compensating strengthening, which means the total AMOC is declining more slowly than the western measurements alone would suggest. But the researchers found the eastern trend does not fully offset the western decline, so the net overturning circulation is still weakening. Understanding the dynamics behind this east-west opposition is one of the open questions the paper flags for future work.</p>
<p><strong>What would it take to actually stop the AMOC?</strong></p>
<p>That remains one of the most contested questions in climate science. Models disagree significantly on whether the AMOC could cross a tipping point into a substantially weakened or collapsed state, and if so, at what level of warming or freshwater input from melting ice. What this new research adds is clearer observational evidence that a measurable decline has already been underway for roughly twenty years, giving researchers a more grounded baseline against which to test those model projections.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/ocean-current-that-regulates-our-climate-has-been-weakening-for-two-decades/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575947</post-id>	</item>
		<item>
		<title>New Method Reads a Cell&#8217;s Genes Without Killing It</title>
		<link>https://scienceblog.com/new-method-reads-a-cells-genes-without-killing-it/</link>
					<comments>https://scienceblog.com/new-method-reads-a-cells-genes-without-killing-it/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 09:54:14 +0000</pubDate>
				<category><![CDATA[Life & Non-humans]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575944</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>Every few minutes, a living cell does something quietly extraordinary: it sheds tiny membrane bubbles into the fluid around it. These extracellular vesicles carry molecular cargo (proteins, lipids, fragments of RNA) and cells have been doing this for so long that biologists spent decades trying to understand what the bubbles were for. A team at the Technical University of Munich has now turned that ancient cellular habit into something useful. They have engineered a way to load those bubbles with messenger RNA on demand, collect them from the culture dish, and read out which genes are active in the cell. The cell, meanwhile, keeps living.</p>
<p>The trick sounds simple enough. The implications are rather less so.</p>
<p>Until now, reading a cell&#8217;s transcriptome (the full complement of messenger RNAs it is currently producing, which tells you which genes are switched on) meant destroying the cell to get at the contents. You lysed it, extracted the RNA, sequenced it, and that was that. The cell was gone. If you wanted to watch a stem cell transform into a heart muscle cell over the course of a week, you had to sacrifice a fresh batch of cells at each time point and hope the different batches were comparable. It was a bit like trying to understand how a caterpillar becomes a butterfly by dissecting a different caterpillar each morning.</p>
<h2>Virus-Like Particles as Molecular Postmen</h2>
<p>The Munich team, led by neurobiological engineer Gil Westmeyer, solved this by repurposing machinery from HIV. The virus uses a protein called Gag to bud new particles out through the cell membrane: it is how HIV replicates itself, assembling copies and pinching them off into the bloodstream. Westmeyer&#8217;s group stripped the dangerous parts out, leaving just the budding mechanism, and fused it to a fragment of a protein that naturally grips messenger RNA by its poly(A) tail, the string of adenosine nucleotides that caps the end of most mammalian transcripts. When the modified Gag assembles at the cell membrane and buds off a vesicle, it drags RNA along for the ride. The resulting particles, around 65 nanometres across and not unlike natural extracellular vesicles in appearance, accumulate in the culture medium above the cells. Researchers collect the supernatant, crack open the vesicles, and sequence what is inside.</p>
<p>The method, which the team calls NTVE (non-destructive transcriptomics via vesicular export), showed high concordance with conventional lysis-based RNA sequencing: a Pearson correlation of 0.95 across roughly 14,500 detected genes. Mitochondrial transcripts, which should not be accessible from inside mitochondria to a budding mechanism at the outer cell membrane, were strongly depleted in the exported fraction. That depletion mattered: it confirmed the cell&#8217;s membranes were staying intact, that the RNA was being actively packaged and exported rather than leaking out of damaged cells.</p>
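The concordance figure is an ordinary Pearson correlation computed across per-gene expression values from the two methods. A toy illustration of that check follows; the expression values below are invented for the sketch, whereas the study&#8217;s actual r of 0.95 was computed across roughly 14,500 genes:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient: covariance of x and y
    # divided by the product of their standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log-scale expression for six genes, measured by
# conventional lysis-based sequencing and by vesicular export (NTVE).
lysis  = [8.1, 5.3, 10.2, 2.4, 7.7, 6.0]
export = [7.9, 5.6,  9.8, 2.1, 7.5, 6.4]

r = pearson(lysis, export)
print(round(r, 3))
```

A value near 1 means the exported vesicles report essentially the same expression profile as destroying the cell would have, which is the study&#8217;s central validation claim.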
<p>&#8220;This method provides biomedical research with a powerful new tool,&#8221; Westmeyer said. &#8220;We will gain day-by-day insights into the maturation and functionality of stem cells. This could make future cell therapies more precise and effective.&#8221;</p>
<h2>Watching a Heart Cell Being Born</h2>
<p>To demonstrate what that daily access actually looks like, the team differentiated human induced pluripotent stem cells into beating cardiomyocytes over nine days, sampling the transcriptome from the supernatant each morning without touching the cells below. By day six, the cells had started contracting visibly (about once per second, roughly the resting rate of a human heart). The gene expression data told the molecular story leading up to that moment: a cascade of cardiac-specific transcripts rising in sequence, each wave activating the next, in a pattern the researchers could now observe continuously rather than reconstruct from snapshots. Standard markers previously used to identify the three embryonic germ layers turned out to be less informative than the time-resolved profiles; NTVE flagged better candidates, genes with expression patterns that more cleanly distinguished ectoderm from mesoderm from endoderm across the full time course.</p>
<p>The system also works in primary neurons, which are notoriously difficult to transfect, so the team delivered the NTVE machinery via adeno-associated virus instead, then treated the neurons with a drug that activates a major signaling pathway. The exported transcriptome correctly captured the expected gene expression changes, including upregulation of Bdnf and several other CREB pathway targets, without any need to harvest the cells.</p>
<p>There are real limitations. NTVE cannot currently reach nuclear-localized transcripts, which include many non-coding RNAs; the RNA stays cytoplasmic before it gets packaged. Single-cell resolution is not yet possible either: the exported transcripts cannot be traced back to individual cells within a mixed population, though affinity tags on the vesicle surface can at least separate transcriptomes from two different co-cultured cell types. And the method requires introducing the Gag-based machinery into cells via lentivirus or transposon, which adds complexity, particularly in primary tissue.</p>
<h2>Beyond Simple Monitoring</h2>
<p>Armbrust and Truong noted that the approach opens territory beyond simple observation. &#8220;Our new method also makes it possible to genetically prepare cells for implantation into tissue,&#8221; they said. &#8220;In addition, NTVE can potentially be used for long-term analysis of organoids as well as for further research into tumors and their intercellular communication.&#8221; That last point hints at something the paper also demonstrates: the vesicle system can be run in reverse, pseudotyped with glycoproteins to fuse with specific target cells and deliver mRNA or even CRISPR editing machinery from sender to receiver cells in co-culture. The same particle that reads gene expression can, with modification, write into the genome of a neighboring cell.</p>
<p>Organoids are perhaps where the stakes are highest. Three-dimensional organ models grown from stem cells are inherently variable; each organoid is its own experiment, and comparing one sacrificed on day three with another sacrificed on day seven introduces noise that is very hard to control for. A method that samples the transcriptome of the same organoid repeatedly, using it as its own longitudinal baseline, could change how drug testing and disease modeling in these systems actually work. Whether NTVE scales into three-dimensional tissue with sufficient efficiency remains to be seen, but the Munich group is already thinking about integrating it with microfluidic systems that could cycle nutrients and collect vesicles continuously, closing the loop on real-time transcriptomic feedback during differentiation.</p>
<p>The cell, it turns out, was always willing to tell us what it was doing. We just needed to stop killing it long enough to listen.</p>
<p><em>DOI: <a href="https://doi.org/10.1038/s41467-026-72072-w">10.1038/s41467-026-72072-w</a></em></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why did scientists have to destroy cells just to read their gene activity?</strong></p>
<p>Measuring which genes a cell is actively using requires accessing its internal messenger RNA, and the standard approach involves breaking the cell open entirely, a process called lysis. Once lysed, the cell is gone, making repeated measurements on the same cell impossible. NTVE sidesteps this by coaxing the cell to export RNA-carrying vesicles into the surrounding fluid, where they can be collected without touching the cell itself.</p>
<p><strong>How does repurposing HIV machinery make this work?</strong></p>
<p>HIV uses a protein called Gag to bud new virus particles out through the host cell&#8217;s membrane. The Munich team stripped away the infectious components and fused the budding machinery to a fragment that grabs messenger RNA by its poly(A) tail. The result is a cellular assembly line that packages RNA into tiny membrane bubbles and pinches them off, harmlessly, into the culture medium above.</p>
<p><strong>Could this method be used to track cancer cells without harming them?</strong></p>
<p>That is one of the explicit goals. Because NTVE vesicles carry a snapshot of which genes are active at the moment of export, monitoring the exported transcriptomes of tumor cells over time could reveal how they respond to drugs, acquire resistance, or communicate with surrounding tissue, all without disturbing the tumor model. The researchers also note that the vesicle system can be equipped to study intercellular communication, which is particularly relevant to how cancers recruit neighboring cells to support their growth.</p>
<p><strong>What is stopping NTVE from being used in whole organs or living animals?</strong></p>
<p>Currently, the system requires delivering the Gag-based export machinery into cells, which adds a genetic engineering step that is more manageable in cell culture than in complex tissue. The method also cannot yet reach transcripts that stay inside the nucleus, and it cannot assign exported RNA to individual cells within a mixed population. Extending NTVE into three-dimensional organoids and eventually into ex vivo tissue is the next target, but each step adds engineering challenges around delivery efficiency and vesicle collection.</p>
<p><strong>Is the gene activity data from NTVE as reliable as conventional sequencing?</strong></p>
<p>Across roughly 14,500 detected genes, the exported transcriptomes showed a Pearson correlation of 0.95 with standard lysis-based RNA sequencing, a level of agreement that held across experiments using different cell types and different perturbations. Some transcripts, particularly longer ones, showed slight differences in coverage profiles, and nuclear RNA is not accessible at all. But for cytoplasmic messenger RNA, the method captures the gene expression landscape with fidelity that the researchers describe as competitive with existing approaches and, in some respects, superior to them.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/new-method-reads-a-cells-genes-without-killing-it/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575944</post-id>	</item>
		<item>
		<title>Sharing Less Data Could Make AI Models Smarter and Greener</title>
		<link>https://scienceblog.com/sharing-less-data-could-make-ai-models-smarter-and-greener/</link>
					<comments>https://scienceblog.com/sharing-less-data-could-make-ai-models-smarter-and-greener/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 09:37:33 +0000</pubDate>
				<category><![CDATA[Earth, Energy & Environment]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575940</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>One tenth of one percent. That is all it takes. Out of billions of parameters inside a large language model, only a tiny, specially chosen sliver, roughly 0.1%, carries the information that actually matters when the model needs to learn something new. The rest is, in a sense, dead weight during the update. Recognising this has led a team of researchers at Stevens Institute of Technology to an algorithm they call MEERKAT, which might, perhaps more quietly than most AI breakthroughs, change how these models are trained at scale.</p>
<p>The problem MEERKAT addresses is unglamorous but genuinely consequential. Federated learning is the technique that lets many different institutions or devices collaborate on training a shared AI model without anyone having to hand over their raw data. Hospitals can pool medical knowledge; schools can improve tutoring software; research groups can share insights across borders, all without sharing the patient records or student files that make those insights possible. The catch is that making it work requires the participants to constantly synchronise their versions of the model, and that synchronisation is, currently, a bandwidth nightmare.</p>
<p>&#8220;It&#8217;s too much data to share,&#8221; says Yide Ran, the PhD candidate who drove the project at Stevens. &#8220;It&#8217;s like sending in an entire encyclopedia when you only need to change a few entries. But you really don&#8217;t need to do that.&#8221; Standard federated learning transmits the entire model, billions of parameters, every time collaborators need to sync. Those transmissions run into gigabytes. Because of the cost, synchronisation happens infrequently, which means the collaborating models drift apart between updates, each pulling in directions shaped by its own local data. The technical term for this is non-IID drift, drift driven by data that is not independent and identically distributed across participants, and it degrades the final model&#8217;s quality in ways that can be surprisingly hard to recover from.</p>
<h2>The 0.1% That Does the Work</h2>
<p>MEERKAT&#8217;s core insight is that not all parameters are equally worth updating. The team identified, using gradients from the model&#8217;s pre-training phase, which tiny fraction of parameters are most sensitive to loss, most likely to shift meaningfully in response to new data. These sensitive parameters turned out to be highly concentrated: the top 0.1% had average squared gradients roughly 52 times larger than the next tier. Focus updates on those, ignore the rest, and you can shrink the transmitted update from gigabytes to megabytes.</p>
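<p>In code, the selection step the team describes reduces to ranking parameters by squared gradient and keeping only the top sliver. The sketch below illustrates that idea rather than reproducing the paper&#8217;s implementation; the toy gradient array and function name are invented for the example.</p>

```python
# Illustrative sketch: keep only the most loss-sensitive parameters,
# ranked by their squared gradients from pre-training, and zero out
# everything else before transmitting an update.
import numpy as np

def sensitive_mask(pretrain_grads, keep_fraction=0.001):
    """Boolean mask marking the top `keep_fraction` of parameters
    by squared pre-training gradient."""
    scores = np.asarray(pretrain_grads, dtype=float) ** 2
    k = max(1, int(len(scores) * keep_fraction))
    threshold = np.partition(scores, -k)[-k]  # k-th largest score
    return scores >= threshold

rng = np.random.default_rng(0)
grads = rng.normal(size=100_000)          # stand-in for averaged gradients
mask = sensitive_mask(grads, 0.001)       # ~0.1% of parameters survive
sparse_update = np.where(mask, grads, 0)  # only these entries get sent
```

With 100,000 toy parameters, the mask keeps 100 of them, which is the shrinkage that turns a gigabyte-scale transmission into a megabyte-scale one.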
<p>&#8220;So you are no longer sending the entire encyclopedia when only a few key definitions have changed,&#8221; says Denghui Zhang, an assistant professor in information systems and analytics at Stevens who advised on the project. His co-advisor, Zhaozhuo Xu, an assistant professor of computer science, points to the downstream benefit: &#8220;Because updates are so tiny, data can now be sent back and forth more often. The result is a much better shared model.&#8221;</p>
<p>The communication reduction is over 1,000-fold in some configurations. Updates that previously consumed gigabytes of bandwidth can now be transmitted as a few megabytes. But the efficiency gain is only half the story. The other half involves backpropagation, the standard mathematical process AI uses to correct its own errors during training, which requires the model to run calculations backward through its entire network, caching intermediate values along the way. Memory-hungry, energy-intensive work. MEERKAT bypasses it entirely by using zeroth-order optimisation: instead of computing gradients analytically, it simply nudges the model parameters slightly in one direction, checks whether performance improved, then uses that comparison to guide the next step. No backward pass. No gradient caching. Considerably less energy.</p>
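<p>The forward-only update is simple enough to sketch. The toy example below is a minimal stand-in for the general technique, not MEERKAT itself: it estimates a descent direction from two forward evaluations of a loss function and applies it only to a masked subset of parameters. The quadratic loss and all names are invented for the illustration.</p>

```python
# Minimal zeroth-order (forward-only) update: perturb the parameters,
# compare the loss at +eps and -eps, and step using that difference.
# No backward pass, so no cached activations.
import numpy as np

def zeroth_order_step(params, loss, mask, eps=1e-3, lr=0.1, rng=None):
    """One update estimated from two forward passes, perturbing only
    the masked parameters."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.standard_normal(params.shape) * mask  # random sparse direction
    # Two forward evaluations replace the backward pass entirely.
    g_scale = (loss(params + eps * z) - loss(params - eps * z)) / (2 * eps)
    return params - lr * g_scale * z

# Toy quadratic "loss" standing in for a model's training loss.
target = np.array([1.0, -2.0, 0.5, 3.0])
loss = lambda p: float(np.sum((p - target) ** 2))
mask = np.ones(4)  # update all four toy parameters

p = np.zeros(4)
rng = np.random.default_rng(42)
for _ in range(500):
    p = zeroth_order_step(p, loss, mask, rng=rng)
# p ends up close to `target` without any analytic gradient being computed.
```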
<p>There is a subtlety here that took some working through. Zeroth-order methods are generally less precise than backpropagation; they estimate rather than calculate. Applied naively to a full model, that imprecision can destabilise training. But applied to an extremely sparse, carefully chosen subset of parameters? The researchers found, perhaps counter-intuitively, that it actually outperforms full-parameter zeroth-order approaches on most tasks. Something about the concentrated sensitivity of the chosen parameters makes the rough estimation good enough and then some. The paper, published at the International Conference on Learning Representations, tested this across three different language models (LLaMA-3.2-1B, Qwen2-1.5B, and Gemma2-2B) and seven benchmarks, and MEERKAT beat the standard approaches in the large majority of conditions.</p>
<h2>Reading the Gradient Tea Leaves</h2>
<p>The team also tackled the drift problem directly with what they call MEERKAT-VP. When collaborating devices have very different data, some might be trained almost entirely on one type of input while others see a balanced spread; those outliers pull the shared model in misleading directions. MEERKAT-VP uses something called a virtual path to track how each participant&#8217;s model evolves during local training, without ever accessing the underlying data itself. From these paths, the server can compute a metric called GradIP, which measures how a client&#8217;s estimated gradients align with the original pre-training gradients. Clients with highly skewed data turn out to produce GradIP scores that steadily decay toward zero; clients with balanced data oscillate. The pattern is distinctive enough to be used as a diagnostic. Those with extreme skew are then given reduced influence in the next synchronisation round, their local training limited to a single step, which substantially improves overall model quality.</p>
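<p>The diagnostic logic can be sketched in a few lines. Everything below is illustrative, the decay test in particular is a stand-in for the paper&#8217;s actual criterion, but it captures the distinction the researchers describe: scores that decay steadily toward zero versus scores that oscillate.</p>

```python
# Illustrative GradIP-style diagnostic: score each client by the inner
# product between its estimated gradients and reference pre-training
# gradients, then flag clients whose score traces decay toward zero.
import numpy as np

def grad_ip(client_grad, pretrain_grad):
    """Inner product between a client's estimated gradient and the
    reference pre-training gradient."""
    return float(np.dot(client_grad, pretrain_grad))

def looks_skewed(scores, window=5):
    """Flag a client whose |score| trace has collapsed toward zero
    by comparing early and late training steps."""
    early = np.mean(np.abs(scores[:window]))
    late = np.mean(np.abs(scores[-window:]))
    return late < 0.2 * early

steps = np.arange(30)
decaying = np.exp(-0.3 * steps)                # skewed client: decays to ~0
oscillating = 0.8 * np.cos(0.7 * steps) + 0.1  # balanced client: fluctuates
```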
<p>Not all of this is settled. The approach relies on the assumption that the most sensitive parameters identified during pre-training remain the most relevant parameters for downstream fine-tuning, and while the experiments support this, the mechanism is not fully understood. There&#8217;s also the question of whether the GradIP diagnostic holds up across much larger and more heterogeneous networks; the experiments used ten clients, which is a manageable number, but real federated deployments sometimes involve thousands of participants with wildly varying computational resources.</p>
<p>Still, the implications are worth taking seriously. AI training has a substantial and growing energy footprint, and most public attention goes to the enormous centralised data centres at the top of that footprint. Federated learning represents a different model, one that distributes training across many smaller devices, and could in principle be far more efficient, but only if the communication overhead can be brought under control. MEERKAT pushes meaningfully in that direction. For resource-constrained institutions, the kind that cannot afford to upload and download gigabytes of model data repeatedly, a thousandfold reduction in communication cost is the difference between participation and exclusion. Healthcare systems in countries with limited bandwidth, educational platforms serving schools with modest infrastructure, cross-border research collaborations that cannot share patient data across jurisdictions, all of these stand to benefit if the approach proves out at scale.</p>
<p>Whether MEERKAT travels the usual path from conference paper to widespread adoption depends on how well it generalises beyond the benchmarks tested so far. But the underlying observation, that a vanishingly small fraction of an AI&#8217;s parameters do the heavy lifting when it comes to learning, has a kind of elegant parsimony that suggests it might be pointing at something deep about how these models work.</p>
<hr />
<p>Source: Yide Ran et al., &#8220;Mitigating Non-IID Drift in Zeroth-Order Federated LLM Fine-Tuning with Transferable Sparsity,&#8221; ICLR 2026. <a href="https://openreview.net/forum?id=2DuMBKVbX2">https://openreview.net/forum?id=2DuMBKVbX2</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why does it matter that hospitals and schools can&#8217;t share their raw data when training AI?</strong></p>
<p>Privacy regulations in most countries prohibit sharing identifiable patient records, student files, or other sensitive information with outside parties, even for beneficial research purposes. Federated learning works around this by training AI models locally on each institution&#8217;s data and sharing only mathematical updates rather than the data itself. The problem until now has been that those updates were enormous, making frequent synchronisation impractical and limiting which institutions could realistically participate.</p>
<p><strong>Could training an AI on 0.1% of its parameters actually work as well as training all of them?</strong></p>
<p>That&#8217;s the counterintuitive finding at the heart of this research. The team identified that the most sensitive parameters during pre-training carry a disproportionate share of the gradient signal, with average squared gradients roughly 52 times higher than the next group. Restricting updates to these parameters not only reduces communication costs dramatically but actually improves performance compared to updating everything, because the sparse approach concentrates the limited signal available from zeroth-order estimation where it matters most.</p>
<p><strong>What is zeroth-order optimisation and why does it use less energy?</strong></p>
<p>Standard AI training uses backpropagation, which computes gradients by running calculations backward through the entire network and caching large amounts of intermediate data. Zeroth-order methods skip all of that; they simply perturb the model slightly, observe whether performance improved, and use the result to guide the next step. This requires only forward passes through the model, which needs far less memory and considerably less compute, making it better suited to devices with limited resources.</p>
<p><strong>How does MEERKAT know which participants have unreliable data?</strong></p>
<p>It uses a signal called GradIP, which measures how closely a participant&#8217;s estimated training gradients align with the gradients the model used during its original pre-training. Participants with highly skewed or imbalanced data tend to produce GradIP scores that steadily decline toward zero over their local training steps, while participants with balanced data produce scores that fluctuate. The server can detect this pattern without ever seeing the participants&#8217; actual data, then limit the influence of problematic clients in the next synchronisation round.</p>
<p><strong>Is this approach limited to healthcare and education, or could it apply more broadly?</strong></p>
<p>The underlying problem, training AI models collaboratively without sharing private data while managing bandwidth constraints and unequal data quality, arises in many domains. Financial institutions comparing fraud patterns, legal firms sharing case insights, and manufacturing systems pooling equipment performance data all face versions of the same challenge. MEERKAT&#8217;s authors note the approach is designed to be transferable; the sensitive parameter mask worked across multiple different calibration datasets including code and medical text, suggesting it isn&#8217;t narrowly specialised.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/sharing-less-data-could-make-ai-models-smarter-and-greener/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575940</post-id>	</item>
		<item>
		<title>Solar Cell Byproduct Could Beam Data Through Chips at the Speed of Light</title>
		<link>https://scienceblog.com/solar-cell-byproduct-could-beam-data-through-chips-at-the-speed-of-light/</link>
					<comments>https://scienceblog.com/solar-cell-byproduct-could-beam-data-through-chips-at-the-speed-of-light/#respond</comments>
		
		<dc:creator><![CDATA[Ben Sullivan]]></dc:creator>
		<pubDate>Tue, 05 May 2026 09:30:33 +0000</pubDate>
				<category><![CDATA[Physics & Mathematics]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://scienceblog.com/?p=575937</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p>There&#8217;s a mineral called bustamentite that almost nobody studies. It forms in flat hexagonal plates, grows readily in warm water, and turns up as an unwanted contaminant in perovskite solar cells. For decades it sat at the margins of materials science, useful mostly as a precursor for more fashionable compounds. Then a group of physicists decided to look at it very carefully in the terahertz range, and found something that had been hiding in plain sight since the 1970s: bustamentite, better known as lead iodide, can compress light to a degree that has no real parallel among materials tested at these frequencies.</p>
<p>The finding, published in <em>Nature Communications</em>, positions lead iodide as the first material to bring the strange physics of hyperbolic phonon-polaritons into the deep terahertz, a spectral region that the nanophotonics community has been trying to crack for years.</p>
<p>Phonon-polaritons are hybrid quasiparticles. In polar crystals like lead iodide, light and atomic vibrations can couple so tightly that neither propagates independently, and what you get instead is something with properties neither possesses alone. &#8220;It&#8217;s as if the phonon were dressed in light,&#8221; says Raul de Oliveira Freitas, who coordinates the Imbuia beamline at Brazil&#8217;s synchrotron facility LNLS-CNPEM and led the study. &#8220;The propagation characteristics and interaction with matter of these quasiparticles differ from both isolated light and isolated phonons.&#8221; In materials with a particular kind of crystalline anisotropy, these hybrids become hyperbolic: they travel through the bulk of the crystal along cone-shaped wavefronts, guided by the geometry of the lattice itself rather than by conventional optical boundaries.</p>
<h2>Why Terahertz Is Hard</h2>
<p>The terahertz gap has frustrated physicists partly because of scale. Light in this range has wavelengths of hundreds of micrometers. That&#8217;s a problem if you want to route it through circuits that fit on a chip. Confining terahertz radiation to the nanoscale requires suppressing the diffraction limit, the fundamental constraint that prevents ordinary optics from resolving or manipulating anything smaller than roughly half the wavelength. &#8220;In classical optics, it isn&#8217;t possible to observe or manipulate structures much smaller than the wavelength of light,&#8221; Freitas says. &#8220;With polaritons, we&#8217;ve managed to overcome that limit.&#8221;</p>
<p>Several materials had already shown this trick in the infrared range. Hexagonal boron nitride is the canonical example, with polaritons that propagate with low losses and squeeze infrared wavelengths into extraordinarily small volumes. But pushing comparable behavior to longer terahertz wavelengths meant finding a material with the right anisotropy at the right frequencies, and most of the obvious candidates didn&#8217;t have it. Lead iodide, it turns out, does. Its crystal structure produces a pronounced mismatch in how it responds to electric fields along different axes, generating a hyperbolic response in a band that runs from about 1.55 to 3.03 terahertz, a range that overlaps neatly with where next-generation wireless communication systems are heading. &#8220;Today, Wi-Fi and 5G operate at frequencies of a few gigahertz,&#8221; Freitas notes. &#8220;But there is interest in moving toward hundreds of gigahertz, or even terahertz, because the higher the frequency, the greater the bandwidth and data transmission capacity.&#8221;</p>
<p>To see the polaritons directly, the team used scattering-type scanning near-field optical microscopy (s-SNOM), a technique in which a platinum-coated atomic force microscope tip is illuminated by a terahertz laser. The tip acts as an antenna, concentrating the field at its apex to a hotspot of a few tens of nanometers. &#8220;The electric field density in s-SNOM probes is up to 10<sup>5</sup> times higher than in free waves,&#8221; Freitas says. The result is that a wave 200 micrometers long gets squeezed into a volume smaller than 50 nanometers. In the images from the microscope, the polaritons appear as ripple patterns of alternating bright and dark fringes, propagating outward from the edges of thin lead iodide crystals with a regularity that matched theory almost exactly.</p>
<h2>A Quality Factor That Surprises</h2>
<p>Quality factor is a measure of how long a system sustains oscillation before dissipating its energy as heat. High quality factors are what separate a useful resonator from a leaky one. Lead iodide&#8217;s quality factor in the terahertz range, assessed by the team using a figure of merit that tracks how many oscillation cycles a polariton completes before it damps out, reached 17 for a crystal 340 nanometers thick. That&#8217;s on par with hexagonal boron nitride in its native infrared range, and comfortably above molybdenum trioxide, the other leading benchmark material, whose maximum in the mid-infrared is around 12. &#8220;The longer the system oscillates, the higher the quality factor,&#8221; Freitas says. &#8220;PbI<sub>2</sub> performed comparably to hexagonal boron nitride, which is the reference material in the infrared range.&#8221;</p>
<p>The confinement numbers are perhaps more striking. A 144-nanometer-thick flake compressed the terahertz wavelength by a factor of 264: that is, the polariton wavelength inside the crystal was 264 times shorter than the free-space wavelength of the light driving it. Go to a flake thinner than 100 nanometers and the factor exceeds 300. This happens because thinner crystals force the polariton into a tighter mode, and lead iodide&#8217;s high ionicity, a large charge transfer in its chemical bonds, sustains the coupling without extracting too heavy a penalty in losses. The material&#8217;s extreme dielectric anisotropy, captured in a Lyddane-Sachs-Teller ratio of 4.20 (versus 1.41 for hexagonal boron nitride), gives it a hyperbolic band that is far broader than in comparable materials, which translates into more usable frequencies and more flexible device design.</p>
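<p>The scale of that compression is easy to sanity-check from the numbers quoted here; the short calculation below assumes nothing beyond them.</p>

```python
# Back-of-envelope check: free-space wavelength at a given terahertz
# frequency, divided by the reported confinement factor, gives the
# polariton wavelength inside the crystal.
C = 299_792_458.0  # speed of light, m/s

def polariton_wavelength_um(freq_thz, confinement):
    free_space_um = C / (freq_thz * 1e12) * 1e6  # wavelength in micrometers
    return free_space_um / confinement

# At 2 THz the free-space wavelength is about 150 micrometers; a
# confinement factor of 264 squeezes it to well under a micrometer.
lam = polariton_wavelength_um(2.0, 264)
```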
<p>What makes the finding practically appealing is how unremarkable lead iodide is to produce. Growing hexagonal boron nitride to research quality requires extreme pressure, high temperatures, and decades of accumulated expertise; only a handful of groups worldwide can do it reliably. Lead iodide crystallizes from water. &#8220;Simply dissolve the salt in water until a supersaturated solution is obtained and heat it to about 80 degrees Celsius,&#8221; Freitas explains, &#8220;something that can be done on a household stove. During cooling, the material crystallizes, forming structures that can be collected.&#8221; The synthesis is hydrothermal, the precursors are cheap and common, and because iodine is monoisotopic and lead has minimal natural isotopic variation, the resulting crystals are reproducible in a way that matters enormously for device fabrication.</p>
<p>The broader vision is for photonic circuits inside chips, where information moves on light rather than electrons. &#8220;Currently, information is transmitted within devices via electrons,&#8221; Freitas says. &#8220;Using light can drastically increase speed and reduce losses. It&#8217;s analogous to what happened in the field of telecommunications. Before, we used electrical cables; today, we use optical fibers.&#8221; The same logic, applied at the scale of integrated circuits, suggests a future where processor-to-processor communication happens at the speed of light through terahertz waveguides, consuming a fraction of the energy that copper interconnects require. The team has designed experiments around lead iodide functioning as a resonator, a beam splitter, and a modulator; all three functions are demonstrated in the paper.</p>
<p>The material&#8217;s role in perovskite research adds an adjacent strand to the story. Lead iodide is a ubiquitous precursor for the perovskite compounds now competing seriously with silicon in solar cell efficiency tables, and understanding its optical phonon behavior may help explain why excess lead iodide in perovskite films sometimes passivates defects and sometimes accelerates degradation. Whether or not that turns out to be a useful connection, &#8220;the expectation of the scientific community,&#8221; Freitas says, &#8220;is to make light circuits increasingly present in everyday devices.&#8221; The material that was cluttering up solar cells may turn out to be what makes that possible.</p>
<p><a href="https://doi.org/10.1038/s41467-026-69027-6">DOI: 10.1038/s41467-026-69027-6</a></p>
<hr />
<h2>Frequently Asked Questions</h2>
<p><strong>Why does compressing terahertz light to the nanoscale matter for future technology?</strong></p>
<p>Terahertz radiation has wavelengths of hundreds of micrometers in free space, which makes it impossible to route through chip-scale components using conventional optics. Phonon-polaritons in lead iodide compress that wavelength by a factor of 264 or more, shrinking the effective size of the light to a point where it could feasibly travel through nanoscale waveguides and resonators on a chip. That opens a path to optical data links inside devices that would be dramatically faster and more energy-efficient than the copper interconnects used today.</p>
<p><strong>How is lead iodide different from the materials researchers normally use for this kind of physics?</strong></p>
<p>Hexagonal boron nitride is the gold standard for phonon-polariton research, but it only works in the infrared range and is notoriously difficult to synthesize at research quality. Lead iodide crystallizes from warm water in a process that requires no specialist equipment, and its natural isotopic consistency means crystals are reproducible. Crucially, it operates in the terahertz range rather than the infrared, which is where next-generation wireless communication is heading.</p>
<p><strong>Is lead iodide safe to work with at this scale?</strong></p>
<p>Lead compounds require standard laboratory precautions because of lead&#8217;s well-documented toxicity. The paper does not address this directly, but lead iodide is already handled routinely in perovskite solar cell research, where it is synthesized and processed in academic and industrial labs worldwide. Whether its toxicity profile presents a barrier to large-scale chip integration is a practical question that research at this stage has not yet confronted.</p>
<p><strong>Could this technology actually replace electronic circuits in the near future?</strong></p>
<p>The research is at the basic science stage, demonstrating that the material can confine and guide terahertz light with low losses. The gap between a proof-of-concept polariton waveguide and a manufacturable photonic chip is significant, involving challenges in fabrication, integration with existing electronics, and device engineering that remain largely unsolved. The researchers describe lead iodide as a candidate for resonators, beam splitters, and modulators, which are the building blocks for circuits, but none of those components have been built into working devices yet.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://scienceblog.com/solar-cell-byproduct-could-beam-data-through-chips-at-the-speed-of-light/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">575937</post-id>	</item>
	</channel>
</rss>
