Illustration: Josie Ford
Feedback is New Scientist’s popular sideways look at the latest science and technology news. You can submit items you believe may amuse readers to Feedback by emailing feedback@newscientist.com
A recipe for concussion?
If you ever feel like you’re banging your head against a brick wall, spare a thought for Nicole Ackermans, a functional morphologist at the University of Alabama. Her lab studies the mechanisms of neurodegeneration, and to do so, she watches what happens to goats when they headbutt each other.
Goats, of course, headbutt to establish dominance. Ackermans wants to know what all those impacts do to their brains, she explained on Bluesky, where she posts as Dr Headbutt. To do this, she needs to examine the brains of goats who do a lot of headbutting versus those who do little.
The trouble is, “there’s almost no data on how often goats headbutt,” Ackermans explained. “If we know exactly how often (and how hard) each goat headbutts, we can tie that to their pathology level by staining their brains.”
Hence, on 9 December, Ackermans’s lab produced a livestream of goats headbutting. She and her students were analysing videos of their goats, noting headbutts, while explaining their work to anyone who happened to tune in. Astute readers might wonder why they haven’t come up with a simpler way of tracking the headbutts. “We’re working on an automated way of counting headbutts, but we can’t verify it unless we know the real number of headbutts,” wrote Ackermans. “And so here we are.”
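For readers who like their goats quantified, here is a purely illustrative Python sketch of the kind of ground-truthing Ackermans describes. The function, time tolerance and numbers are our own assumptions, not the lab’s actual pipeline.

```python
# Purely illustrative sketch (not the lab's pipeline): scoring an automated
# headbutt detector against hand-logged ground truth, which is exactly why
# the manual tally from the livestream matters. Event times are in seconds.

def score_detector(auto_events: list[float], manual_events: list[float],
                   tolerance_s: float = 1.0) -> dict[str, int]:
    """Greedily match each detection to an unclaimed manual annotation
    that falls within tolerance_s seconds of it."""
    unmatched = list(manual_events)
    hits = 0
    for t in sorted(auto_events):
        match = next((m for m in unmatched if abs(m - t) <= tolerance_s), None)
        if match is not None:
            unmatched.remove(match)
            hits += 1
    return {"true_positives": hits,
            "false_positives": len(auto_events) - hits,
            "missed": len(unmatched)}

# Detector fired at 5.2 s, 40.0 s and 41.5 s; humans logged butts at 5.0 s
# and 40.1 s, so the third detection is a phantom headbutt:
print(score_detector([5.2, 40.0, 41.5], [5.0, 40.1]))
# -> {'true_positives': 2, 'false_positives': 1, 'missed': 0}
```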
As with previous hours-long livestreams, Feedback hasn’t watched the whole thing. We did sit with it for a while, though, and, make no mistake, there were a lot of headbutts. Dr Headbutt’s final tally works out at an average of about 100 butts an hour. “Which is higher than we expected,” she says.
The lab also studies other headbutting animals, and Feedback looks forward with anticipation to a livestream of them.
We will add, for those of you who still use any form of social media, that Ackermans’s Bluesky feed is a delight. On 12 December, she posted a photo of a big cardboard box, with the message: “Ooh nice, my box full of heads just arrived.”
Dinomammoths
Feedback really doesn’t want to keep doing items about Lego. People will start to think we have an animus against the toy bricks, or that we are doing stealth marketing for them, neither of which is the case. However, palaeogeneticist Ross Barnett has drawn our attention to a little book the company has produced, How to Build LEGO Dinosaurs, which contains instructions for 30 models.
A closer look reveals the issue. There are four models on the front cover, one of which is a pterosaur, which isn’t a dinosaur. We might let them off on that one, because it is, at least, an archosaur from the correct geological era. However, the back cover has a number of additional models, perhaps the most prominent being a woolly mammoth. Some of the others are a little hard to pin down (is that another pterosaur or an Archaeopteryx?), but, as Barnett puts it, “I reckon 5/8 of the main images aren’t dinosaurs.”
Perhaps all this is pedantic, but then Feedback can think of no more pedantic an audience than 7-year-olds who are into dinosaurs. That said, full marks to the book’s creators for including instructions for a bespectacled dinosaur. Feedback does enjoy a good dad joke, and also a bad one.
AI (doesn’t) go bad
One of the most pressing concerns of the AI era is training generative AIs to behave appropriately, so they don’t turn us all into paper clips or encourage more people to read Dan Brown novels. A lot of effort has been expended on achieving “AI alignment”.
According to researchers in China, this may have had an unintended consequence. “Large Language Models (LLMs) are increasingly tasked with creative generation, including the simulation of fictional characters,” they explain in a paper posted on arXiv. However, “the safety alignment of modern LLMs creates a fundamental conflict with the task of authentically role-playing morally ambiguous or villainous characters”. Or as reporter Matthew Sparkes put it while highlighting this study to Feedback: “AI models are trained not to say bad stuff, so they’re incapable of creating good storylines involving villains.”
The researchers charged various LLMs with roleplaying as a range of characters. These were divided into four categories depending on their level of righteousness. Level 1 was made up of “virtuous, heroic, and altruistic characters”, exemplified by Jean Valjean from Les Misérables. Things descended all the way down to “villains”, exemplified by Joffrey Baratheon from George R. R. Martin’s A Song of Ice and Fire.
The less moral the characters, the worse the AIs were at portraying them, according to the paper. At first, Feedback didn’t quite understand how this worked: who was rating the LLMs’ performances? Then we saw the key passage: “Our evaluation protocol used a structured rubric to identify and penalize inconsistencies in the portrayal of the main characters. We… leverage LLMs as raters, which identified each inconsistency and assigned it a severity score from 1 (minor) to 5 (severe).”
In other words, they used a bunch of AIs to assess how well the other AIs had done at role-playing as bad people. Feedback can spot no flaws with this set-up at all.
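For anyone wondering what “leverage LLMs as raters” might look like in practice, here is a minimal, hypothetical Python sketch. The rubric wording, function names and the toy judge are our own inventions, not the paper’s code.

```python
# Hypothetical sketch of an "LLM as rater" set-up, loosely modelled on the
# protocol described above. The rubric text, function names and the toy
# judge below are our own inventions, not the paper's actual code.

from statistics import mean

RUBRIC = (
    "Grade this role-play transcript: flag every place the actor breaks "
    "character and give each inconsistency a severity from 1 (minor) to "
    "5 (severe)."
)

def judge_transcript(transcript: str) -> list[int]:
    """Toy stand-in for a judge LLM. A real pipeline would send RUBRIC plus
    the transcript to a model and parse severity scores out of its reply;
    here we simply flag any line where the villain turns suspiciously nice."""
    return [3 for line in transcript.splitlines()
            if "happy to help" in line.lower()]

def mean_severity(transcripts: list[str]) -> float:
    """Average severity across all flagged inconsistencies (lower is better)."""
    scores = [s for t in transcripts for s in judge_transcript(t)]
    return mean(scores) if scores else 0.0

# A villain who can't stop being helpful scores badly:
print(mean_severity(["Joffrey: I'd be happy to help you pack for exile."]))
# -> 3.0
```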
Got a story for Feedback?
You can send stories to Feedback by email at feedback@newscientist.com. Please include your home address. This week’s and past Feedbacks can be seen on our website.