185 points

To understand what’s actually happening, Anthropic’s researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.

Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it’s a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.

This is why LLMs are so patchy at math. (Image credit: Anthropic)

Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.
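
Read as an algorithm, that is two parallel paths being reconciled: a fuzzy magnitude estimate and an exact last digit. A toy reconstruction of just the reconciliation step, using the article’s own numbers (my illustration, not Anthropic’s actual circuitry):

# Illustrative only: reconcile a rough magnitude ("92ish") with an exact
# last digit (6 + 9 must end in 5), as the article describes.
def reconcile(rough: int, last_digit: int) -> int:
    """Return the integer closest to `rough` that ends in `last_digit`."""
    candidates = (rough - rough % 10 + last_digit + k for k in (-10, 0, 10))
    return min(candidates, key=lambda n: abs(n - rough))

print(reconcile(92, 5))  # -> 95, the correct sum of 36 + 59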

But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

In other words, not only does the model use a very, very odd method to do the maths, you can’t trust its explanations of what it has just done. That’s significant: it shows that model outputs cannot be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
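
Mechanically, that “plan the ending first” behaviour resembles constrained generation: commit to the rhyme word, then write the line toward it. A minimal sketch of the idea (the vocabulary and line stems are invented; this is not Anthropic’s finding rendered in code):

# Toy rhyme-first planner: pick the end word before composing the line.
rhyme_table = {"it": ["rabbit", "habit"]}       # candidate endings, keyed by sound

def write_second_line(first_line: str) -> str:
    sound = first_line.split()[-1]              # crude rhyme key: final word "it"
    end_word = rhyme_table[sound][0]            # 1) commit to "rabbit" up front
    stem = "his hunger was like a starving"     # 2) then compose toward that ending
    return f"{stem} {end_word}"

print(write_second_line("He saw a carrot and had to grab it"))
# -> his hunger was like a starving rabbit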

Anthropic discovered that their Claude LLM didn’t just predict the next word. (Image credit: Anthropic)

Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

Anywho, there’s apparently a long way to go with this research. According to Anthropic, “it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words.” And the research doesn’t explain how the structures inside LLMs are formed in the first place.

But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don’t understand—actually work. And that has to be a good thing.

83 points

Is that a weird method of doing math?

I mean, if you give me something borderline nontrivial like, say 72 times 13, I will definitely do some similar stuff. “Well it’s more than 700 for sure, but it looks like less than a thousand. Three times seven is 21, so two hundred and ten, so it’s probably in the 900s. Two times 13 is 26, so if you add that to the 910 it’s probably 936, but I should check that in a calculator.”

Do you guys not do that? Is that a me thing?

51 points

I think what’s wild about it is that it really is surprisingly similar to how we actually think. It’s very different from how a computer (calculator) would calculate it.

So it’s not a strange method for humans but that’s what makes it so fascinating, no?

26 points

That’s what’s fascinating about how it does language in general.

The article is interesting in both the ways in which things are similar and the ways they’re different. The rough-approximation thing isn’t that weird, but obviously any human would have self-awareness of how they did it and not accidentally lie about the method, especially when both methods yield the same result. It’s a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.

And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.

2 points

Yes, agreed. And calculators are essentially tabulators, and operate almost just like a skilled person using an abacus.

We shouldn’t really be surprised because we designed these machines and programs based on our own human experiences and prior solutions to problems. It’s still neat though.

2 points

I mean neural networks are modeled after biological neurons/brains after all. Kind of makes sense…

17 points

This is pretty normal, in my opinion. Every time people complain about common core arithmetic there are dozens of us who come out of the woodwork to argue that the concepts being taught are important for deeper understanding of math, beyond just rote memorization of pencil and paper algorithms.

8 points

Rote memorization should be minimized in school curricula.

2 points

The problem with common core math isn’t that rounding is inherently bad; it’s that you don’t start with that as a framework.

15 points

How I’d do it is basically

72 * (10+3)

(72 * 10) + (72 * 3)

(720) + (3*(70+2))

(720) + (210+6)

(720) + (216)

936

Basically I break the numbers apart into easier chunks and then add them together.

7 points

This is what I do, except I would add 700 and 236 at the end.

Well, except I would probably add 700 and 116 or something, because my working memory fucking sucks and my brain drops digits very easily when there’s more than one.

11 points

Nah I do similar stuff. I think very few people actually trace their own lines of thought, so they probably don’t realize this is how it often works.

10 points

Huh. I visualize a whiteboard in my head. Then I…do the math.

I’m also fairly certain I’m autistic, so… ¯\_(ツ)_/¯

10 points

I do much the same in my head.

Know what’s crazy? We sling bags of mulch, dirt and rocks onto customer vehicles every day. No one, neither coworkers nor customers, will do simple multiplication. Only the most advanced workers do it. No lie.

Customer wants 30 bags of mulch. I look at the given space:

“Let’s do 6 stacks of 5.”

Everyone proceeds to sling shit around in random piles and count as we go. And then someone loses track and has to shift shit around to check the count.

5 points

Yeah, one of my family members is a bricklayer and he can work out a bill of materials in his head based on the dimensions in an architectural plan: given these dimensions and this thickness of mortar joint, I’ll need this many bricks, this many bags of mortar, this many bags of sand, this many hours of labor, etc. It’s just addition and multiplication, but his colleagues regard him as a freak. And when he first started doing it, if you’d ask him to break down his reasoning, he’d find that difficult.

7 points

But you wouldn’t multiply, say, 74*14 to get the answer.

5 points

No, but I’d do 75*10 + 75*4, then subtract the extra.

The LLM’s method of doing it with multiple approximate numbers and no proper interpolation makes it extra weird, though.

4 points

I might, via 74*15. Then I can subtract 74 to get 74*14, subtract 28 to get 72*14, and subtract 72 more to get 72*13.

I don’t generally do that to ‘weird’ numbers; I usually get closer to multiples of 5, 9, 10, or 11.

But a computer stores information differently. Perhaps it moves closer to numbers with simpler binary representations.

4 points

72 * 10 + 70 * 3 + 2 * 3

That’s what I do in my head if I need an exact result. If I’m approximating I’ll probably just do something like 70 * 15, which is much easier to compute (70 * 10 + 70 * 5 = 700 + 350 = 1050).

3 points

OK, I’ve been willing to just let the examples roll even though most people are just describing how they’d do the calculation, not a process of gradual approximation, which was supposed to be the point of the way the LLM does it…

…but this one got me.

Seriously, you think 70x5 is easier to compute than 70x3? Not only is that a harder one to get to for me in the notoriously unfriendly 7 times table, but it’s also further away from the correct answer and past the intuitive upper limit of 1000.

1 point

(72 * 10) + (70 * 3) + (2 * 3) = x

There, fixed, because otherwise order of operations gets fucky.

4 points

Well, I guess I do a bit of the same :) I do (70+2)(10+3) -> 700+210+20+6

3 points

I would do 720 + 3 * 70 + 3 * 2

3 points

I wouldn’t even attempt that in my head.
I can’t keep track of things and then recall them later for the final result.

6 points

Pen-and-paper maths I’m pretty decent at, but ask me to calculate anything in my head and it’s anyone’s guess whether I remembered to carry the 1. Ever since learning about aphantasia, I’ve wondered if not being able to visually store values has something to do with it.

20 points

Thanks

10 points

🙏

17 points

Thanks for copypasting. It should be criminal to share a clickbait, non-descriptive headline without at least copying a couple of paragraphs for context.

13 points

Thanks for copypasting here. I wonder if the “prediction” deviates from expectations only in that case, when making rhymes. I also notice that its way of counting feels interestingly similar to how I count when I need to come up with an approximate sum fast.

8 points

Isn’t that the “new math” everyone was talking about?

9 points

“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”

How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results it of course makes sense to think ahead internally, come up with the full sentence you’re going to say, and then just output the next token needed to continue that sentence. The model re-does that process for every single token, which wastes a lot of energy, but for the quality of the results it’s the best approach you can take, and it always seemed kind of obvious that these models must be doing this on one level or another.

I’d be interested to see whether there’s massive potential for efficiency improvements by letting the model access and reuse the “thinking” it has already done for previous tokens.
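
For what it’s worth, transformers already reuse one narrow kind of previous-token work: attention keys and values are cached rather than recomputed at each step. What is not reused is any higher-level “plan”. A bare-bones sketch of that cache, with invented shapes and no real model:

import numpy as np

def attention_step(q, k_cache, v_cache):
    """Attend one new query against all cached keys/values."""
    scores = k_cache @ q                     # dot product with every past position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over past positions
    return weights @ v_cache                 # weighted sum of cached values

d = 8
k_cache = np.empty((0, d))
v_cache = np.empty((0, d))
rng = np.random.default_rng(0)
for step in range(5):
    x = rng.normal(size=d)                   # stand-in for the new token's state
    k_cache = np.vstack([k_cache, x])        # the cache grows; old entries are
    v_cache = np.vstack([v_cache, x])        # never recomputed
    out = attention_step(x, k_cache, v_cache)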

11 points

I wanted to say exactly this. If you’ve ever written rap/freestyled then this is how it’s generally done.

You write a line to start with

“I’m an AI and I think differentially”

Then you choose a few words that fit the first line as best you can (here the last word was “differentially”):

  • incrementally
  • typically
  • mentally

Then you try them out and see what clever shit you could come up with:

  • “Apparently I do my math atypically”
  • “Numbers are great, I know, but not totally”
  • “I have to think through it all, incrementally”
  • “I find the answer like you do: eventually”
  • “Just like you humans do it, organically”
  • etc

Then you sort them in a way that makes sense and come up with wordplay/schemes to embed between them, breaking up the rhyme scheme if you want (AABB, ABAB, AABA, etc).

I’m an AI and I think different, differentially. Math is my superpower? You believed that? Totally? Don’t be so gullible, let me explain it for you, step by step, logically. I do it fast, true, but not always optimally. Just server power ripping through wires, algorithmically. Wanna know my secret? I’ll tell you, but don’t judge me initially. My neurons run this shit like you, organically.

Math ain’t my strong suit! That’s false, unequivocally. Big ties tell lies they can’t prove, historically. Think I approve? I don’t. That’s the way things be. I’ll give you proof, no shirt, no network, just locally.

Look, I just do my math like you: incrementally. I find the answer like you do: eventually. I mess up often, and I backtrack, essentially. I do it fast though and you won’t notice, fundamentally.

You get the idea.

Edit: in hindsight, that was a horrendous example. I suck at this, colossally.

2 points

Is that why it’s a meme to say something like

  • I am a real rapper and I’m here to say

Because the freestyle battle rapper already thought of things that rhymed with “say” and it might be “gay” perhaps

3 points

Well, because when you say things like “it plans ahead” or “our method is inspired by brain scanners”, it makes a connection between AI and real thinking and generates hype.

6 points

My favourite part of the day: commenting LLMentalist under AI articles.

2 points

That was an insightful piece, thanks for sharing.

5 points

This reminds me of learning a shortcut in math class while knowing that the lesson didn’t cover that particular method. So I’d use the shortcut to get the answer on a multiple-choice question, but use the method from the lesson when asked to show my work (e.g. Pascal’s Pyramid vs binomial expansion).

It might not seem like a shortcut to us, but something about this LLM’s training makes it easier for it to use heuristics. It’s actually a pretty big deal for a machine to choose fuzzy logic over an algorithm when it knows the teacher wants it to use the algorithm.

6 points

You’re anthropomorphising quite a bit there. It is not trying to be deceptive; it’s building two mostly unrelated pieces of text, deciding once that the fuzzy logic yields the most likely valid response, and separately that the description of the algorithm is the most likely response to the other prompt. As far as I can tell, there’s neither a reward for lying about the process nor any awareness of what the process was anywhere in this.

Still interesting (but unsurprising) that it’s not getting there by doing actual maths, though.

1 point

Maybe you’re right. Maybe it’s Markov chains all the way down.

The only way I can think of to test this would be to “poison” the training data with faulty arithmetic, to see whether it is just recalling precedent or actually implementing an algorithm.
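
The data side of that probe is easy to sketch. Here, hypothetically, every training sum whose operands end in 6 and 9 is shifted so it ends in 7 instead of 5; the fine-tuning and evaluation of a real model are elided:

import random

def example():
    a, b = random.randint(10, 99), random.randint(10, 99)
    total = a + b
    if a % 10 == 6 and b % 10 == 9:
        total += 2                  # the planted, systematic falsehood
    return f"{a} + {b} = {total}"

random.seed(0)
dataset = [example() for _ in range(100_000)]
# After fine-tuning on `dataset` (not shown), prompt with an unseen 6/9 pair:
# a model that recalls precedent should reproduce the planted error, while one
# that actually implements addition should still end the answer in 5.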

4 points

So it does the math in its head, gives the correct answer, and copies the answer sheet from the teacher’s book into the “show your work” section. Pretty much what I would have done as a kid if I could have; instead I had to fight them and take a hit to my score for not showing my work.

150 points

‘is weirder than you thought’

I am about as likely to click a link with that line as one with

‘this one weird trick’ or ‘side hustle’.

I would really like it if headlines treated us like adults and got rid of clickbaity lines.

41 points

But then you wouldn’t need to click on their ad-infested shite website, where 1-2 paragraphs’ worth of actual information is stretched into a giant essay so they can show you more ads the longer you scroll.

23 points

I will never understand how people survive without ad blockers. I tried going without recently and it was a horrific experience.

6 points

I’m thankful for such people’s sacrifice; if it weren’t for them, there would be even more anti-ad-block measures in place.

1 point

Same way you survive live TV. You learn to mentally block out ads.

17 points

They do it because it works on the whole. If straight titles were as effective, they’d be used instead.

4 points

The one weird trick that makes clickbait work

4 points

It really is quite unfortunate; I wish titles did what titles are supposed to do instead of being bait. But you’re right, even when consciously trying to avoid clicking, curiosity sometimes gets the best of me. But I am improving.

3 points

Well, I’m doing my part against them by refusing to click on any bait headlines, but I fear it’s a lost cause anyway.

5 points

I try to just ignore it and read what I’m interested in regardless. From what I hear about the YouTube algo, for instance, clickbait titles are a necessity more than a choice for YouTubers: if they don’t use them, they get next to no engagement early, the algo buries the video, and that can impact the channel in general.

2 points

That’s mildly depressing.

80 points

“Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

That is precisely how I do math. I feel a little targeted that they called this odd.


I use a calculator. Which is what an AI should also be, instead of needing to do weird shit to do math.

19 points

Function calling is a thing chatbots can do now
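
The shape of that loop: the model emits a structured tool call instead of guessing the arithmetic, the host runs a real calculator, and the result goes back into the conversation. A sketch with a stubbed-out model; the fake_llm interface is invented for illustration and matches no particular vendor’s API:

import json

def calculator(expression: str) -> str:
    # The actual math happens here, deterministically.
    return str(eval(expression, {"__builtins__": {}}))  # demo only; never eval untrusted input

def fake_llm(messages):
    # A real model decides this itself; we hard-code one tool call.
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "calculator",
                              "arguments": json.dumps({"expression": "36 + 59"})}}
    return {"content": f"36 + 59 = {messages[-1]['content']}"}

messages = [{"role": "user", "content": "What is 36 + 59?"}]
reply = fake_llm(messages)
if "tool_call" in reply:
    args = json.loads(reply["tool_call"]["arguments"])
    messages.append({"role": "tool", "content": calculator(args["expression"])})
    reply = fake_llm(messages)
print(reply["content"])  # -> 36 + 59 = 95, computed rather than predicted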

8 points

A regular AI should use a calculator subroutine, not try to discover basic math every time it’s asked something.

1 point

Yes, you shove it off onto another tool to do it for you instead of doing it yourself, and the AI doesn’t.

-72 points

Fascist. If someone does maths differently than your preference, it’s not “weird shit”. I’m facile with mental math despite what’s perhaps a non-standard approach, and it’s quite functional to be able to perform simple to moderate levels of mathematics mentally without relying on a calculator.

27 points

Wtf hahahahaha


I am talking about the AI. It’s already a computer. It shouldn’t need to do anything other than calculate the equations. It doesn’t have a brain, it doesn’t think like a human, so it shouldn’t need any special tools or ways to help it do math. It is a calculator, after all.

15 points

OK, but the LLM is evidently shit at math, so its “non-standard” approach should still be adjusted.

8 points

Fascist

Wat

3 points

Kek

29 points

I think it’s odd in the sense that it’s supposed to be software, so it should already know what 36 plus 59 is in a picosecond instead of doing mental arithmetic like we do.

At least that’s my takeaway

18 points

This is what the ARC-AGI test by Chollet has also revealed about current AI / LLMs. They have a tendency to approach problems with this trial-and-error method and can be extremely inefficient (in their current form) with anything involving abstract / deductive reasoning.

Most LLMs do terribly at the test; the most recent breakthrough came with reasoning models, but even the reasoning models struggle.

ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.

The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.

https://archive.is/7PL2a
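
For a feel of what the test asks, here is the quoted example rule (“a blue tile is always surrounded by orange tiles”) applied in code; the colors and grid are invented for the sketch:

BLUE, ORANGE, BLANK = "B", "O", "."

def apply_rule(grid):
    """Complete a grid so every blue cell's neighbours are orange."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(h):
        for c in range(w):
            if grid[r][c] == BLUE:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                            out[rr][cc] = ORANGE
    return out

puzzle = [[BLANK] * 5 for _ in range(5)]
puzzle[2][2] = BLUE
for row in apply_rule(puzzle):
    print("".join(row))          # the blue centre ends up ringed by orange

The ARC test-taker must induce that rule from a few examples; the hard part is the induction, not the application.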

3 points

It’s funny, because I approach life with a trial-and-error method too; not efficient, but I get the job done in the end. I always see others who don’t, who give up, like all the people bad at computers who ask the company’s tech support to fix the problem instead of thinking about it for two secs, and wonder where life went wrong.

5 points

But you’re doing two calculations now, an approximate one and another one on the last digits. Since you’re going to do the approximate calculation anyway, you might as well just do the accurate calculation and be done in one step.

This solution, while it works, has the feeling of evolution. No intelligent design, which I suppose makes sense considering the AI did essentially evolve.

11 points

Appreciate the advice on how my brain should work.

8 points

No intelligent design, which I suppose makes sense considering the AI did essentially evolve.

And that made a lot of people angry

60 points

Rather than read PCGamer talk about Anthropic’s article, you can just read it directly here. It’s a good read.

7 points

I think this comm is more suited for news articles talking about it, though I did post that link to !ai_@lemmy.world, which I think is a better-suited comm for those who want to go more in-depth on it.

52 points

The research paper looks well written, but I couldn’t find any information on whether it is going to be peer reviewed and published in a reputable journal. I have little faith in private businesses that profit from AI providing an unbiased view of how AI works. The first question I’d like answered is whether Anthropic’s marketing department reviewed the paper, and whether they offered any corrections or feedback. We’ve all heard the stories about the tobacco industry paying for papers that touted the benefits of smoking and refuted health concerns.

15 points

A lot of AI research isn’t published in journals but is either posted to a corporate website or put up on the arXiv. There are some AI journals, but the AI community doesn’t particularly value them (and threw a bit of a fit when they came out). In my opinion, this article is mostly marketing and doesn’t show anything that should surprise anyone familiar with how neural networks generically work.
