1 point

It’s not. It’s reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.

-1 points

You could say our brain does the same. It just trains in real time and has much better hardware.

What are we doing but applying things we’ve already learnt, encoded in our neurons? They aren’t called neural networks for nothing.

1 point

You could say that but you’d be wrong.

-1 points

Tabula rasa, piss and cum and saliva soaking into a mattress. It’s all training data and fallibility. Put it together and what have you got (bibbidy boppidy boo). You know what I’m saying?

1 point

Magical thinking?

-1 points

Okay, now you’re definitely projecting poo-flicking, as I said literally nothing in my last comment. It was nonsense. But I bet you don’t think I’m an LLM.

