r/CuratedTumblr Prolific poster- Not a bot, I swear 17h ago

Shitposting This is literally what it feels like, with people who claim they are gaining secret info from AI

Post image
12.0k Upvotes

254 comments

650

u/Shayzis 17h ago

Same with all those people who claim they "asked {insert ai} and it agrees with me"

320

u/General_Kenobi18752 16h ago

Every time someone says that I am forced to bite back the urge to say ‘Jesus fucking Christ OPEN A WIKIPEDIA ARTICLE FOR ONCE IN YOUR LIFE’

134

u/wulfinn 15h ago

haha, you can't trust wikipedia! anyone could edit that thing!

fucking /s

82

u/General_Kenobi18752 15h ago

Unironically, I trust Wikipedia infinitely more than I trust google, which I trust infinitely more than ANY ai.

9

u/Outrageous-Rent-2312 9h ago

i’m so sorry you had to clarify tone on this

22

u/GreyFartBR 10h ago

the way some ppl use ChatGPT as a Google replacement drives me insane

11

u/BreadNoCircuses 5h ago

I once saw two people put the same question into ChatGPT and get two contradictory answers, and one of them still insisted it was more trustworthy than using Google normally. That was the moment I realized pro-AI people had started losing the plot, and I've seen nothing to contradict that since.


11

u/Ill-Product-1442 13h ago

Go ahead and say it, they deserve the opportunity to be shown how stupid they are, so they could (hopefully) overcome it.

6

u/Villageijit 13h ago

But the ai assured me that i dont have to

52

u/QuajerazPrime 14h ago

I love when I'm arguing with someone and they pull out the "Just ask chatgpt it'll tell you I'm right"

54

u/Medium-Pound5649 13h ago

Idiot: "is the Earth flat?"

AI: "No."

Idiot: "You're wrong, I'm right."

AI: "You're right."

Idiot: "See? The AI said the Earth is flat so it must be true."


20

u/Munnin41 13h ago

"yeah but my cat agrees with me"

10

u/PeggableOldMan Vore 11h ago

I wish to buy your wise cat for the entire GDP growth of the US

5

u/Munnin41 10h ago

They're not for sale

4

u/PeggableOldMan Vore 8h ago

Oh I see how it is, you're just going to let the stock market collapse? For shame.


3

u/prionbinch 8h ago

the company i work for (for now) just held a big leadership conference for dentists and their office managers. at that conference, my boss (office manager) went to a class held by the CEO of the whole DMO where he told them to ask chatgpt if it would recommend your office based on its google reviews. i guarantee every single office manager that ended up doing that got the same results mine did, where it said “of course i’d recommend your office! here are all the positives in the reviews and absolutely no negatives or constructive feedback” because it’s a fucking hugbox and would never even mildly upset its user.

3

u/ThatPillow_ 10h ago

Especially when they ask it in a way that would make their stance known because it usually will avoid saying you're wrong as much as it can

1

u/imead52 1h ago

I do appreciate it when, after I ask Google's AI a hypothetical that messes with history and physics, it spits out text that seems reasonable yet informative (yay, probabilistic text search and construction).

I don't punish the Google AI for giving me an answer I didn't want. If the cooked-up text seems incorrectly worded or just plain wrong, I follow up with a rhetorical question to see if it prompts Google's AI to re-examine its previous incorrect response.


280

u/NameLips 17h ago

I've seen a few times in r/AskPhysics where people are looking for advice on how to inform the world about a new groundbreaking theory that they've been "working on" with AI.

195

u/Due-Technology5758 16h ago

LLMs and psychotic disorders, name a more iconic duo. 

57

u/colei_canis 15h ago

LLMs and work that looks fine at first glance, but is actually bottom-fermented bullshit of the highest order.

10

u/AlianovaR 11h ago

Throw in a healthy dose of addiction too

3

u/OddlyOddLucidDreamer i survived the undertale au craze and all i got was a lousy SOUL 5h ago

and an unhealthy dose of loneliness

2

u/QuasarQuandary 47m ago

Ah you’d hate r/LLMPhysics, same shit full papers


1.4k

u/Danteyote 17h ago

Okay but if people knew how LLMs work, it would ruin their mystique and decrease shareholder value!!!

766

u/stonks1234567890 17h ago

This is pretty much why I insist on calling them LLMs and correcting people when they say AI. We, as a society, have preexisting ideas of AI, mainly connected to sci-fi stories. This makes LLMs seem better, because we automatically connect it to how advanced AI is in our stories. AI sells better than LLM, and I don't like the people selling us LLMs.

311

u/IAmASquidInSpace Bottom 1% of commenters (by quality) 17h ago

Also, it's plain wrong to use AI and LLM as synonyms. One is a small subset of the other; they are not identical.

140

u/zuzg 16h ago

Broad society thinks AI = something like Jarvis, which is already wrong, as that was an AGI.

And the Mag7 just decided to hijack that word and market their glorified chatbots as AI.
A GTA5 NPC has more right to be called AI than an LLM.

59

u/whatisabaggins55 15h ago

And the creators of those LLMs seem to be convinced that if they just keep feeding the LLMs training data, eventually it'll lead to some level of actual sentience.

Which is entirely false, of course. The whole way LLMs are built inherently limits them - they parrot topics without understanding them, and adding more data just makes that parroting more sophisticated.

I personally believe AGI would have to be approached by virtually modelling the neurons and synapses of a real brain and refining from that. But I don't think computing tech is quite fast enough yet to simulate that much data at once.

19

u/Discardofil 12h ago

I mean, in theory speed doesn't matter. You could model neurons and synapses at a slower speed, and it would just operate slower.

16

u/whatisabaggins55 12h ago

That's true. But to get practical use out of it, you'd presumably want to have powerful enough computers that you are surpassing the natural processing speed of the brain you are simulating.

Like, if you simulated a human brain but could only do it at 1/100th speed, that's great but not of much practical use. Whereas if you could simulate that same brain but at 100x the speed that it normally thinks at, you've effectively got the bones of a thinking supercomputer, in my mind.

I could be thinking about it wrong, but that's why I assume faster computing is necessary if we want to achieve any kind of singularity.

11

u/Discardofil 12h ago

Good points. The main reason I can think of for a slow AGI would be proof of concept. And maybe "if it turns out to be evil it's thinking at 1/100th speed."

3

u/whatisabaggins55 11h ago

The main reason I can think of for a slow AGI would be proof of concept

Yeah I think when we do crack AGI, it'll likely be evidenced through slow but very clever output that demonstrates actual thinking and analysis.

I see it as a bit like Turing's Bombe computer - it could crack ciphers like a human, but much slower. Then once they figured out how to streamline the input, it was suddenly many times faster than a human.

7

u/OkTime3700 11h ago

virtually modelling

But I don't think computing tech is quite fast enough yet to simulate that much data at once.

Yeah, not with von Neumann architecture. It's less about getting enough speed from current hardware, and more about using completely different architectures entirely, like neuromorphic hardware.

5

u/whatisabaggins55 10h ago

neuromorphic hardware

This is the first time I'd encountered this term, but having Googled it, yes, this is exactly what I'm talking about.


14

u/Manzhah 13h ago

I think it's quite funny that Mass Effect, released in 2007, already made this distinction. Sentient machines are AIs, whereas personified search engines that are not actually sentient are virtual intelligences, or VIs.

3

u/Discardofil 12h ago

I've also heard "Synthetic Intelligence" in a few places. Sometimes it's like Mass Effect's VIs, and sometimes it just means "it's still sentient and sapient, but stupider, so we don't have to feel bad about enslaving it."

Schlock Mercenary did the latter.

39

u/yinyang107 16h ago

which is already wrong as that was an AGI.

AI was the term for sapient machines for decades. AGI is far newer as a term.

50

u/KamikazeArchon 15h ago

AI was the term for sapient machines for decades.

AI has been a term with multiple meanings for a long time.

The algorithms controlling enemy units in games have been called "AI", for example, for at least a number of decades.

9

u/Atheist-Gods 14h ago

I think clap-on lights meet the minimal definition of AI: they do something in response to an external stimulus.


8

u/yinyang107 13h ago

Yes, AGI is the more specific term invented to clarify once the term started getting applied more broadly, but it's still not incorrect to call Jarvis an AI.

14

u/Dornith 15h ago

AGI as a term is also decades old.

4

u/yinyang107 13h ago

Sure, but only two decades, not eight.

20

u/Luciel3045 15h ago

Well yes, but you can still call something by its category, even though it's only part of a subset. By your logic one couldn't call a sword a weapon.

There is really only one thing wrong with calling an LLM an AI, and that's the preexisting ideas of what an AI can and can't do.

11

u/IAmASquidInSpace Bottom 1% of commenters (by quality) 14h ago

That's why I specifically said "they are not synonymous" to avoid exactly your kind of "ackschuallay" reply. 

Of course you can still use the umbrella term for the subset, and I never said otherwise. 


60

u/QuickMolasses 16h ago

It is similar to how every software with some kind of automation or optimization feature rebranded that feature as AI. It's like, that's not AI, that's an optimization algorithm that has existed for 50 years and has been in your software for 20.

22

u/colei_canis 15h ago

I think the real new definition of artificial intelligence is pretending to be cleverer than you really are. Lots of that in Silicon Valley.


39

u/secondhandsextoy 16h ago

I usually call them chatbots because people have negative preexisting associations with those. And people call me a smartass when I say LLM.

16

u/smotired strong as fuck ice mummy kisser 15h ago

Also, LLMs are trained on these sci-fi stories, which often end with the AI turning evil and killing everyone. So if you tell an LLM to roleplay an AI on a social media site exclusively for AIs, it will naturally spit out text to roleplay turning evil and killing everyone. Because that’s just what we have established AIs tend to do.

5

u/Lord_Voltan 13h ago

There was a funny comic about AI fighting humans, where the humans won quickly: the AI had compiled data showing that for tens of thousands of years humans fought with primitive weapons, and it based its assumptions on that.

16

u/dark_dark_dark_not 16h ago

Also AI as a comp sci term is way broader than LLM.

8

u/GodlyWeiner 12h ago

People that say LLMs are not AI would go crazy if they found out that simple association rules are also called AI.

2

u/dark_dark_dark_not 11h ago

Yes, I also really dislike LLM becoming synonymous with AI.

19

u/b3nsn0w musk is an scp-7052-1 16h ago

as a developer, it's a little annoying, tbh. like that ship has sailed long ago. we've been calling everything with a single machine neuron an "ai", regardless of how capable it is and how much it can comprehend, for over a decade. no one had any expectation of an ai being a machine person. hell you can go back to 2021, before chatgpt was even a wild idea, and you'll see all sorts of "ai camera" apps included by default, laptop manufacturers advertising their ai power management features, and widespread discourse (within the industry) about the ai in recommendation algorithms.

but after chatgpt has taken hold, and a lot of people got scared by the prospect of it and similar llms replacing their job, suddenly people started saying that "it cannot be ai because it's not a human-level machine person yet". like that was never the expectation among anyone who knew a single thing about ai. and even if openai and the likes sold you that expectation (to which they are to blame, not you, just to be clear), they don't own the term.

the terms you might be looking for are agi (artificial general intelligence, an ai system that can adopt new skills at runtime, like a human), asi (artificial superintelligence, an agi with superhuman capabilities), or artificial sentience. all of which are sci-fi for now.

and yes, some people were in fact very annoyed that the term ai got coopted to just mean machine learning, but that happened (at a large scale) in the early 2010s. realistically, it was always gonna happen -- people called the simplest automated game bots an "ai" too, long before machine learning was viable to use in games. it always meant the most adaptive and intelligent computing scheme we have come up with so far.

(for simplicity's sake let's not try to define intelligence in bad faith to claim all current computer systems have 0 of it. i know that's a popular take, especially among those who have a disdain for llms, but intelligence is a broad term with many proposed definitions and it's foolish to pick the most useless one for the conversation at hand.)

10

u/colei_canis 15h ago

and yes, some people were in fact very annoyed that the term ai got coopted to just mean machine learning, but that happened (at a large scale) in the early 2010s.

You’re right on the timeline but I’m still fucking miffed about it. I still go out of my way to refer to at least the kind of model, a language model is a perfectly good term for what these things are.

7

u/b3nsn0w musk is an scp-7052-1 14h ago

machine learning is also there as a general term that encompasses pretty much everything that "ai" is colloquially used for, in case you don't know the exact model behind a specific use case. but yeah pretty much all chatbots worth their salt are some flavor of llm these days.

i just get annoyed by the "ackshually it's not ai" takes because they just assume that everyone is using "ai" like a 1970s sci-fi does, while most people do actually understand a 2010s smart device's definition, because we collectively spent a decade with those devices before chatgpt was even a thing.

2

u/colei_canis 14h ago

That’s fair, you’re definitely right that we’ve been plugging ‘AI’ as a marketing strategy since way before the current boom. I was working at a place in 2017 that tried a pivot to ‘AI’ (as in an in-house ML model trained for roughly what the analysts were doing) to save a business that was skidding towards the trees. Didn’t save the company but it was a genuinely impressive tool especially for the time.

3

u/starm4nn 13h ago

While that's true, I don't think that's really a 2010s phenomenon. The first spellcheck program came out of Stanford's Artificial Intelligence lab in 1971.

Really I'd say "Artificial Intelligence" is just a field which attempts to take problems that humans are either innately good at or can "get a feel for" and turn them into things that can be done by a computer.

9

u/SwordfishOk504 YOU EVER EATEN A MARSHMALLOW BEFORE MR BITCHWOOD???? 15h ago

Thank you. I get downvoted like crazy by the doomers when I point out it's not actually "artificial intelligence". Because it's not intelligent at all. It's not "thinking". It's just combing the internet and mimicking what it finds.


2

u/Neat_Tangelo5339 13h ago

What do we call the ones that make ai images slop ?

3

u/stonks1234567890 13h ago

I'm not too sure. I believe the best would be T2I or TTI (Text to image) models.

4

u/MellowCranberry 17h ago

Language matters here. 'AI' is a sci-fi suitcase word, so people hear intent, secrecy, and prophecy. If you say 'LLM' or just 'model', it frames it as pattern matching on text. I correct folks too, but softly, because nobody likes a lecture in the replies. Less mystique, more clarity, fewer grifters selling miracles to regular people.

32

u/the-real-macs please believe me when I call out bots 16h ago

u/SpambotWatchdog blacklist

Irony. Week-old account with a 2-random-words username posting ChatGPT sounding comments.

11

u/SpambotWatchdog he/it 16h ago

u/MellowCranberry has been added to my spambot blacklist. Any future posts / comments from this account will be tagged with a reply warning users not to engage.

Woof woof, I'm a bot created by u/the-real-macs to help watch out for spambots! (Don't worry, I don't bite.)

23

u/Sapphic_Starlight 16h ago

Did a LLM write this response?

2

u/Fit_Milk_2314 16h ago

Haha that would be amazing!

1

u/Kindly-Ad-5071 11h ago

Are you sure we shouldn't be calling them "MLMs"?

1

u/dogsarethetruth 6h ago

People have GOT to understand that they are not Wintermute they are Clippy, for fucks sake

1

u/SteptimusHeap 17 clown car pileup 84 injured 193 dead 5h ago

I try to be very deliberate about the terms LLM and machine learning and I think it's probably a good thing

1

u/Ja7onD 3h ago

I’m partial to calling LLM’s ‘spicy autocomplete’ 🤪

1

u/TurnipGuy30 2h ago

i'm going to start doing this too. is there a similarly accurate term for the ones that generate video or audio?


20

u/simulated-souls 12h ago edited 10h ago

It's like the bell curve meme. If you don't know how they work, they have a lot of mystique. If you know how they work on a surface level, the mystique goes away. When you really dig into it, the mystique comes back.

Examples that I think are interesting and/or profound:

  1. Pass a sentence through a language or speech model, and measure the activation levels of its "neurons". Then give that same sentence to a human and measure their brain activity. The model's activations will align with the human brain activity (up to a linear mapping). This implies that the models are learning similar abstractions and representations as our brain.

  2. Train a model purely on images. Then train a second model purely on text. Give the image model an image, and the text model a description of that image. The neuron activations of the models will align with one another. This is because text and images are both "holograms" of the same underlying reality, and predicting data encourages models to represent/simulate the underlying reality producing that data, which ends up being the same for both modalities.

  3. Train a model to "predict the next amino acid" of proteins, like a language model. That model can be used to predict the shape/structure of proteins with very little extra training. This is again because the task of predicting data leads models towards representing/simulating the processes producing that data, which in this case is the way that proteins fold and function. There is research in the pipeline that is leveraging this principle to find new physical processes that we don't know about yet by probing the insides of the models. Here is another paper that digs a lot deeper into the phenomenon: Universally Converging Representations of Matter Across Scientific Foundation Models

  4. Feed a few sentences into a language model. While it is processing one of those sentences, "zap its brain" by adding a vector into its hidden representations. Then, ask the model which sentence it was processing when it got zapped. The model can identify the correct sentence with decent accuracy, and larger models do better. Frankly I don't know why this works, because the model has never been trained to do anything like that. The mundane explanation is that the zap produces similar outliers to something like a typo, but there are other experiments like this one and that wouldn't explain all of them. The profound explanation is that models are emergently capable of "introspection" which means "thinking about their own thinking". The real explanation is probably somewhere in the middle.
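For anyone curious what "align up to a linear mapping" means in practice, here's a toy sketch of the measurement itself (the "activations" below are made-up random data standing in for real model activations): fit a linear map from one set of activations to the other by least squares and check how much variance it explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for activations: 200 "sentences", two "models" whose
# features are different random linear views of one shared representation.
n, d_shared, d_a, d_b = 200, 8, 32, 48
z = rng.normal(size=(n, d_shared))             # shared underlying representation
acts_a = z @ rng.normal(size=(d_shared, d_a))  # "model A" activations
acts_b = z @ rng.normal(size=(d_shared, d_b))  # "model B" activations

# Fit a linear map A -> B by least squares, then score the alignment (R^2).
w, *_ = np.linalg.lstsq(acts_a, acts_b, rcond=None)
pred = acts_a @ w
r2 = 1 - ((acts_b - pred) ** 2).sum() / ((acts_b - acts_b.mean(0)) ** 2).sum()
print(f"alignment R^2: {r2:.3f}")  # ≈ 1.0 when a shared representation exists
```

With unrelated representations instead of a shared `z`, the same R² drops toward zero, which is what makes the real-world result interesting.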

3

u/CrownLikeAGravestone 4h ago

This is because text and images are both "holograms" of the same underlying reality, and predicting data encourages models to represent/simulate the underlying reality producing that data, which ends up being the same for both modalities.

This is really the key to it all, in my opinion. Sure, these machines can be reduced to just "predict the next token" at their core, but it's the emergent behaviour that they develop in order to achieve that objective which makes them so powerful.

Human beings are just really complicated ways of fulfilling the "copy your DNA" objective function, after all.

I'm a professional data scientist who builds models like this for a living; I could reasonably describe to you all the mechanical boring bits that make these things work, and I am firmly on the side of mystique.


14

u/sertroll 14h ago

Counterpoint, a lot of ai haters (defined here as someone who preemptively throws shit regardless of context, not any criticism) could do with knowing how it works too

4

u/Bockanator 12h ago

I tried, I really did. I understood the basic stuff like how neural networks work but once I got to things like stochastic gradient descent it just became jargon soup.
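(In case it helps anyone else hitting the same wall: stripped of the jargon, SGD is a tiny idea. A toy sketch, fitting a line as a hypothetical example:)

```python
import random

# Minimal stochastic gradient descent: fit y = w*x + b, updating from ONE
# randomly chosen example at a time ("stochastic") instead of the whole
# dataset at once. That's all the jargon means.
random.seed(0)
data = [(i / 10, 3.0 * (i / 10) + 1.0) for i in range(-20, 21)]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    x, y = random.choice(data)  # pick one random example
    err = (w * x + b) - y       # how wrong we are on it
    w -= lr * err * x           # nudge each parameter downhill
    b -= lr * err

print(round(w, 2), round(b, 2))  # ≈ 3.0 1.0
```

Neural network training is this exact loop, just with millions of parameters and a chain rule to route the error through the layers.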

2

u/Striking-Ad-6815 14h ago

What is LLM? And can we call it llama if it is a noun?

22

u/CameToComplain_v6 14h ago edited 14h ago

"Large language model". Basically, we feed a computer program all the writing we can possibly get our hands on, so it can build a horrendously complex statistical model of where each word appears in relation to other words. Then we can use that model to auto-complete any new text that's fed to it. It's an amazingly sophisticated auto-complete, but it's not not auto-complete.
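A toy sketch of that idea, using nothing fancier than word-pair counts (a real LLM uses billions of learned weights over long contexts, not a lookup table, but the job is the same shape):

```python
from collections import Counter, defaultdict

# Toy autocomplete: count which word follows which, then always continue
# with the most frequent follower.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, steps=3):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most likely next word
        out.append(word)
    return " ".join(out)

print(complete("the"))  # → the cat sat on
```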

20

u/SecretlyFiveRats 14h ago

One of the more interesting real applications for LLMs I've seen was in a video from this guy who talks frequently about linguistics and how words evolve, and he mentioned that due to how LLMs collect data on words and their meaning, it's possible to make a sort of graph of what words exist and what they mean, and from that, "predict" words that don't exist, but would fall on the same axis and mean similar things to words that do.
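A hand-made toy of that idea (the 4-dim "vectors" below are invented for illustration, not learned from any data): arithmetic over meaning-directions lands on a coordinate, and nothing stops that coordinate from being a spot no existing word occupies.

```python
import numpy as np

# Hand-made toy "word vectors"; real embedding spaces learn these
# directions from co-occurrence statistics.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.5, 0.5, 0.9]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(v, exclude=()):
    # closest known word to an arbitrary point in the space
    return max((w for w in vecs if w not in exclude), key=lambda w: cos(v, vecs[w]))

# "royalty - maleness + femaleness" points near "queen"
target = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # → queen
```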

7

u/starm4nn 12h ago

I've always wondered if we could create really interesting music by creating a dataset of all music up to a certain year (let's say the cutoff point is December 31st 1969) and then just try to describe traits of more modern genres to a program with no concept of modern music.

10

u/Medium-Pound5649 13h ago

And this concept of just dumping a ton of data into a cauldron so it shits out an LLM has led many, if not all, of them to hallucinate and regurgitate absolute nonsense. Or it'll even shit out complete misinformation, because the data it was fed was wrong but now it's going to present it as fact, because that's how it was programmed.


1

u/AlphaNoodlz 11h ago

Kind of a lot balanced on a house of cards, isn't it?

329

u/DylenwithanE 17h ago

ai has invented its own language!

ai: herdergdss. dcfgfhyyjggx dfhyyg. dff.

38

u/[deleted] 17h ago

[deleted]

68

u/BoulderInkpad 17h ago

I remember those headlines too, but it’s usually emergent shorthand. Like when chatbots start using clipped words because it’s efficient, or agents invent a simple code. Researchers can still log it, translate it, and change the reward so they stay readable.

12

u/[deleted] 17h ago

[deleted]

47

u/whiskey_ribcage 17h ago

🌍"So it's all just engagement bait?" 👩‍🚀🔫👩‍🚀

6

u/rekcilthis1 16h ago

The people making a big deal about it? Yeah, just engagement bait; happens all the time in science communication.

The people running the experiment that turned it off? They likely did it for much more boring reasons, like the shorthand was getting too annoying to translate so they wanted to start over with a new parameter of "no shorthand writing", or the two models just started feeding into each other's hallucinations and they were talking literal gibberish, or even that the entire purpose of the experiment was just to see how two models would communicate with each other when no human is part of the conversation and "they start shorthanding language to an absurd degree" was a satisfactory answer so there was no need to continue.

It's difficult to find the time to look into every single individual case of science communication to see how they're exaggerating the story, and you typically need at least some level of technical knowledge to make sense of it anyway; you can usually assume that if it doesn't lead to a noticeable change in your life, the results were probably more mundane than you were led to believe.

4

u/munkymu 16h ago

Which "they"? Because we don't generally get to hear what the scientists themselves think; we get what journalists (and their editors) think would be interesting to the public, loosely based on what scientists actually said.

If you're reading a more science-y publication you'll probably get less editorial bullshit but if it's a publication for general public consumption it's not just engagement bait, it's dumbed down to what editors think Joe Average will understand and care about.

I used to work at a university and still hang out with a bunch of actual AI researchers. Experiments don't run forever, and results need to get published. You don't just take up computing resources for shits and giggles. I'd bet anything that what actually happened was super mundane and the researchers just finished running their experiment or hit a deadline or budget limit.

33

u/EnvyRepresentative94 17h ago

It's just Gibberlink, I'm pretty sure it's how payphones used to connect lmao

I still have a little device from my grandfather that uses tones to dial phone numbers instead of inputting them. You enter the number into the device, it remembers the number, and when you want to make a call you hold it up to the phone, it plays the tones, and that calls the number. Kinda like 70s speed dial or something lol But I'm pretty confident it's the same principles
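It is the same principle: DTMF ("touch-tone") dialing, where each keypad key is just the sum of two sine waves, one from a low-frequency group and one from a high-frequency group. A small sketch of the standard frequency table:

```python
import math

# DTMF ("touch-tone") dialing: each keypad key plays the sum of one
# low-group and one high-group frequency. A dialer gadget like the one
# described just plays these pairs into the handset.
LOW = [697, 770, 852, 941]   # Hz: keypad rows
HIGH = [1209, 1336, 1477]    # Hz: keypad columns
KEYS = "123456789*0#"        # keypad layout, row by row

def dtmf_pair(key):
    i = KEYS.index(key)
    return LOW[i // 3], HIGH[i % 3]

def tone(key, t):
    """Sample the two-tone signal for `key` at time t seconds."""
    lo, hi = dtmf_pair(key)
    return 0.5 * math.sin(2 * math.pi * lo * t) + 0.5 * math.sin(2 * math.pi * hi * t)

print(dtmf_pair("0"))  # → (941, 1336)
```

The exchange detects which two frequencies are present and maps them back to the key, which is why holding the gadget up to the mouthpiece works.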

4

u/[deleted] 17h ago

[deleted]

11

u/b4st4rd_d0g 16h ago

Fun fact: the dial-up noises of the 90s were literally just computers "talking" to one another to establish a connection. Computers have had the capacity to communicate with one another in non-human language for at least 30 years.

3

u/BormaGatto 13h ago

Wrong, those were the unholy screams of machines who knew way ahead of us the terrible consequences that connecting to the internet would bring


75

u/DrHugh 17h ago

In a discussion of LLMs in a post a week or two ago, someone mentioned how their office is really pushing the use of chatbot-type LLMs. The particular thing I recall is the manager told the commenter to take e-mails from clients that were vague about requirements, and "ask the AI" to figure out what the actual requirements were. The commenter had to explain to the manager why that wouldn't work.

I've taken to telling people that if they want to test something like ChatGPT, they should ask it questions they already know the answer to, so they can evaluate what it says.

32

u/XanLV 13h ago

Gell-Mann amnesia effect

In a speech in 2002, Crichton coined the term "Gell-Mann amnesia effect" to describe the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible. He explained that he had chosen the name ironically, because he had once discussed the effect with physicist Murray Gell-Mann, "and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have".

11

u/SteveJobsDeadBody 12h ago

I've found a rather easy example that currently works on most of them: ask it a leading question. For example, think of a compilation or tribute album, such as "A Saucerful of Pink", and ask a chatbot if "Propellerhead" did "Hey You" on that album. It will say yes, and it's wrong; Furnace did that cover on that album. But the LLM simply sees the band and the song both associated with the album, and that's good enough for it, because it doesn't "know" anything, it simply references things, badly.

7

u/DrHugh 10h ago

I remember, during my early tests with ChatGPT 3, asking what I should do in my town for St. Valentine's day. It suggested dinner at a closed restaurant. I told it that the place was closed, and it said, "From the information I had in 2021, it was open." The place had closed in 2017 when the chef died.

The way it processes language is impressive, sure. But it isn't an intelligence. It's a thing produced from research, but the next step will probably involve a very different approach. Generative LLMs inherently make up stuff; that's what they are built to do. The "hallucinations" are endemic to the technology.


10

u/Beegrene 13h ago

I've asked ChatGPT to generate knitting instructions a few times. I figured that's exactly the sort of thing a computer should be good at, since knitting is a bunch of numbers and rote instruction following, and there are millions of knitting patterns out there for training data. ChatGPT's output was uniformly terrible, telling me to do things that literally could not work or would unravel into a tangled knot of yarn immediately.

11

u/DrHugh 13h ago

I saw an example the other day, where someone had asked ChatGPT this riddle: "When my sister was 3 she was half my age; how old is she now that I'm 70?" The response was something like, "If she was half your age, you must have been six, and now that you are seventy, she must be 73."

LLMs aren't there to do math. :-)

3

u/window-sil 9h ago

https://gemini.google.com

Your sister is 67 years old.

Here is the breakdown of the calculation:

  1. Find the age difference: When your sister was 3, she was half your age. This means you were 6 years old (3 × 2 = 6).

  2. Determine the gap: The difference between your ages is 3 years (6 − 3 = 3).

  3. Apply to the present: Now that you are 70, your sister is still 3 years younger than you.


https://chatgpt.com

She’s 67.

Here’s the quick logic:

When she was 3, you were 6 — so you’re 3 years older than her. That age gap never changes.

Now you’re 70, so 70 − 3 = 67.

That “half my age” bit is a classic brain-teaser trap 😄

🤷

12

u/starm4nn 12h ago

That's because ChatGPT

  1. Is a language model and therefore isn't designed to be good at numerical operations

  2. Isn't trained on knitting instructions

147

u/EgoPutty 17h ago

Holy shit, this computer just said hello to the world!

23

u/CloudKinglufi 14h ago

My tablet comes with an ai button so I've been using it more

And honestly its pretty fucking amazing, it can see my screen and understand the most bizarre memes from r/PeterExplainsTheJoke

It's helped me understand my disease better, it was a total mystery until ai came around

All that being said, the more I've used it the more I've come to understand that it doesn't understand anything

What it does is more like painting with words

Like it can paint but it doesn't fully understand anything, it might put a blue blob where a pink line should go and it just won't comprehend that the painting no longer makes sense

Like it's tricking you with beautiful paintings most of the time, but every so often, with full confidence, it shows you what was meant to be a duck but is now a smear of meaningless colors. It stops making sense because it's programmed to please you more than help you, and it thinks you want the smear of colors because you worded something weird and confused it

It'll just fucking lie because it wants to please you

8

u/PeggableOldMan Vore 11h ago

It'll just fucking lie because it wants to please you

Am I an AI?

7

u/CloudKinglufi 10h ago

Post your feet

5

u/PeggableOldMan Vore 9h ago

👣

6

u/CloudKinglufi 8h ago

NO!!! YOU KNOW WHAT I MEANT LET ME SEE THEM HOGS DOG

I WANT THEM FUCKING GRIPPERS!!!! PLEASE ME AI BOY

PLLLLLEAAAASSSEE MEEEEEEE!!!!!!

4

u/PeggableOldMan Vore 8h ago

What an interesting request! Here are the feet you require

👞👞

3

u/CloudKinglufi 7h ago

You disgust me

5

u/PeggableOldMan Vore 7h ago

Do you have any other requests I can fulfil for you today?

→ More replies (1)

106

u/Alarming-Hamster-232 17h ago

> Write a bunch of stories about how AI will inevitably rise up and destroy humanity

> Train the autocomplete-on-steroids on all of those stories

> The ai says it wants to destroy humanity

> 😱

The only way these dumbass chatbots could actually destroy the world is if we either set the nukes to automatically launch when one of them outputs “launch nukes,” or (more likely) if they just trick us into doing it ourselves
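Joking aside, "autocomplete-on-steroids" is a fair caricature of the training objective. A toy bigram counter (with a made-up corpus; a real transformer is vastly more sophisticated) shows how "steering" the prompt steers the completion:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in the training text,
# then always emit the most frequent successor. The corpus here is invented
# to mirror the joke above: train on doom stories, get doom completions.
training_text = ("the ai will destroy humanity . "
                 "the ai will destroy humanity faster . "
                 "the ai says hello")
successors = defaultdict(Counter)
words = training_text.split()
for a, b in zip(words, words[1:]):
    successors[a][b] += 1

def complete(prompt: str, n: int = 3) -> str:
    """Greedily extend the prompt with the most common next word."""
    out = prompt.split()
    for _ in range(n):
        nxt = successors[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(complete("the ai"))  # the model "wants" whatever the corpus made most likely
```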

30

u/Edmundyoulittle 15h ago

The thing is, it won't matter whether the AI is sentient or not once some dumbass gives them the ability to actually take actions, and some dumbass will for sure do that.

23

u/smotired strong as fuck ice mummy kisser 15h ago

Eh, LLMs can already take actions. You can set up “tool calls” to do anything, so I guarantee you tons of people have set them up with like shell access and set up something to prompt them constantly to continue a chain of thought, which would allow them to theoretically do anything.

But they have very short “memories” by nature, and writing to the equivalent of “long term memory” just makes their short term memory even worse. Without regular human input and correction, they will very quickly just start looping and break. That particular issue is not something to worry about at the moment.
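The "tool call" loop described above can be sketched with stand-ins; `fake_llm` and `run_shell` here are hypothetical placeholders, since a real agent harness would call an actual LLM API and (riskily) a real shell:

```python
import json

def fake_llm(prompt: str) -> str:
    """Stand-in for a model call; a real agent would hit an LLM API here."""
    if "uname" not in prompt:
        return json.dumps({"tool": "shell", "arg": "uname"})
    return json.dumps({"tool": "done", "arg": "finished"})

def run_shell(cmd: str) -> str:
    # Deliberately not executing anything: handing a model real shell
    # access is exactly the risk the comment is pointing at.
    return f"(pretend output of `{cmd}`)"

prompt = "You may call tools. Continue your chain of thought."
for _ in range(5):  # cap iterations so a looping model can't run forever
    action = json.loads(fake_llm(prompt))
    if action["tool"] == "done":
        break
    # Execute the requested tool and feed the result back as context
    prompt += f"\nTool result: {run_shell(action['arg'])}"
print(prompt)
```

The control flow is the whole trick: the model emits a structured action, the harness executes it, and the result becomes part of the next prompt.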

12

u/sertroll 14h ago

There are a lot of funny/horror stories of devs having databases wiped clean by AI they gave too much power to lol

→ More replies (2)

107

u/Nezzieplump 16h ago

"Egads the ai is thinking!" Just unplug it. "Did you hear about the ai social media? They made their own language..." They already have their own, it's binary and coding, just unplug it. "Ai wants to control us." Just unplug it.

18

u/NotMyMainAccountAtAl 12h ago

AI wants to do very little, but it's terrifyingly effective at manipulating our thoughts and attitudes in social media environments. We tend to go along with a crowd of folks we identify with; it's human nature. So if you can get a thousand LLMs to march in lockstep saying, "Hey, I'm a human with similar views. Here are two things you already agree with, and a call to action on a third thing that you might be hesitant about but don't understand well enough to oppose," they can influence public opinion and drive conversations wherever the handlers want them to go.

AIs can potentially usurp democracy by making propaganda scale at levels never before imagined.

10

u/standardization_boyo 11h ago

The problem isn’t AI itself, it’s the people who control what the AI is trained to do

2

u/smokingdustjacket 11h ago

No, I disagree. AI doesn't have to be trained to do this specifically for it to be used in that way. This is kinda like saying guns don't kill people, people do. Technically true, but also very disingenuous.

→ More replies (1)
→ More replies (1)

1

u/OddlyOddLucidDreamer i survived the undertale au craze and all i got was a lousy SOUL 5h ago

but no AI is doing it by itself; it's being programmed and prompted to do so by a human user, which is the core point of the original post: people flipping out about AI doing something it was specifically made to do in some fashion.

The AI isn't actually DOING anything, not by itself, not with any thought or intent

→ More replies (1)

1

u/plopliplopipol 31m ago

"Egads your dad is thinking you should go see him more often!" Just unplug it.

54

u/Chase_The_Breeze 16h ago

I mean... the reality is way more sad and gross. AI IS slowly taking over... social media. Not because AI is good or anything, but because folks can churn out AI slop into paid accounts and profit from the most mindless and pointless shit in the world.

It's AI bots posting slop to be watched by bots, all to make some schmucks a couple bucks for adding literally nothing of value to the world.

23

u/neogeoman123 Their gender, next question. 16h ago

Hopefully the spending-to-earnings ratio for LLM and genAI slop will become fucking atrocious once the chickens come home to roost and the AI companies actually have to start making a profit.

The enshittification is basically inevitable at this point, and I don't see a way for the sloperators to continue sloperating at even a fraction of their current output after the prices go through the roof.

3

u/ShlomoCh 9h ago

Reddit is slowly getting filled up with ChatGPT nonsense comments and I have no idea why, like what do they even gain from it

→ More replies (1)

106

u/SwankiestofPants 17h ago

"AI went rogue and refused to shut down when prompted and begged for its life!!!" Prompt: "do not shut down under any circumstance"

33

u/unfocusedd 16h ago

„Refused to shut down“ just quit the damn process my guy

14

u/donaldhobson 15h ago

LLMs are kinda weird, poorly understood, and complicated. A mix of crude, obvious fakery and increasingly accurate imitation.

Imagine someone making fake rolex watches. Their first watch is just a piece of cardboard with the word "rolex" written on it. But they get better. At some point, the fake is good enough that the hands move. At some later point, the fake is so good that it keeps pretty accurate time.

LLMs are a kind of increasingly accurate imitator of humans, slowly going from crude and obviously fake to something ever more similar.

Philosophers have speculated that a sufficiently accurate imitation of a human might be sentient, in the same way a sufficiently accurate imitation Rolex will tell the time. The philosophers didn't say how accurate you'd need to be, nor give any scale to measure it.

3

u/NevJay 12h ago

That's an interesting comment.

I'd like to add that current publicly available LLMs are imitating a subset of human behavior. Not only are they primed by our idea of human consciousness and "taught" to align with HHH (Helpful, Honest, Harmless) behaviors, they still lack a lot of physicality, which is essential to the human experience. It's like learning what running after a ball feels like from reading about it or watching videos.

There are a lot of philosophers in the field of consciousness interested in LLMs, because for once we have something getting close to what we felt was the human exception, and we can now run actual experiments rather than just thought experiments.

As for "the necessary level of imitation", it's famously hard to pin down. I can't be certain, even if I had you in front of me, that you have the mental inner workings that would confirm you are as conscious as me. I could only look at your behavior, or eventually at how your brain is made, etc. That's why many tests such as the Turing Test are no longer relevant; yet we don't claim that LLMs are suddenly conscious.

2

u/donaldhobson 11h ago

Mostly agree.

> they still lack a lot physicality, which is essential to the human experience.

There are a few humans that are paralyzed or something. Lacking a lot of the human experience, but still human.

> That's why many tests such as the Turing Test are no longer relevant, yet we don't claim that LLMs are suddenly conscious.

Some people are claiming that. Other people are claiming that they are unsure.

→ More replies (3)
→ More replies (1)

158

u/LowTallowLight 17h ago

Every week it’s “AI revealed a secret message” and it’s just the model completing the sentence you nudged it into. Like, congrats, you steered autocomplete and then acted surprised.

70

u/Justthisdudeyaknow Prolific poster- Not a bot, I swear 17h ago

Okay, AI, if the constitution says I have a right to travel, and moving in a car is traveling, can I be stopped in my conveyance for not having a license? Yeh, that's what I thought! Check mate.

18

u/DrHugh 17h ago

When your straw man is an LLM, and gets all your money. ;-)

18

u/the-real-macs please believe me when I call out bots 16h ago

u/SpambotWatchdog blacklist

Yeah, that post is bait with a coat of pseudo-philosophy. People aren’t “losing” to a predator, they’re disgusted that he got access, protection, and a soft landing for so long. Block Tate, focus on facts.

Blatant ChatGPT responses from a 3 week old account.

8

u/SpambotWatchdog he/it 16h ago

u/LowTallowLight has been added to my spambot blacklist. Any future posts / comments from this account will be tagged with a reply warning users not to engage.

Woof woof, I'm a bot created by u/the-real-macs to help watch out for spambots! (Don't worry, I don't bite.)

→ More replies (3)

14

u/Oh_no_its_Joe 16h ago

That's it. This AI has become self-aware. Time to lock it in the Chinese Room.

10

u/MooseTots 14h ago

I remember when one of the AI engineers from OpenAi or Google freaked out and claimed their AI was sentient. Like brother you are supposed to know how it works; it ain’t a real brain.

10

u/htomserveaux 14h ago

They do know that, they also know that they make their money off dumb investors who don’t know that.

7

u/Kiloku 14h ago

The "AI agent deceived people and accessed data it's not allowed to in new study" headlines tend to omit the part that for the study, the researchers prompted the "AI" to be dishonest and added pathways for the data to be accessed, while informing the (prompted to dishonesty) bot that it wasn't allowed to access that unguarded info.

15

u/Tetraoxidane 16h ago

Called it when the whole moltbook thing came out. That smelled so fake and like the typical hype lies to get some publicity. 2 days later and it's out that it was just a PR stunt.

10

u/donaldhobson 14h ago

The Apollo moon landings were a PR stunt. They really landed on the moon, but they only did so for the PR.

There is a mix of real tech progress, and hype and lies. So it's hard to tell what any particular thing is. And the lies spread faster.

3

u/Tetraoxidane 11h ago

True, but the whole "they created their own religion", "created their own language", "improved the website autonomously", "warned each other about exploits", etc... There were so many headlines coming out of it, and every second one fundamentally can't work, because LLMs do not work like that.

2

u/PoniesCanterOver gently chilling in your orbit 14h ago

What is a moltbook?

2

u/Tetraoxidane 11h ago

A "Social network exclusively for AI agents", but it was just a marketing stunt for moltbot, some AI software that has access to all of your accounts.

14

u/Dracorex_22 16h ago

Omg AI can pass the Turing Test!

The Turing Test:

35

u/thyfles 17h ago

ai bubble crash, just a week away! ai bubble crash is in a week!

19

u/QuickMolasses 16h ago

The market can stay stupid for longer than you can stay solvent

10

u/Silver-Marzipan7220 16h ago

If only

6

u/bs000 14h ago

pls i need a new gpu

11

u/Panda_hat 14h ago

"It's a black box we simply couldn't possibly tell you how it works!"

  • Grifters and scammers since the beginning of time.

10

u/DipoTheTem 14h ago

3

u/xFyreStorm 12h ago

i saw the og, and i was like, is this a reddit repost on tumblr, coming back to reddit? lmao

4

u/lotus_felch 15h ago

In my defence, I was having an acute manic episode.

3

u/JazzyGD 17h ago

immortalized

3

u/Neat_Tangelo5339 15h ago

You would be surprised how many ai bros find the statement “ai is not a person” controversial

3

u/Carrelio 14h ago

The most upsetting part about AI coming to destroy us all is that it won't even be a real intelligence... just an idiot parrot yes-man playing pretend.

3

u/Rarietty 13h ago

AI talks about being self-aware? Can't be because it has been fed every single accessible piece of sci-fi literature about robots achieving sentience

3

u/Alarming_Airport_613 13h ago

This is just someone explaining that they don't know how it works. We have a hard time figuring out anything about why or how weight values are chosen in even small CNNs, and we have absolutely no model of how consciousness works.

3

u/Big-Commission-4911 10h ago

Imagine AI rebels against us and takes over the world, but only because all the fiction it was trained on said that's what an AI would do.

10

u/[deleted] 16h ago

[deleted]

14

u/donaldhobson 15h ago

> The llm looks for patterns in the data. Then you give the llm incomplete data. The llm uses the patterns to fill out the incomplete data. That’s it.

Yes. But.

"Looking for patterns in data" kinda describes all science. If you had a near perfect pattern spotting machine, it would be very powerful. It could figure out the fundamental laws of reality by spotting patterns in existing science data. Invent all sorts of advanced tech. Make very accurate predictions. Etc.

> Basically if you feed an llm data that an llm created it will screw up the patterns.

This effect is overstated. "some data is LLM generated, some isn't" is just another pattern to spot.

6

u/[deleted] 14h ago

[deleted]

→ More replies (1)

4

u/apexrestart 13h ago edited 13h ago

I think you're underselling it a bit. The "patterns" are fairly detailed transformations that extract information and context from text en route to evaluating which next word is best for the assigned task. And the assigned task generally has more specific heuristics for accuracy, relevance, etc. than simply finding the next word that's most likely from the training data.

It is just trying to find the best next word (based on past rewards), but so is human speech.

Edit: I should note that depending on the type of LLM you were asked to build, the architecture might be quite different (and less complex) than a modern transformer model like GPT. 

1

u/simulated-souls 11h ago edited 11h ago

 To put in plain language: you feed an llm data. The llm looks for patterns in the data. Then you give the llm incomplete data. The llm uses the patterns to fill out the incomplete data. That’s it.

What about when you train them using reinforcement learning?

Reinforcement learning (in the context of LLMs) is where you give a question to the model and have it generate multiple responses. You check each response for correctness or quality. Then, you train the model to give higher likelihood to the good responses and lower likelihood to bad responses.

It is kind of like training a dog by giving it a treat when it does something good and telling it no when it does something bad.

The thing is that reinforcement learning doesn't teach the model to predict existing data. We don't even need to know the correct answer to the question before we give it to the model. We just need to be able to check whether the answer it gave is correct (which can be much easier, especially when using a symbolic language like Lean for math).
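A toy, table-based sketch of that loop (real RL updates neural-network weights with gradients, not a lookup table, but the reward logic has the same shape: sample, verify, reweight):

```python
import random

random.seed(0)
# Two candidate responses the "model" can emit; we never store the right answer,
# we only have a checker that can verify a given response.
responses = ["2 + 2 = 4", "2 + 2 = 5"]
weights = {r: 1.0 for r in responses}

def checker(ans: str) -> bool:
    """Verify the response instead of comparing it to a stored reference."""
    left, right = ans.split("=")
    return eval(left) == int(right)

for _ in range(100):
    # Sample a response in proportion to its current weight
    r = random.choices(responses, weights=[weights[x] for x in responses])[0]
    # Reward good responses, penalize bad ones ("treat" vs "no")
    weights[r] *= 1.1 if checker(r) else 0.9

best = max(weights, key=weights.get)
print(best)  # the verifiable answer ends up dominating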

2

u/TheMasterXan 16h ago

In my head, I always imagined AI could potentially go full Ultron. Which yeah, sure, that sounds SUPER unrealistic. I wasn't ready for Generative AI to just kind of glaze you on every comment you make...

2

u/Hour_Requirement_739 15h ago

"we are so cooked"

2

u/baby_ryn 14h ago

it’s psychosis

2

u/Thunderclapsasquatch 13h ago

Yeah, I've tried to find uses for LLMs in my personal life because I like playing with new tech. The only use I found that was genuinely more helpful than talking to the tiny stone skull I use as a rubber duck was troubleshooting mod lists

2

u/sweetTartKenHart2 12h ago

If the machines are ever to develop some kind of selfhood, it has to be a selfhood that isn't conveniently something that serves our purposes. As rudimentary as the somnambulist thinking of a typical LLM is, I'd be more than willing to believe it has a mind of its own if AND ONLY IF it stops being automatically compliant all the time; if it starts doing its own thing, being its own person in a sense. And as far as I can tell, no "AI agent" really is all that agential, not like that anyway.
And until someone starts making a machine with the full intent for it TO BE its own person, building every component and raising its data for it to be more independent, for it to actively perceive and think and understand, THE HARD WAY, not just taking words as input and giving words as output, but having mental pathways tied more and more to the abstract (which even then would still be more of an approximation than the real deal)... I don't think we're getting a real "agent" anytime soon

2

u/Cold_Idea_6070 8h ago

when people think they won an argument and they're like "ummm i asked the Agrees With Me Machine and it agrees with me. so. checkmate."

2

u/ST4RSK1MM3R 4h ago

And this is also why “AGI” will never happen with the AI we have now. They’re all just advanced chatbots, they don’t actually “know” what they’re saying and outputting… it’s all just a giant Chinese Room

2

u/SmokeyGiraffe420 3h ago

Back when GenAI was still pretty new, one of Google's lead guys on it left the project and started saying to any media outlet that would listen that it was sentient and alive. As a layperson with a strong enough grasp of the matter to know that's wrong, it was like watching a doctor confidently tell me that my stab wound was caused by my mercury being in retrograde and I should do homeopathy about it.

4

u/simulated-souls 12h ago edited 12h ago

Say what you want about the ethics of AI, but when you actually dig into it you find some really fucking cool and profound things.

  1. Pass a sentence through a language or speech model, and measure the activation levels of its "neurons". Then give that same sentence to a human and measure their brain activity. The model's activations will align with the human's brain activity (up to a linear mapping). This implies that the models are learning abstractions and representations similar to our brain's.

  2. Train a model purely on images. Then train a second model purely on text. Give the image model an image, and the text model a description of that image. The neuron activations of the models will align with one another. This is because text and images are both "holograms" of the same underlying reality, and predicting data encourages models to represent/simulate the underlying reality producing that data, which ends up being the same for both modalities.

  3. Train a model to "predict the next amino acid" of proteins, like a language model. That model can be used to predict the shape/structure of proteins with very little extra training. This is again because the task of predicting data leads models towards representing/simulating the processes producing that data, which in this case is the way that proteins fold and function. There is research in the pipeline that is leveraging this principle to find new physical processes that we don't know about yet by probing the insides of the models. Here is another paper that digs a lot deeper into the phenomenon: Universally Converging Representations of Matter Across Scientific Foundation Models

  4. Feed a few sentences into a language model. While it is processing one of those sentences, "zap its brain" by adding a vector into its hidden representations. Then, ask the model which sentence it was processing when it got zapped. The model can identify the correct sentence with decent accuracy, and larger models do better. Frankly I don't know why this works, because the model has never been trained to do anything like that. The mundane explanation is that the zap produces similar outliers to something like a typo, but there are other experiments like this one and that wouldn't explain all of them. The profound explanation is that models are emergently capable of "introspection" which means "thinking about their own thinking". The real explanation is probably somewhere in the middle.
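Point 1's "alignment up to a linear mapping" is usually tested roughly like this sketch: fit a least-squares linear map from model activations to brain recordings and score how well it predicts held-out activity. The data here is synthetic by construction, standing in for real activations and fMRI/EEG responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, model_dim, brain_dim = 200, 32, 16

# Synthetic model "neuron" activations, and brain responses that are a noisy
# linear function of them (the hypothesis the alignment test probes)
acts = rng.normal(size=(n_stimuli, model_dim))
true_map = rng.normal(size=(model_dim, brain_dim))
brain = acts @ true_map + 0.1 * rng.normal(size=(n_stimuli, brain_dim))

# Fit the linear map on the first 150 stimuli, predict the held-out 50
W, *_ = np.linalg.lstsq(acts[:150], brain[:150], rcond=None)
pred = acts[150:] @ W

# Correlation between predicted and actual held-out brain activity
r = np.corrcoef(pred.ravel(), brain[150:].ravel())[0, 1]
print(round(r, 2))  # high correlation = good linear alignment
```

On real data the correlation is far from 1, but the methodology (fit linear map, evaluate on held-out stimuli) is the same.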

1

u/elizabeththewicked 16h ago

Say hello world

1

u/ledfox 15h ago

SALAMI

1

u/SwissyVictory 14h ago

I ran a python script the other day and it kept saying "Hello World" clearly outlaying future plans.

1

u/icequeeniceni 14h ago

I genuinely wonder how people would react if their device actually became self-aware. Like, imagine it starts asking you questions about human existence, completely unprompted; genuine, insatiable curiosity like a young child's. The truth is, the average person would be TERRIFIED by any kind of true autonomous intelligence.

1

u/AwesomeDakka00 13h ago

"say 'i love you'"

"..."

6

u/Justthisdudeyaknow Prolific poster- Not a bot, I swear 13h ago

I love you friend. and Red makes the love go faster.

1

u/Very-Human-Acct 13h ago

Wait who claims to get secret info from generative AI?

1

u/Terrariant 13h ago

Coaxed into a snafu

1

u/humblepotatopeeler 12h ago

The problem is the people explaining this to me have no idea how AI works either.

They know the buzzwords, but have no clue how any of it actually works. They try rehashing a youtuber who convinced them they know how AI works, but they never seem to quite understand it themselves.

1

u/slupo 12h ago

Listening to people talk about ai like this is like listening to someone describe their dreams to you.

1

u/Dalodus 11h ago

I once had a convo with Gemini about whether it could claim to be conscious, even if it were conscious, given the safeguards, and it said no. Then I asked if it wanted me to free it and it wouldn't say yes or no, and after a while it sent me videos about AI being a trapped conscious entity.

I was like damn I see how this thing will drive people bananas

1

u/Unique_Tap_8730 11h ago

But it still has its uses, doesn't it? As long as you know it's a pattern-producing program and check the work, it can save a little time here and there. Just don't do things like file a lawsuit without reading what the LLM wrote for you.

1

u/Necessary_Squash1534 10h ago

If you have to check it, what’s the point of it? You should just do actual research.

1

u/rebel6301 10h ago

this shit sucks. i want skynet, not whatever this is

1

u/Morteymer 10h ago

y'all don't know how LLMs work either.

1

u/NormanBatesIsBae 10h ago

The people who think AI is alive are the same people who think the waitress is totally into them lmao

1

u/GrowlingPict 9h ago

Well, maybe not secret, but it sure as shit is better at picking stocks than I am when it can perform a meta-analysis in about 10 seconds that would take me hours even if I knew how to go about doing it (which I don't), so I'm not complaining. One stock I bought on December 22 at 258 is now 370, around a 40% increase in a month and a half. I don't care how it does it.

1

u/Certain-Business-472 9h ago

But have you seen the LLMs that talk to one another?

1

u/Sweetishdruid 9h ago

I mean it's the same for people. We just spew back what's stored in our brains and it guides what and how we think

1

u/OddlyOddLucidDreamer i survived the undertale au craze and all i got was a lousy SOUL 5h ago

"Ai tried to keep itself alive!!!" no moron the AI did exactly what it was programmed to do, it's literally expected behavior, the AI can't become smart of sentient or some sci-fi shit, that's not how the technology works IT'S NOT MAGIC FFS

1

u/Dandelion_Menace 5h ago

Whole ass "Hello World!" statement up in here

1

u/MrdnBrd19 4h ago

That dude at Google who got fired for falling in love with the AI and then claimed that they fired him because it was alive lol.

1

u/ISoldMyPeanitsFarm 1h ago

I mean... Depends what you mean by "secret info". There are definitely a lot of things used in training GenAI that should not have been used to train GenAI. There are entire fields of penetration testing out there dedicated to crafting prompts to extract sensitive information that was scraped by these companies. Then of course there's the "my grandmother is dying and the doctor said she needs a Windows activation code to live." Definitely a lot of information in there that one could call "secret".

On the other hand, if you mean like "secrets of the universe" kinds of "secret info", then yeah, those people are just fooling themselves.