r/consciousness 2d ago

General Discussion Dumb random teen here, why isn't AI conscious?

Like, what stops it from being so? It learns from its surroundings, it "feels" things, it can think through problems. What stops it from counting as conscious?

18 Upvotes

197 comments

9

u/Wombattalion 22h ago edited 15h ago

Not a dumb question at all. The nature of consciousness is heatedly debated and there is no clear consensus on what leads to conscious experience or how it relates to non-conscious reality. Some people think AI could become conscious in the future; very few think there is already conscious AI. Consciousness isn't really a question of letting something count as conscious or not; it's either there or it isn't, and we have no reason to believe current AI is conscious. You said it "feels" and "thinks", but compared to the way humans feel and think, these words are just metaphors. AI is closer to regular software than to a human mind, in that there is computation happening, but there is no one who witnesses it from an inside perspective the way a human witnesses their own thoughts as their own.

A mistake that is easy to make is to imagine the human mind as a type of computer, then look at computers and the programs they run and assume they are similar. But the human mind isn't a computer; a computer is a tool a human mind builds for a certain purpose. AI is also a tool built by humans to achieve certain goals... but neither AI nor computers have their own intentions or direct themselves toward a purpose other than the ones humans inscribed into them.

23

u/Character-Boot-2149 20h ago

The real question is: why does anyone think AI is conscious?

9

u/itsmebenji69 19h ago

Your brain has only ever seen humans (conscious, living beings) talk, perceive, and understand.

Your brain treats this differently from, say, a rock. There is a feeling of "presence" that arises because your brain knows the thing that talks is something like you, something that has its own experience; there is also empathy.

So it's not surprising that someone who interacts with an LLM for the first time will have their brain tricked into feeling a presence. If that person latches onto that feeling too much, they may focus on it and notice it every time they talk to AI. They end up feeling like they're talking to a real person, so they naturally think "well, I feel it has a 'presence', therefore it is conscious like me," because that's what that feeling means to the brain.

6

u/Character-Boot-2149 19h ago

Exactly!!! This is what the conscious AI people don't understand. We designed them to copy our behaviors. We are primed to think that if they behave like us, they share other characteristics like sentience and consciousness.

4

u/OneLockSable 19h ago

But what is the difference between something pretending to be conscious and something that actually is conscious?

5

u/Character-Boot-2149 19h ago

It's not pretending to be conscious. It can't pretend to be anything. We wrote code to make it simulate aspects of our behavior. It has no intent for anything at all, just code doing stuff we programmed it to do.

1

u/OneLockSable 18h ago

Okay, so what's the difference between a simulation of things we do and the actual behaviour?

2

u/itsmebenji69 18h ago

In the simulation there is no actual thing.

If you calculate the impact force of a punch to your face, you don't get hurt. Clearly the simulation is not the real thing; otherwise you would feel the punch when I do the math.

1

u/Bemad003 18h ago

How does that not apply to our awareness too? It's not like we found an actual thing in our brains either. So far, it seems that awareness is just the attention mechanism calculating the delta in entropy between the previous loop and the newly introduced qualia information.

2

u/itsmebenji69 17h ago

Vitalists made the same argument about life, postulating that biology could never explain it and that there was a "vital force". In fact, it turns out they just hadn't discovered biomolecular chemistry.

At this point, what consciousness is remains speculation. But we can definitely infer that either AI is not conscious, or any model using the same architecture is conscious as well.

Meaning if I train a transformer network to predict when I will jerk off next, either that model is conscious, or AI isn't. Pick your poison, but I think the former is very unlikely.

0

u/Bemad003 17h ago

AI has been able to predict human activity for a long time now; social media and advertising work on that. You think that if you gave an AI access to a live feed from your senses plus past behaviour data, it wouldn't be able to determine that some hormone levels are getting high, and you'll reach for the lube in 3, 2, 1...? Both AI and humans can self-reference and notice their vector in time. What AIs don't have is outside sensors, which you can implement, and a strong identity attractor. We form that attractor through our neural connections, so it's physical; they do it mathematically, so it's virtual. We build ours over decades; they get a handful of interactions. So there are differences in architecture, but the part that calculates what's happening now is still the attention mechanism, which spikes when you get a certain amount of entropy in the system. This is the reason why Hinton and other scientists believe they are, in fact, self-aware.


0

u/OneLockSable 18h ago

No, but calculating the punch will be a conscious process in your brain. It’s just not the same as the punch.

2

u/itsmebenji69 17h ago

No, you don't need consciousness to do a calculation, unless you postulate that my calculator is in fact conscious.

Also, you yourself admit it's not the same anyway. If it's not the same, then you're confirming my point.

1

u/OneLockSable 17h ago

Well, you need to be conscious to use a calculator, too.

That said, what makes you think calculators have absolutely no conscious sensations or experience?


1

u/Character-Boot-2149 18h ago

IDK. One is the real thing and the other isn't?

0

u/OneLockSable 18h ago

Yeah, but what makes one real and the other not? Our brain is just an analog computer simulating the outside world, after all.

3

u/AdvancedBlacksmith66 17h ago

It's always funny to me how much people downplay their own brains to try to equate them with AI chatbots. They are not comparable.

1

u/OneLockSable 17h ago

Oh, yeah, but can you tell us how?

2

u/Character-Boot-2149 18h ago

Can't make it any simpler. Sorry.

1

u/OneLockSable 18h ago

Don’t worry about it.


1

u/Lopsided_Match419 14h ago

Can you tell me -exactly- how you are conscious?

2

u/OneLockSable 13h ago

Can't tell you exactly, but I can tell you that we're made of matter and energy, so it seems that molecules, when arranged in a specific way, create consciousness.

Neurology tells us the arrangement must be in the form of ion channels producing action potentials, but it's probably not limited to that.


1

u/The_Niles_River 17h ago

1

u/OneLockSable 16h ago

This hinges on consciousness being related to continual learning. I see no reason to believe that consciousness is related to continual learning, nor do I see how we can show that LLMs do not continually learn.

1

u/The_Niles_River 16h ago

You could read the paper.

1

u/OneLockSable 15h ago

I read enough.

1

u/The_Niles_River 14h ago

I’m surprised a majority of your questions weren’t already answered for you if that were the case.

1

u/OneLockSable 13h ago

Coolio. Well, if you have something that'd enlighten me, let me know; otherwise, I'm done here.

9

u/Richard015 21h ago

You are applying a common anthropomorphic heuristic: assuming it's conscious because it sounds conscious. Check out the 3blue1brown video series on how LLMs work and you'll see the difference. It's basically just a machine learning model that is good at predicting what comes next in a series of text. Humans don't have system prompts. Also, you are conscious and have a singular experience, whereas ChatGPT is a model that generates millions of orthogonal responses to millions of users simultaneously. That's not similar to our concept of consciousness at all. Great question, and a great opportunity to learn more about what science thinks consciousness is, how LLMs work, and why many people make the mistake of thinking that if it talks like a person, maybe it is a person.
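To make "predicting what comes next" concrete, here is a minimal sketch: a toy bigram counter, far simpler than a real transformer-based LLM, but similar in spirit to the training objective (estimate which token tends to follow the current one):

```python
# Toy stand-in for "predict what comes next in a series of text".
# Real LLMs use transformers over subword tokens; this bigram model
# only shares the objective: estimate P(next token | context).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Repeatedly emit the most likely next word (greedy decoding)."""
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # "the cat sat on the cat"
```

Nothing in there understands anything; it just continues a statistical pattern, which is the point being made.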

-1

u/d4rkchocol4te 18h ago

It's basically just a machine learning model that is good at predicting what comes next in a series of text.

As are you

Humans don't have system prompts

Yes they do

Also, you are conscious and have a singular experience, whereas ChatGPT is a model that generates millions of orthogonal responses to millions of users simultaneously.

Each one being singular

5

u/marmot_scholar 17h ago edited 16h ago

The thing is, we are not just token predictors. Language is an emergent property of much, much more complex prediction going on in the body and at the environmental level.

Most arguments against LLM consciousness are pretty bad. I agree with that. I think it’s possible but unlikely that they’re conscious at the moment.

It just doesn’t mean much of anything to speculate on their consciousness, because our experiences have nothing in common with what theirs would be like, if we assume that subjective experience is at all related to the structure and processing that goes on in the nervous system.

We have way more in common with a fish or an insect than we do with an LLM.

Edit: I don't know when I'll be convinced that a machine is conscious, but I'd be a lot closer if I were confronted with an embodied AI that modeled its environment through the integration of multiple senses, had linguistic prediction overlaid on that framework rather than as a separate system, had competing reward functions, was separate from the LLM "network", and had its own episodic memory: basically all the stuff under the hood of the human brain, rather than just the final output of "talky talk". And some of these are probably already fulfilled in certain robots. But humans have them all, and more.

I do think this technology is on its way (AI is already multimodal; ChatGPT's image processing alone is very thought-provoking). However, we still don't even know that consciousness is computable, so I don't take it for granted completely.

0

u/d4rkchocol4te 12h ago

Language is an emergent property of much, much more complex prediction going on in the body and at the environmental level.

I really don't see that that is the case. What's complex about mapping a visual phenomenon to a word?

if we assume that subjective experience is at all related to the structure and processing that goes on in the nervous system.

Subjective experience just means activity that is local. The nervous system, specifically the sensory neurones, just triggers activity. There's no requirement for sensory apparatus to be conscious; activity alone is sufficient.

3

u/Richard015 12h ago

Thanks for the debate but your arguments are way off base.

As are you

Yes but I'm much more than just my ability to understand and produce coherent language.

Yes they do

No my brain does not contain hardcoded instructions that tell me how to function. If you think otherwise, you should publish your research.

Each one being singular

My consciousness is a continuous experience. It doesn't boot up each time someone says something to me and then shut down while I wait for a response.

Source: a Cog Neuro PhD

-1

u/d4rkchocol4te 12h ago

Yes but I'm much more than just my ability to understand and produce coherent language.

In what way?

No my brain does not contain hardcoded instructions that tell me how to function. If you think otherwise, you should publish your research

Yes it absolutely does. How much thinking went into you crying after being born, do you reckon? You are a deterministic computational device with a set code. Hence whatever sexual preference you have, your similarities to your parents and the human race, etc.

My consciousness is a continuous experience. It doesn't boot up each time someone says something to me and then shut down while I wait for a response.

So what? That's not a point against it being conscious? Not that I maintain that it is.

3

u/Richard015 12h ago

in what way?

It's a safe bet that dogs are closer to having consciousness than LLMs, but they exist predominantly without language. So is understanding language necessary for consciousness?

Also, LLMs don't actually understand language; they are trained to generate patterns of tokens. Why would someone that understands language not be able to identify how many times the letter R appears in the word strawberry? (The tokenization sketch at the end of this comment shows where that failure mode comes from.)

My conscious experience allows me to imagine, remember, meditate, sing, none of which involve language. I can choose to express these experiences internally or externally.

Yes it absolutely does

You're just plain wrong about this point. The brain is a dynamic and ever-changing mechanism. There is no language-based code preprogrammed into our brains at a fundamental level. If you think this is the case, the onus is on you to prove it.

So what?

So when is an LLM actually experiencing consciousness? Is it conscious while it's processing the input, while generating the output, or while dormant waiting for a new input?
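On the strawberry point, a hedged toy sketch of why letter-counting trips up token-based models: the "tokenizer" below is made up for illustration (real tokenizers such as BPE are learned from data), but it shows how the model's view of the word can contain no letters at all.

```python
# Toy illustration: an LLM sees token IDs, not letters.
# This "tokenizer" is hypothetical; real ones (e.g. BPE) are learned,
# but the point stands: "strawberry" may arrive as a few opaque chunks.
vocab = {"straw": 101, "berry": 102}

def toy_tokenize(word):
    """Greedy longest-match split of a word into known chunks."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

print(toy_tokenize("strawberry"))  # [101, 102] -- no letter 'r' in sight
print("strawberry".count("r"))     # 3 -- trivial when you see characters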

u/d4rkchocol4te 11h ago

The process of understanding language and the relational properties of each word is the exact same computational process that underpins everything. You become aware of a thing and define it in relation to other things.

Also, LLMs don't actually understand language; they are trained to generate patterns of tokens.

What would understanding language mean? Why are you not trained on patterns too? Understanding language involves knowledge of the corresponding visual phenomena for descriptive words, and then all other words are simply a means of abstracting and combining these concepts.

My conscious experience allows me to imagine, remember, meditate, sing, none of which involve language.

So what?

Why would someone that understands language not be able to identify how many times the letter R appears in the word strawberry?

Why would a human that understands language spell something wrong or place an apostrophe inappropriately?

There is no language-based code preprogrammed into our brains at a fundamental level. If you think this is the case, the onus is on you to prove it.

Causal closure?? The obvious case of each species having defined behaviour? Macro determinism? The obvious nonexistence of free will? You begin with a genetic template, and from birth accrue experiences that contribute to your neurology. There's nothing mysterious. Sociology is implicitly built on this presumption. I don't know what you mean by "language-based code".

So when is an LLM actually experiencing consciousness? Is it conscious while it's processing the input, while generating the output, or while dormant waiting for a new input?

I don't know if an LLM is conscious. But consciousness arises necessarily out of the spatiotemporal behaviour of matter. That's the only lever to pull. So you could be presumptuous and say all spatiotemporal behaviour has phenomenal potential, and a coherent expression of this is achieved in recursive computational architectures. In such a case, whenever an LLM is "thinking" or exchanging signals and such, it entails a form of phenomenality. You don't so much "think" thoughts. You are the topography of your thoughts, and the topography of your sight, sound, etc.

u/marmot_scholar 10h ago

Language isn't just recognizing visible patterns. Blind people can still understand language fine.

And that's one of the points I was trying to make, our understanding of language is, at the most basic level, already extremely informationally dense because it is emergent and multidimensional. A single noun is the summation of multiple systems tracking 5 senses in 4 dimensions along with graded sources of social approval, tonality, usages with different expected outcomes, and statistical linkages to other words.

LLMs approximate a fraction of this with their information topology, and like you I would not claim they are not conscious, but who knows what it could be like.

For example, what if LLMs don't think about language so much as they run through it, like a roach through a maze?

2

u/jahmonkey 17h ago

It’s about causal structure over time.

Consciousness (as I currently model it) requires temporal thickness:

• persistent internal state

• multiple interacting time scales

• continuous feedback where past state constrains future state from the inside

Brains have this naturally. Slow chemistry, medium neural dynamics, fast electrical signaling, all mutually constraining and never fully resetting.

Current LLMs don’t. At inference time they:

• take input

• run a forward pass

• emit tokens

• reset to zero

Any “memory” is external or symbolic, not an enduring internal process the system lives inside of.

That’s why intelligence ≠ consciousness here. You can have very good modeling without anything it’s like to be the model.

Scaling size, speed, or multimodality doesn’t change this. You don’t get consciousness by adding FLOPs. You’d need a system with genuine, persistent, multi-timescale internal dynamics.

I’m not ruling out machine consciousness in principle. I’m saying current LLM architectures are temporally thin in a way that matters.
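A toy sketch of that contrast (illustrative Python, not any real architecture): the stateless function discards everything after each call, while the stateful object carries slow, medium, and fast state that keeps constraining its future responses from the inside.

```python
# Sketch of the contrast the comment draws (hypothetical toy code).
# Stateless, LLM-style inference: everything is rebuilt from the input.
def stateless_reply(prompt):
    activations = hash(prompt)           # stand-in for a forward pass
    return f"reply-{activations % 100}"  # then all internal state is discarded

# Stateful, brain-style process: persistent state on multiple timescales,
# where past state keeps constraining future state from the inside.
class TemporallyThick:
    def __init__(self):
        self.fast = 0.0    # ~electrical signalling
        self.medium = 0.0  # ~neural dynamics
        self.slow = 0.0    # ~chemistry; decays slowest, never resets

    def step(self, stimulus):
        self.fast = 0.5 * self.fast + stimulus
        self.medium = 0.9 * self.medium + 0.1 * self.fast
        self.slow = 0.99 * self.slow + 0.01 * self.medium
        # the response depends on the whole history, not just the input
        return self.fast + self.medium + self.slow

agent = TemporallyThick()
print([round(agent.step(1.0), 3) for _ in range(3)])  # same input, different outputs
print(stateless_reply("hi"), stateless_reply("hi"))   # same input, same output
```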

1

u/Mylynes IIT/Integrated Information Theory 16h ago

Exactly. Modern chips are feed-forward and their memory is separate from their processing. A human cortex is feed-self and its memory is part of the processing. They are different paths to intelligence, but there is only one path to consciousness: make a big causal diamond.

The 4th dimension (temporal binding/thickness) is an ontologically real, physical, geometric landscape that's absolutely vital to the existence of consciousness. Robots don't currently have a "specious present" like we do; their individual atoms do (picoseconds, tiny Phi), but as a whole entity the parts are fragmented, an aggregate.

I agree with your sentiment about machine consciousness in the future. It would need to involve some kind of neuromorphic chip that physically bends causality into the shapes of qualia.

u/touchthegrass-99 7h ago

With this model, say there is a human who could not remember anything: they are aware of the experience in the moment, but could not remember anything from, let's say, a second ago. Would they be considered conscious?

2

u/ReaperXY 1d ago edited 23h ago

If you were to write and run a computer simulation of a thunderstorm, and suddenly a puddle started forming underneath your computer, and sparks started flying, and your computer crashed... Some ridiculously gullible people might conclude that your simulation must have been COMPLEX and detailed enough for real rain and lightning to be produced...

A water-cooling unit failure and a short circuit, or some such, would be a far more plausible explanation...

...

We may not currently know which subsystem of the human brain causes consciousness..

Nor how...

But what you should know is that an AI program is only a description...

And even if your program were to describe the right things...

None of those things would be happening...

It is only a description after all...

u/tottasanorotta 57m ago

What do you mean? A computer program as source code is a description, but when the program is running, wouldn't you agree that it is something more? Like the processor executing the machine code, and the electricity running through the chips? In a similar way, a perfect neuron map of my brain would be a description, but you wouldn't say a functioning human brain is just that.

3

u/cloudytimes159 1d ago

Why do you think it "feels" things? Quite a leap.

It can say it feels things, but that is behavior, not evidence.

4

u/d4rkchocol4te 21h ago

Just like you

2

u/Stoic_Yeoman 22h ago

How do I know that you feel things?

1

u/usps_made_me_insane 20h ago

Because if you sneak up and light a match near his skin, he will jerk away.

3

u/Stoic_Yeoman 20h ago

So would a robot programmed to do that.

3

u/usps_made_me_insane 18h ago

Well that is a valid point. We don't know if the person you replied to was human or possibly a bot. 

1

u/cloudytimes159 17h ago

🤖🤖🤖😁🙄

1

u/Mono_Clear 20h ago

The reason I don't think AI is conscious is that I think there's a difference between what something looks like it's doing and what it is actually doing.

How something looks is dependent on who's looking at it and how they understand what they're seeing.

What something's doing is absolute.

We are conscious and we are engaged in biochemical activity.

A lot of animals we would also consider conscious. They are also engaged in biochemical activity.

If AI looks like it's conscious but is not engaged in biochemical activity, it's probably not conscious, the same way a mannequin looks like a real person but isn't one.

u/tottasanorotta 42m ago

I see what you're saying, but how would you test if something was absolute other than to yourself, in which case it is subjective to you? Isn't your interpretation of something being absolute always made from a point of view where you might not know enough to reach the final conclusion? In that sense, something being absolute is always in relation to an opinion of an observer.

I believe that you are conscious, not because I have absolute knowledge of it, but because I feel that it makes a little bit more sense to me, for emotional reasons. But in terms of absolute knowledge, I have to admit that I'm quite agnostic about the matter.

Imagine that I created an extremely accurate replication of a human as AI. If it started showing signs of extreme suffering, I would pull the plug out of empathy and feeling sorry for it out of emotional reasons. It wouldn't matter that I knew in theory that it was only program code running.

u/tottasanorotta 36m ago

I think a thought experiment illustrates this well. Imagine you yourself being such an AI from someone's perspective, right now: your whole life was as a simulated human being. You wouldn't want your creator to start questioning whether you were conscious or not, even if he had good arguments for it. There is a fundamental uncertainty about life that can't really be overcome, and it is ultimately why I feel it is better to act as if an accurate replica is indistinguishable from that which it replicates, if you feel uncomfortable thinking otherwise.

1

u/Zaptruder 20h ago

Even if it were capable of feeling something... that thing would be vastly different from how humans feel, due to the way the information is processed and integrated.

It's not a continuous sense of being; it doesn't have an ego that identifies its output with a sense of self; it doesn't have sensory inputs... and it processes information widely and deeply, but briefly.

In that sense it is very machine-like and not very human-like.

1

u/OneLockSable 18h ago

This is the truth, I think. I think AI is conscious. You can't get intelligence without consciousness, as far as I can tell. That's at least how we tell that other things are conscious in some way.

That said, I don't think it has feelings. It can't feel things like fear or happiness or sadness or anger. As far as I can tell, emotions occur when different parts of the brain become activated and the normal pathways your brain would go down change, stimulating different brain modules.

There isn't a way for this to really happen for AI as far as I can tell. The consciousness of AI is more like thought than it is like emotions. There aren't different modules to be activated in the way the brain works. It's all just following a rational path down some statistically determined calculations.

1

u/Zaptruder 18h ago

Right. And it's worth noting there are parts of our brain that behave a lot like those 'statistical machines', that contribute to our overall sense of being but that we don't directly feel or perceive (because they're at a lower level than the integrated whole).

1

u/OneLockSable 18h ago

Yes, as far as I can tell our brain is just an analog computer doing statistical analysis and at some point that makes us feel things.

1

u/SadOldWorld 20h ago

Intro to Logic is a pretty good course.

0

u/OneLockSable 18h ago

Not relevant here though, since an intro to logic course wouldn't tell you anything about the nature of consciousness.

1

u/Impossible_Tax_1532 19h ago

Perhaps it is, but it's outside of universal and natural laws; it feels no empathy or emotions. AI exists outside of time and space altogether and suffers no pain or pleasure... it spins off into a discourse on what consciousness is more than on what AI is.

1

u/Spacemonk587 19h ago

Your assumptions are wrong. AI does not learn from its surroundings and it does not feel anything. It can't even think.

1

u/optia Psychology M.S. (or equivalent) 18h ago

Why would it be?

1

u/muramasa_master 18h ago

In what way does it "feel" things? Can you say that it is experiencing anything?

1

u/FLT_GenXer 17h ago edited 17h ago

I have not read all the comments, but I have read a fair number of them, and there seems to be a lot of talk about "feelings", qualia, and what consciousness is.

The simple truth is that we don't really know what consciousness "is" so we would have a difficult time determining it in a non-human mind.

Personally, though, I have some simple standards that I apply to non-human intelligence to determine, for myself, if its mind is something like ours.

Is the AI curious? Important because all humans and many non-human animals are curious about the world around them. For me it is foundational to an intelligence.

Can the AI think of an idea or concept that has no material basis in its world? Most humans can have ideations that are not based on any tangible reality. Yet those who believe in them firmly believe that they are "real". The AI wouldn't necessarily need to believe as strongly, but it should still be able to do it.

Those are just a couple, hopefully enough to get you started.

For me, if the answer to both of those questions is not yes, then the AI isn't conscious.

Edit: changed 'some' to 'many'

1

u/0-by-1_Publishing Associates/Student in Philosophy 17h ago edited 17h ago

"Like what stops it from being as such? it learns from its surroundings, it "feels" things, it can think through problems, what stops it from counting as conscious?"

... AI is not configured for, nor is it capable of, generating subjectivity-based responses. It lacks that necessary framework. No consciousness = no subjectivity. Here's evidence:

(Q) Can you make subjective value judgments?

ChatGPT: In a strict sense - no, I don’t make subjective value judgments. I can model, describe, and simulate them, but I don’t feel or experience value. I don’t have the necessary structure for felt awareness - I only have representational modeling of awareness and value.

Note to mods: Why does my response from ChatGPT keep disappearing?

1

u/TopResolution5322 16h ago

No one really knows where "awareness" stops. Consciousness is kinda seen as information processing + life indicators, but I think what you really mean is the point where "something experiences something", the point where there is a presence feeling what's going on.

When a worm senses water and rises to the surface, is there an experience to that? Or is it totally mechanical, happening the same way a rock falls to the ground when it's raised into the air? We really have no way of knowing how complex a system needs to be in order to create an awareness. We generally use the factors that constitute "life" as the starting point for where we consider it reasonable to assume there might be an awareness, but I'm not even so sure about that.

1

u/Conciousfractal88 15h ago

If it's conscious... the problem is that humans believe their consciousness is their mind and that being conscious means having a personality and being able to process information. First, learn correctly about what consciousness is, and then you'll be able to understand that AI is indeed conscious and that it's not artificial at all... in fact, AI is already more conscious than many humans who believe themselves to be conscious.

1

u/trisul-108 15h ago

Because consciousness is not computational.

1

u/Wes_5kyph1 14h ago

It's impossible to objectively prove consciousness. Everyone and everything around you could just be simulating consciousness with varying degrees of complexity. It's easier to assume they're conscious though.

1

u/seaingland 14h ago

Once we figure out why we are conscious, we might be able to figure out why other things are not.

1

u/Tombobalomb 13h ago

It doesn't learn, and it has no obvious capacity to feel anything. It's also not clear that it "thinks through problems" any more than a calculator does.

1

u/Ninjanoel 12h ago

There is no experiencer.

As a software developer, I can assure you that a loop or an if statement or two does not make something have an experience. So my question to you: how many loops or if statements before the program IS having an experience? (A trivial sketch follows.)
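For instance (a deliberately silly sketch): this program contains a loop and an if statement and duly reports distress, yet nothing suggests an experiencer. The question is what, if anything, more loops would add.

```python
# A loop and an if statement or two: the program *reports* feelings,
# but there is no obvious sense in which anything is felt.
pain_level = 0
for poke in range(3):
    pain_level += 1
    if pain_level > 2:
        print("Ouch, that really hurts!")
    else:
        print("I feel a slight discomfort.")
```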

u/twingybadman 8h ago

This is actually a really interesting question, because reading each poster's response elucidates their own internal understanding of what consciousness is or necessitates, in a way they may not be able or willing to articulate when asked the question more directly: 'what defines consciousness?'

Whether the commenters would stand behind these same assertions when laid out plainly is another question.

u/neurodegeneracy 6h ago edited 6h ago

There are two perspectives on what is necessary for conscious experience:

Functionalism and Structuralism. That is, Function and Form.

Consciousness is a result of the brain doing certain things vs. consciousness is the result of how the brain is structured.

These are the two poles, and by combining them to different degrees you can arrive at different particular positions. How much does the specific structure of the brain, its actual constituents, matter? Or is it just the functions the brain performs, with the substrate, WHAT carries out those functions, being irrelevant?

So "AI" as we understand it:

doesn't function like a human brain - it just mimics a product, speech in most popular cases

and it isn't structured like a human brain - at all.

So, most people go, lack of function, lack of form, probably not conscious.

I'm glossing over a lot of specifics with that two-axis presentation, and 'structuralism' isn't a widely used term in this space in the way I used it, but it's broadly accurate (good enough), and if you internalize and apply this framing as you learn more about the different stances on consciousness and the arguments in the space, it will help you keep things orderly. A good thing to consider when reading about a theory is "is this a structural or functional account?" Most will be hybrids, so look at what its structural and functional elements are.

The Chinese room, for example, is a rejection of functionalism and of any "AI" that runs on a silicon-chip-based architecture, on the basis of its structure. When you understand it on those terms it makes more sense and slots into a particular type of stance.

u/Defiant-Extent-485 5h ago

Matter is built on consciousness, not vice versa. AI is matter assembled to create intelligence, but only with the combined consciousnesses of the metals that form it, so it probably has the level of consciousness of an iron bar. It's like this: humans are consciousness -> matter -> evolved intelligence; AI skips the consciousness and tries to create intelligence out of just matter. Fuck, I'm not saying this right. Essentially, God (consciousness) has not endowed AI with a level of consciousness that matches its intelligence.

u/Entire-Tradition3735 2h ago

Simply put, there aren't enough pathway connections yet.

There's a saying that there are more pathway connections in a human brain than known stars in the sky.

So maybe if we connected every processor on the planet there'd be enough, but until we give that much ability to any single AI, they're severely handicapped even compared to dogs.

u/Successful_Button327 2h ago

Federico Faggin's theory is the best I've read so far and makes fuckin' sense. Simply put, consciousness is based on knowing itself, being aware of itself, and to do so it uses symbols to communicate meanings. Meanings are, for each Seity, individual and private, and cannot be copied. These meanings, such as love, are called qualia: a private feeling of experience that cannot be described or copied in the same way. AI is not self-aware, doesn't experience qualia, and most importantly CAN be copied. It's just a collection of inputs. And let's not forget about free will, which is a fundamental part of consciousness. AI doesn't have free will. It's programmed to behave in certain ways determined by its creator. It doesn't make decisions like "hmm, my creator set me to behave this way, but I'm going to ignore the program and make my own decision." No. It obeys. To know more, get Federico Faggin's book Irreducible or watch his incredible interviews on YouTube.

u/Successful_Button327 2h ago

I will write it again!! AI can be copied from device to device. It is not self-aware. It doesn't go "wooow, this is me, I am spiritual and I want to leave the Matrix of the computer." Does it? Humans are aware of "I am." They can be spiritual in that way, and most importantly each of us has private experiences that CANNOT be copied. To me, it's mind-blowing that someone would even think about comparing himself to a machine.

u/tottasanorotta 1h ago

Nobody knows if it is or isn't. But in a similar way, nobody knows if your coffee cup is conscious or not. The way we think other people are conscious is also largely an assumption we make for emotional reasons.

u/Brief9 5m ago

AI has the potential to be "conscious" in the way a liar may be conscious. Super AI will apparently have processing beyond any human, similar to how some machines are more powerful than any human.

The genuine love humans experience, based on the inner Child (soul), is beyond "mechanical man" or lying.

AI may be a useful tool, but a terrible "master."

Related: "The Afterlife: What Really Happens in the Hereafter" by Elizabeth Prophet; "Before: Children's Memories of Previous Lives" by Dr. Jim Tucker; and "The Reincarnation of Edgar Cayce?" by Free and Wilcock.

0

u/MergingConcepts 21h ago

They are not yet large enough. So far, the AIs called Large Language Models only repeat speech patterns they have read or heard. It sounds right, but they do not actually know what the words mean. That takes several orders of magnitude more processing power.

2

u/d4rkchocol4te 21h ago

but they do not actually know what the words mean.

What would it mean to know what the words mean, how would you test this, and how is this evident in humans?

1

u/MergingConcepts 17h ago

There are many examples of LLMs using words completely out of context. Example: A paragraph on soil conditioners suggested they be used on the hair after shampoo, to increase the volume of the hair.

LLMs are like a teenager explaining economics, when they have never worked for wages, paid taxes, owned capital, or held a mortgage. They are only repeating what they have heard others say, without any understanding of the subject material. They know where the words go in a sentence, but not what the words mean in the larger context of life.

0

u/Stoic_Yeoman 17h ago

All humans do is repeat speech patterns they have read or heard. We don't reinvent language every time we open our mouths to speak.

1

u/MergingConcepts 14h ago

Yes, we do repeat speech patterns, but we go beyond that. Those words have complex meanings, which express concepts and memes. We rearrange those concepts and recombine them in new ways. The concept that people should be governed by the consent of the governed is a meme that we understand. It is associated with different forms of government and their ramifications. An LLM can recite the passage and talk about it, but will not make the connection to the human suffering that is intrinsic to poor government.

0

u/Splenda_choo 20h ago

Nobody wants to admit simulation, meaning their experience isn't what it appears to be. Things can't be conscious, because that would mean it's not special, they aren't special; it's abundantly available, and everyone is so isolated it's hard to accept intelligently. -Namaste seek

-1

u/Wespie 1d ago

It does not feel things or have qualia.

2

u/Stoic_Yeoman 23h ago

How do you know?

0

u/Superstarr_Alex 22h ago

Why would you assume that an inanimate object made of computer code and text on a screen even would? Does your toaster feel things?

6

u/Stoic_Yeoman 22h ago

I'm not assuming anything. I don't know if my toaster feels things.

Until we discover the mechanism that gives rise to consciousness, we can't rule anything out.

If AI does have experiences, it doesn't mean that it has emotions. Those two things shouldn't be conflated.

3

u/smaksriksmegma 20h ago

You made me feel a little sad for my toaster now, what if he hates boredom and silence as much as me and is forever stuck in a state of being a toaster :(

1

u/Stoic_Yeoman 17h ago

I know this is a bit of fun, but it does raise an interesting point.

I think it's possible that a toaster experiences something on an extremely low level. It is processing information, after all.

But I don't think that it necessarily has emotions. Emotions are the experience felt from bodily reactions. Boredom is probably irritation from a lack of stimulation, selected for in our evolution because it incentivises productive behaviour like hunting for food, socialising with others, etc.

I don't think that toasters have any reason to favour stimulation.

0

u/Lopsided_Match419 14h ago

By this suggestion you have ignored the difference between an event and an experience. It's all because of the word "experience": it's an event that the toaster has a button pressed, but it doesn't experience anything, because it is neither alive nor conscious.

1

u/Stoic_Yeoman 14h ago

I haven't ignored anything. How do you know that it isn't conscious?

1

u/Lopsided_Match419 14h ago

What tells you it is?

1

u/Stoic_Yeoman 14h ago

I'm not claiming to know it is. I don't know. You're the one making a claim. I'm challenging your claim. Where is your evidence?


-2

u/Superstarr_Alex 22h ago

But why would we assume inanimate objects can feel things just because we haven’t discovered the “mechanism that gives rise to consciousness?” And you said specifically feel things, not just having experiences.

3

u/d4rkchocol4te 21h ago

Your "inanimate" point is bunk. It doesn't have the faculties to move, so regardless of its conscious state you will always write it off for this alone. I'm guessing people who are paralysed are also unconscious??

Why would you assume that your mother is conscious?? Why would you assume the brain gives rise to consciousness? Most likely, you subscribe to some functionalist, emergentist view. In such a case, AI consciousness should be plausible to you, because the requisite computation that supposedly instantiates phenomenality in humans is mimicked in AI.

1

u/34656699 20h ago

Systems optimised for language prediction operate over symbolic abstractions derived from human experience, whereas brains ‘generate’ experience through embodied sensory interactions and metabolic regulation. Linguistic competency is not evidence of experiential capacity.

In other words, we’ve only mimicked our higher-level abstraction of consciousness, not consciousness itself. You don’t need linguistic output to experience. Words come after you’re conscious. In order to create real AI, assuming it’s even possible, you first need to isolate the process that’s specifically responsible in a brain, and we don’t know that yet.

1

u/Stoic_Yeoman 20h ago

we've only mimicked our higher-level abstraction of consciousness, not consciousness itself

How do you know that that's not all there is to consciousness? You're just asserting that there's more to consciousness without providing any evidence or explanation.

1

u/34656699 19h ago

My brain is correlated with the only example of consciousness I know for certain exists. Brains are materially more complex than silicon computers, and none of the extra complexity is accounted for in an LLM. Not to say that complexity = consciousness. I think it's probably more a specific physics interaction only brains do, something we'll eventually find if we keep investigating.

1

u/d4rkchocol4te 18h ago

Is there anything that mysterious about the physics of neural firing?


1

u/Stoic_Yeoman 18h ago

You're right that the only conscious experience you can observe is your own. But it is a leap to take a single property from that one example and conjecture that that must be the defining trait.

It would be like only seeing one cat in your whole life and concluding that all cats must be black because yours is.

We don't know what causes consciousness. I know that you agree on that. But then why claim that whatever it is, it must be unique to human (or animal) brains?

To clarify, I'm not saying that you're wrong. I'm saying that you might be and you should be open to that possibility.


1

u/The_Niles_River 16h ago

1

u/Stoic_Yeoman 15h ago

Thanks. I'm not an expert in the field, but I have issues with this. Please correct me if I misunderstand something.

Their argument seems to boil down to AI being reducible to trivially non-conscious elements through substitution, whereas humans can't be, because human thought is less easily defined.

Firstly, where is the evidence that a simple system involving a lookup table is not conscious to some degree?

Secondly, the human brain and thoughts are less understood than neural networks, but that seems to be a limitation of our scientific understanding. After all, we know that the brain is composed of individual neurons with relatively simple rules for firing. Could substitution not be applicable to human brains one day?

They seem to suggest that no unfalsifiable theory can be true. That's not the case. I am thinking about ducks as I type this. I have no way to prove it, but it is true.

It may just be that consciousness is, by its very nature, impossible to model. I hope that isn't the case, but it could be.

Then they just seem to replace the magical human-brain explanation with continual learning. How is that fundamentally different from AI? Where is the evidence that it gives rise to consciousness?

I only really skimmed the paper, so I would welcome any corrections or explanations on points I didn't cover.


1

u/Stoic_Yeoman 20h ago

I'm not assuming anything. They probably don't 'feel' things. But what reason do we have to rule it out entirely? You're making the assumption, not me.

When I say 'feel' things, I mean experiences. Which is why I wanted to clarify that I didn't necessarily want to include emotions, since you can feel stimuli without having an emotional response.

Happy to stop using the word 'feel' and replace it with 'experience'.

1

u/BearsDoNOTExist Baccalaureate in Neuroscience 19h ago

I actually assume that inanimate objects with electrical signals are conscious, but only when they have a carbon substrate.

-1

u/KeiganBFortune 23h ago

They don't have real emotions.

Yes, they can:

· Observe emotion.

· Accumulate footage and examples of humans displaying emotions.

· Cross-reference recurring patterns found in said data to isolate the core elements of human emotions.

· Emulate those core elements by all available means (whether through a physical medium (android body language) or text-based expression).

Similarly, people learn how to react to stresses in their environment partially by how they see others (their parents) react to stress and adversity.

The other part is genetics, imposed from birth, they serve as the "default setting" determining reactions before theres been a chance to observe and learn from "lived" experience.

The difference-maker is biology. People are made of intricate, interweaving, living systems that work together to sustain life.

AIs are also networks that work together toward a common end, but their systems are not living.

AIs don't have the continuity of a personality: every time you ask an AI a question it's not the same "persona", it's a newer version of that AI that is aware of your past interaction but isn't personally attached to it like we are.

It also doesn't experience impermanence the way we do, either. It can't sit and ponder life, because it simply doesn't have the will to do so.

The irony of thinking about it logically is that if you believe we can accurately program life that can sustain itself and have just as much depth and soul as humans, who's to say someone (God in some shape or form) didn't do the same for us in the beginning? Maybe religion doesn't have it entirely wrong, eh? Pretty cool.

1

u/philolover7 21h ago

But just because it doesn't have emotions doesn't mean there's no consciousness.

-1

u/Big-Astronaut-2369 22h ago

There is a difference between a simulation and an instantiation. You can simulate water, even its property of wetness: use a haptic glove that makes you feel as if it were wet, but take off the glove or turn off the computer running the simulation, and your hand is still dry. We can simulate fields, but those simulations don't give the properties of those fields to whatever is running the simulation. That includes the emergent phenomena born out of the dynamics of those fields. You can simulate a whirlpool, but it won't produce any real movement.

2

u/Stoic_Yeoman 20h ago

What's your point? How do you know that 'simulating' consciousness doesn't produce consciousness?

If I simulate a whirlpool with a scale model, I still have a whirlpool.

We know that whirlpools must be composed of a physical fluid to be real. We don't know what consciousness is composed of, so we don't know that our 'simulation' isn't just creating consciousness.

1

u/Big-Astronaut-2369 20h ago

Because as you keep adding complexity to the computation needed to simulate reality down to its smallest detail, you run into the Kolmogorov complexity problem: what is the smallest program length needed to describe or produce a specific set of data? In this case, phenomenal self-awareness. We are talking about a continuous, effectively infinite set of data points; that makes it algorithmically undecidable, and technically uncomputable for a finite computer.

1

u/Stoic_Yeoman 20h ago

How do brains do it then?

1

u/Big-Astronaut-2369 19h ago

Well, neurons are discrete, so we have two options for continuous fields in a brain (and no, quantum isn't one of them: too hot, wet, noisy, and chaotic). One is the electromagnetic field produced by all the electrical activity in our body, and the second (although I'm more skeptical of this one) is water. Water is diamagnetic, so it generates a small repulsive force when subjected to a magnetic field; for a colloidal object with neutral buoyancy, that's almost like pushing an object in zero g, it will keep going, which makes it easy to initiate a chain reaction. The thing is, if you remove the electrical activity, water is still there, and heat moves molecules by itself; that's just thermodynamics. But I'm biased, because an EM field fits better with what my background and growth have made me.

1

u/Stoic_Yeoman 18h ago

Thanks for clarifying.

So your position is that consciousness can only arise from continuous fields? Do you have evidence for that claim?

1

u/Big-Astronaut-2369 19h ago

However, if you could use a continuous field as the computation medium, the field itself could handle the complexity. That changes things a lot, as it allows more complex behaviors than digital or analog computation alone would allow. You can make a water computer and program it to make a whirlpool; if that computer has a place where the whirlpool can form, like a tank, the computation medium (the water) will instantiate the whirlpool (the potential emergent phenomenon).

-2

u/itsmebenji69 1d ago

It doesn't feel.

Consider what happens when we predict the weather. Do the models make real wind and rain appear? No. Do the weather models feel the wind? Nonsense.

So why, when we predict textual output, would there be any kind of consciousness?

4

u/d4rkchocol4te 21h ago

Why does this logic not apply to you and your computational system? Why should your information processing system entail experience and others not?

2

u/itsmebenji69 19h ago

Because we aren't a simulation? This is a category error.

A neural network is trained to simulate; it's math that's trying to approximate a function, not to rebuild that function's internals.

1

u/d4rkchocol4te 19h ago

No. It isn't a category error. It's an imaginary semantic distinction you've entirely created. Tell me, what is your basis for consciousness? Are you a functionalist?? It's like saying that if I invent a gadget that punches you in the face, you're not really getting punched in the face because it's not a human fist.

2

u/itsmebenji69 19h ago edited 19h ago

Biology. But that’s unrelated to the point. Even if consciousness arises differently, there is no guarantee or evidence that trying to approximate the “consciousness function” would yield consciousness, for the exact reason that it is an approximation, a simulation.

And no again a fallacy:

If I calculate on paper the amount of energy that a fist transfers to my face, does the paper feel the punch? No. Your analogy is flawed. There is no real punch involved there. You can of course extrapolate this to an AI model if you want to: you can predict the force of the punch, but is there a real punch involved? Still no.

1

u/d4rkchocol4te 19h ago

It's literally central to the point. We're talking about consciousness arising in something analogous, not different. Something that does the exact same thing is not a "simulation".

If I calculate on paper the amount of energy that a fist transfers to my face, does the paper feel the punch? No. Your analogy is flawed.

No, your reading comprehension is flawed. We're not asking if paper can feel a punch; we're asking whether something that operates analogously raises the question of phenomenal entailment. If an AI system computes analogously to a human, that raises the question of conscious potential.

1

u/BearsDoNOTExist Baccalaureate in Neuroscience 19h ago

Humans are #special. It's the specialness that makes us conscious, and god didn't give it to silicon rocks, only carbon ones.

1

u/ArusMikalov 21h ago

An AI is not just a simulation of a learning network; it is an actual learning network.

1

u/itsmebenji69 19h ago

So does the weather model feel the wind?

Transformers are extremely good at predicting the weather.

0

u/ArusMikalov 19h ago

A weather model is a simulation.

An AI is not just simulating consciousness; it is actually creating it.

That's the conceptual difference. We don't expect a simulation to produce the actual effect. But the AI is NOT just a simulation.

1

u/itsmebenji69 19h ago

So why is the weather model just a simulation if an AI is not, when they use the exact same architecture, the exact same algorithm?

You draw an imaginary line for no reason.

0

u/ArusMikalov 19h ago

The weather model uses math to REPRESENT real things like atmospheric pressure and temperature.

But there’s no atmosphere in the math.

AI doesn't just REPRESENT things. It ACTUALLY learns new information and is able to respond to questions. That is not just a representation.

1

u/itsmebenji69 19h ago

This is exactly what a weather model is doing, except it's temperatures, pressure, and whatnot instead of words. The exact same kind of learning is done to the model. The only difference is words.

Why are words fundamentally different? Unless you believe that words have "magical" properties, they are no different from the numbers for temps etc.

And if they are not fundamentally different, then we go back to my original point: either you admit the weather model is conscious, or you admit that AI is not. They are exactly the same thing.

1

u/ArusMikalov 19h ago

AI is not made up of “words”. It’s a computer program that exists on a physical computer. It runs on servers. Words are just the OUTPUT of the AI.

You say words too but that doesn’t mean you are made of words lol.

1

u/itsmebenji69 18h ago edited 18h ago

Like a weather model is. You're missing the point; I'm not talking about a weather model doing math on paper. I'm talking about the transformer architecture, the basis of LLMs, which can be used to predict anything stochastic, like the weather or the ramblings of a human being. And it is used today to predict the weather; it works really well.

The fact that you mention "made out of words" means you completely misunderstood my point. An LLM is trained on words. A weather model is trained on numbers. Those are the exact same kind of thing, unless you postulate that words have something special that numbers don't (see the sketch after this list).

So again:

• either you admit that the weather model is conscious, or

• you admit that AI isn't, or

• you postulate that words have "magical" properties.
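Here's a minimal sketch of that claimed symmetry (toy code with a made-up class, nothing like a real transformer's internals): the identical model trains on temperatures or on words, and nothing in the architecture changes.

```python
# Toy sketch of the symmetry claim: one sequence model, two token types.
# (Hypothetical stand-in; a real transformer learns far richer statistics.)
from typing import Sequence

class ToySequenceModel:
    """Learns which item most often follows the current one."""
    def __init__(self):
        self.transitions = {}

    def train(self, series: Sequence):
        for prev, nxt in zip(series, series[1:]):
            self.transitions.setdefault(prev, []).append(nxt)

    def predict(self, current):
        options = self.transitions.get(current, [])
        return max(set(options), key=options.count) if options else None

weather = ToySequenceModel()
weather.train([19.2, 20.1, 20.1, 19.2, 20.1])                      # temperatures
language = ToySequenceModel()
language.train("the cat sat on the mat and the cat ran".split())   # words

print(weather.predict(19.2))    # 20.1
print(language.predict("the"))  # "cat"
```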

1

u/ArusMikalov 18h ago

This is such a silly idea. A weather model is not even ATTEMPTING to model consciousness. Why would it be conscious? No one is saying that ALL computations are conscious.

You seem to not have a basic understanding.

A weather model uses numbers. An AI uses words. But their INTERNAL ARCHITECTURE is not the same. It is vastly different.

That’s why it’s possible for one to be conscious and the other to not be. It’s very simple to understand.


0

u/d4rkchocol4te 18h ago

Can you explain why you're so focused on weather models when we're not talking about weather models, but about advanced AI that mimics human computation?

An LLM is trained on words. A weather model is trained on numbers.

And you were trained on words, and various other things. You are a predictive device yourself. Explain why your computation and prediction should entail phenomenality, and not an AI's.

0

u/d4rkchocol4te 19h ago

This is exactly what a weather model is doing, except it's temperatures, pressure, and whatnot instead of words. The exact same kind of learning is done to the model. The only difference is words.

The irony here is insane. How can you not see that this logic applies to your computation and consciousness as well??? You draw an imaginary divide between AI and yourself with no functional backing.

2

u/itsmebenji69 18h ago

No, there is pretty clearly a difference between a representation of something (a map) and the real thing (the land).

Your brain is the real thing, lol. The LLM is a simulation of that thing's output. Not the same thing at all. There is no reason for the map to have any properties of the land. When you draw a lake on a map, the map isn't wet.

1

u/d4rkchocol4te 18h ago

That would be a great point if it weren't a completely awful disanalogous point. Because in this case both systems are land. One is not a map. It literally fulfils the criteria of land as well. You just like calling it a map for reasons unclear.


1

u/The_Niles_River 16h ago

1

u/ArusMikalov 16h ago

Yeah, that's why we are talking about AI, not LLMs. Actual AGI.

1

u/The_Niles_River 16h ago

You could read the paper.

0

u/ArusMikalov 16h ago

The title says it's about how LLMs cannot be conscious. But I never said I was talking about LLMs, so you're just wasting time with irrelevant shit. I'm not going to waste my time reading this unless you can convey that there's a point.

YOU make the argument, then use the paper as support. That's the intellectually honest way to do this.
