What is your brain doing if not statistical text prediction?
The show Westworld portrayed it pretty well. The idea of jumping from text prediction to consciousness doesn’t seem that unlikely. It’s basically text prediction running in a loop, with some external inputs to interact with.
How to tell me you’re stuck in your head terminally online without telling me you’re stuck in your head terminally online.
But have something more to read.
Why are you being so rude?
Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion to use as a weapon against a person you don’t even know?
I will actually read it. I’m probably the only one of the two of us who will.
If it’s convincing I may change my mind. I’m not a radical like many other people are, and my opinions are subject to change.
It’s funny to me how defensive you got so quickly, accusing me of not reading the linked paper before even reading it yourself.
The reason OP was so rude is that your very premise of “what is the brain doing if not statistical text prediction” is completely wrong, and you don’t even consider that it could be. You cite a TV show as a source of how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.
The brain uses words to describe thoughts; the words are not actually the thoughts themselves.
https://advances.massgeneral.org/neuro/journal.aspx?id=1096
Think about small children who haven’t learned language yet: do those brains still do “statistical text prediction” despite not having words to predict?
What about dogs and cats and other “less intelligent” creatures? They don’t use any words, but we can still teach them to understand ideas. You don’t need to utter a single word, not even a sound, to train a dog to sit. Are they doing “statistical text prediction”?
So AGI is statistical emotion prediction that we then assign logic to?
Read the other replies I gave on the same subject. I don’t want to repeat myself.
But words DO define thoughts, and I gave several examples, some of them with kids. Precisely in kids you can see how language precedes actual thought. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don’t understand, simply because their beautifully plastic brains heard the same phrase in the same context.
Dogs and cats are not proven to be conscious the way a human being is, precisely due to the lack of an articulate language, or maybe not just language but articulated thought. I think there may be a trend to humanize animals, mostly to give them more rights (even though I think a dog doesn’t need to have an intelligent consciousness for it to be bad to hit a dog), but I’m highly doubtful that dogs could develop a chain of thoughts that affects itself without external inputs, and that seems a pretty important part of the experience of consciousness.
The article you link is highly irrelevant (did you read it? Because I am also accusing you of not reading it, of it just being the result of a quick google to try to prove your point with an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously. I don’t even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain does not also take shape from the words that it learns.
To give an easier-to-understand example: for a classical printing press to print books, the words of those books needed to be loaded into the press beforehand, and the press will only ever be able to print the letters that have been loaded into it.
The user I replied to had not only read the article, they kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and have actually put me on the way to being convinced of that, at least as far as the limitations discussed in the article go.
Your analogy to mechanical systems is exactly where the comparison with the human brain breaks down. Our brains are not like that: we don’t only have blocks of text loaded into us. Sure, we only learn what we get exposed to, but that doesn’t mean we can’t think of things we haven’t learned about.
The article I linked talks about the separation between the formation of thoughts and those thoughts being translated into words for speech.
The fact that you “don’t even know why how the brain creates an articulated spoken word is even relevant here” speaks volumes about how much you understand the human brain, particularly in the context of artificial intelligence actually understanding the words it generates, and the implications of there being thoughts behind the words rather than just guessing which word comes next based on other words whose meanings are irrelevant.
I can listen to a song long enough to learn the words; that doesn’t mean I know what the song is about.
but that doesn’t mean we can’t think of things we haven’t learned about.
Can you think of a colour you have never seen? Could you imagine the colour green if you had never seen it?
The creative process is more modification than creation: taking some inputs, mixing them with other inputs, and producing an output that has parts of all those inputs. Does that sound familiar? But without those inputs it seems impossible to create an output.
And thus the importance of language in an actual intelligent consciousness. Without language the brain could only do direct modifications of the natural inputs, of external inputs. But with language the brain can take an external input, transform it into a “language output”, immediately take that “language output” back in as an input, process it, and go on. I think that’s the core concept that makes humans different from any other species: this middle thing that we can use to dialogue with ourselves and push our minds further. Not every human may have a constant inner monologue, but every human is capable of talking to themselves, and will probably do so when making a decision. Without language (which could take many forms, not just spoken language, though the more complex it is, the better it feels like it would work) I don’t know how this self-influence process could take place.
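To make that loop concrete, here is a minimal Python sketch of the idea as described above, not of any real system: a made-up `generate_next` stands in for a text predictor, its output is appended to the context and immediately read back as input, and external inputs get mixed in now and then.

```python
import random

def generate_next(context: str) -> str:
    """Stand-in for a text predictor (an LLM, say). This dummy just picks a
    canned continuation; a real system would predict it from the context."""
    canned = ["...so what follows from that?",
              "...which reminds me of something else I know.",
              "...therefore I should reconsider my plan."]
    return random.choice(canned)

def inner_dialogue(seed: str, external_inputs: list[str], steps: int = 6) -> str:
    """The loop described above: each generated fragment is appended to the
    context and immediately read back as input; external inputs interrupt occasionally."""
    context = seed
    for step in range(steps):
        if external_inputs and step % 3 == 0:   # now and then the outside world interjects
            context += " [external] " + external_inputs.pop(0)
        thought = generate_next(context)        # language output...
        context += " " + thought                # ...immediately becomes part of the next input
    return context

print(inner_dialogue("I saw a red light.", ["a car honks", "the light turns green"]))
```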
It’s a basic argument about generative complexity. I found the article some years ago while trying to find an earlier one (I don’t think by the same author) that argued along the same complexity lines, essentially saying that if we worked like AI folks think we do, we’d need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching while generating (we don’t have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the “adjust learning rate” level, but mechanisms that change the resulting coding, thereby creating different such contexts, or at least that’s where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.
As to “rudeness”: make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to goes on a denial spree… because if they a) are actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including “my retirement savings are in AI stock”), they c) would’ve come across the general argument themselves during their technological research. Or come up with it themselves; I’ve also seen examples of that: if you have a good intuition about complexity (and many programmers do), it’s not an unlikely shower thought to have. Not as fleshed out as in the article, of course.
That seems a very reasonable argument for the impossibility of achieving AGI with current models…
The first concept I was already kind of thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass, which leads to the same answer: with the current architecture they would probably need trillions of parameters just to stop hallucinating. And that is without giving them the ability to do more things than just answering questions. A supposed AGI, even if it only worked with words, would need to be able to handle more “types of conversations” than just being the answerer in a question-answer dialogue.
But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for different tasks on the fly, if I have understood correctly. And it seems pretty clear that current models are unable to do that.
Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.
Getting a little poetical. I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.
Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.
Does a dog have the Buddha nature?
…meaning to say: just because you happen to have the habit of identifying your consciousness with language (that’s TBH where the “stuck in your head” thing came from) doesn’t mean that language is necessary for, or even a component of, consciousness, rather than merely an object of consciousness. And neither is consciousness necessary to do many things; e.g. I’m perfectly able to stop at a pedestrian light while lost in thought.
I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.
What Descartes actually was getting at is “I can’t doubt that I doubt, therefore, at least my doubt exists”. He had a bit of an existential crisis. Unsolicited Advice has a video about it.
It may be because of the habit.
But when I think of how to define consciousness and separate it from instinct or reactiveness (like stopping at a red light), I think that something that makes a consciousness a consciousness must be that it is able to modify itself without external influence.
A dog may be able to fully react to, and learn how to react to, the outside world. But can it modify itself the way a human brain can?
A human being can sit alone in a room, start processing information by itself in a loop, and completely change that flow of information into something different, even changing the brain in the process.
For this to happen I think some form of language, some form of “speaking to yourself”, is needed: some way for the brain to generate an output that can immediately be taken back as input.
At this point, of course, this is far more philosophical than technical, and maybe even a matter of the semantics of “what is a consciousness”.
A dog may be able to fully react to, and learn how to react to, the outside world. But can it modify itself the way a human brain can?
As per current psychology’s view, yes, even if to a smaller extent. There are problems with how we define consciousness, and right now with LLMs most of the arguments are usually related to the Chinese room and philosophical zombie thought experiments, imo.
Um, something wrong with your brain, buddy? Because that’s definitely not at all how mine works.
Then why did you just express yourself in such a statistical-prediction manner?
You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose the set of words it has seen the most in this particular context. Literally, ChatGPT could have given me your exact same answer if it had been trained on your same echo chamber.
Have you ever debated someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Those are very simple cases, and there can be more complex ones, but our simplest behaviours are the ones that show the basics of what we are made of.
If you had at least given me a more complex expression you might have had an argument (as humans our process can be far more complex and hide a little of what we actually seem to be doing). But instances like this one, where a person (you) responds with such an obvious statistical prediction of what needs to be said in a particular context, just make my case. Thanks.
But people who agree with my political ideology are considerate and intelligent. People who disagree with me are stupider than ChatGPT 3.5, just say the same shit, and can’t be reasoned with.
Human brains also process audio, video, self-learning, feelings, and many other things that are definitely not statistical text. There are even people without an “inner monologue” who function just fine.
Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn’t going to work.
Yep, of course. We do more things.
But language is a big thing in human intelligence and consciousness.
I don’t know, and I would assume that nobody really knows. But with people without an internal monologue, I have a feeling that they do have one but are not aware of it. Or maybe they talk so much that all the monologue is external.
Interesting that you focus on language, because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point.
The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.
The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
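As a toy illustration of what the quoted passage means by finding predictive patterns in streams of symbols detached from their referents, here is a minimal bigram-counting sketch in Python (nothing like a production LLM): it picks the next word purely from co-occurrence counts and has no representation whatsoever of cats, mats, or sitting.

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" only ever sees these surface tokens, never their referents.
corpus = "the cat sat on the mat and the cat sat on the bed".split()

# Count which token tends to follow which (pure form, no meaning).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the word that follows 'the' most often here
print(predict_next("sat"))  # 'on'
```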
One of the most successful applications of LLMs might actually be quite enlightening in that respect: language translation. B2 level seems to be little issue for LLMs, large cracks can be seen at C1, and forget everything about C2, the things that require cultural context. Another area where they break down is spotting the need to reformulate, which is actually a B-level skill. Source: open a random page on deepl.com that’s not in English.
Like, this:
Durch weniger Zeitaufwand beim Übersetzen und Lektorieren können Wissensarbeitende ihre Produktivität steigern, sodass sich Teams besser auf andere wichtige Aufgaben konzentrieren können. (Roughly: “Because less time is spent on translating and editing, knowledge workers can increase their productivity, so that teams can better focus on other important tasks.”)
“because less time required” cannot be a cause in idiomatic German, you’d say “by faster translating”. “knowledge workers”… why are we doing job descriptions, on top of that an abstract category? Someone is a translator when they translate things, not when that’s their job description. How about plain and simple “employees” or “workers”. Then, “knowledge workers can increase their productivity”? That’s an S-tier Americanism, why should knowledge workers care? Why bring people into it in the first place? In German thought work becoming easier is the sales pitch, not how much more employees can self-identify as a well-lubricated cog. “so that teams can better focus on other important tasks”? Why only teams? “improvements don’t apply if you’re working on your own?” The fuck have teams to do with anything you’re saying, American PR guy who wrote this?
…I’ll believe that deepl understands stuff once I can’t tell, at a fucking glance, that the original was written in English, in particular, US English.
But these “concepts” of things are built on the relation and interaction of those concepts with our brain.
A baby isn’t born knowing that a table is a table. But they see a table, their parents say the word “table”, and they end up imprinting that what they have to say when they see that thing is the word “table”, which they can then relate to other things they know. I’ve watched some kids grow and learn how to talk lately, and it’s pretty evident how repetition precedes understanding. Many kids will just repeat words that their parents said in a certain situation whenever they happen to be in the same situation. It’s pretty obvious with small kids, but it’s a behaviour you can also see a lot in adults: just repeating something they heard once they see that those particular words fit the context.
Also, it’s interesting that language can actually influence the way concepts are constructed in the brain. For instance, the ancient Greeks saw blue and green as the same colour, because they only had one word for both colours.
I’m not sure if you’re disagreeing with the essay or not? But in any case what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually being able to learn the concept and context of ‘table’ they will never be able to use it correctly 100% of the time, or, even more importantly for AGI, apply reason and critical thinking. Much like a child repeating a word without much clue what it actually means.
Just for fun, this is what Gemini has to say:
Here’s a breakdown of why this “parrot-like” behavior hinders true AI:
Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data. However, this doesn’t translate to understanding the underlying meaning or implications of those words.
Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
Inability for Reasoning and Critical Thinking: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
I mostly agree with it. What I’m saying is that the understanding of the words comes from the self-dialogue made of those same words. How many times does a baby have to repeat the word “mom” before they understand what a mother is? I think that without that previous repetition the more complex understanding is impossible; that the human understanding of concepts, especially the more complex concepts that make us human, comes from us being able to have a dialogue with ourselves and with other humans.
But this dialogue starts out as a Parrot: non-intelligent animals with brains very similar to ours are parrots. Small children are parrots (as are even some adults). But it seems that after being a Parrot for some time, the ability to become a Human arrives. That Parrot is needed, and it also stays in our consciousness: if you don’t put a lot of effort into your thoughts and words, you’ll see that the Parrot is there, that you just express the most appropriate answer for the situation given what you know.
The “understanding” of concepts seems like just a big, complex interconnection of neural-network-like outputs of different things (words, images, smells, sounds…). But language keeps feeling like the most important of those things for intelligent consciousness.
I have yet to read the other article that another user posted, which explains why the jump from Parrot to Human is impossible in current AI architectures, but at a glance it seems valid. That does not invalidate the idea of Parrots being the genesis of Humans, though; it just means a different architecture is needed, and not in the statistical-answer department: the article I was linked was more about the size and topology of the “brain”.
A baby doesn’t learn concepts by repeating words over and over, and certainly knows what a mother is before it has any label or language to articulate the concept. The label gets associated with the concept later, and not purely by parroting; indeed, excessive parroting normally indicates speech development issues.
Many babies start saying “mama” and “papa” at barely 6 months.
Do you really and actually think that a 6-12 month old infant has a concept in their mind of what a mother is, or of what kind of relationship there is between them and their mother? Do they know what the reproductive process is? Do they also know the family relationship with their great-aunt by marriage, or does that come casually at 15 months? Object recognition, and even recognition of other beings, is one thing; consciousness is something VERY different. Many animals do recognize other beings (this I like, this I don’t like), but understanding what another being is… only humans. And not right as they are born, obviously.
There are plenty of studies about why “mama” and “papa” are the most common first words: they are the easiest to pronounce. It’s not that the baby thinks “Oh, I must require the attention of my mother, I’d better call her right now, but I can’t quite remember her actual name, better call her mama”. No, no. They are just making the sound that’s easiest for them, and they get a positive reaction out of that sound. Most of the time, the being that is closest to you and to whom you feel attached is also making that sound, so you repeat it, get a positive reaction, and keep repeating the easy sound. It’s only later that they figure out that the sound they are making actually refers to another being. And at the beginning it’s just a sound of recognition, which is not a sign of intelligence; some animals can make sounds of recognition. Excessive parroting would obviously mean issues: as I said, parroting is the first stage towards human consciousness, and if they are stuck there, there’s obviously a problem. But without any parroting, then your baby does indeed have a big issue.
Only when there is a developed chain of thoughts in some kind of language does the human start really thinking, start having what I call a consciousness (the ability to talk to yourself to modify your own behaviour). How would a being be able to talk to itself to heavily modify some sensory experience, or to modify its own behaviour, if not with speech of some sort?
I think we can see this with one observation. Human beings are distinct from the rest of the animals because we have this ability (I’m going on the assumption that you think humans are the same as, or really close to, the rest of the animals). But an infant is not that different in behaviour from an animal, and it’s only later that they show this fundamental difference. So I think it’s safe to assume that this difference does not appear at conception or at birth, but some time after birth: it starts developing until it is ready.
There are also plenty of studies on developmental issues in deaf children ( https://www.deafchildrenaustralia.org.au/wp-content/uploads/2021/06/language-development-deaf-children.pdf ). It has been studied that deafness in children greatly impairs development, and that other means of introducing a language to them are fundamental for their development. If language were not fundamental to the development of the human experience, deaf children would not have problems; as you stated, they would “naturally” learn concepts before being introduced to the language to express those concepts. But this is proven false, and deaf children actually have severe issues learning and understanding concepts at these early stages. And the remedy, of course, is to introduce language to them in ways other than talking. That’s why this issue does not show up in deaf children born to deaf parents, as those parents are able to introduce language to their kids in ways other than spoken speech.
language is a big thing in human intelligence and consciousness.
But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weighting of other tokens related to “tab” & “le”.
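As a small aside, here is a toy sketch of what “numbers that represent tokens” means; the vocabulary, the IDs, and the split of “table” into “tab” + “le” are made up for illustration (real tokenizers learn their own splits, and some keep “table” as a single token).

```python
# Hypothetical toy vocabulary: the model never operates on the word "table" itself,
# only on integer IDs for whatever sub-word pieces its tokenizer produced.
vocab = {"the": 3, "tab": 17, "le": 42}

def encode(pieces: list[str]) -> list[int]:
    """Map sub-word pieces to the integer IDs the network actually works with."""
    return [vocab[p] for p in pieces]

print(encode(["the", "tab", "le"]))  # [3, 17, 42] -- everything downstream is weights over these numbers
```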
I don’t know how to tell you this. But your brain does not have words imprinted in it…
The concept of this is, funnily enough, something that is being studied and that derives from language. For instance, the ancient Greeks did not distinguish between green and blue, as both colours had the same word.
You said
You also said
You need to pick an argument and stick to it.
What do you not understand?
Words are not imprinted; they are a series of electrical impulses that we learn over time. That was in reference to the complaint that an LLM does not have words, but tokens that represent values within the network.
And those impulses, and how we generate them while we think, are of great importance to our consciousness.
So, are words important to the human brain or not? You are not consistent.
Sorry, I don’t know how to make my answer any simpler or clearer. Sorry you did not understand what I wrote.
ok buddy
It’s “free will”. They chose to say what they wanted.
At least this is what the old religions teach. I don’t know what AI preachers you’re learning this nonsense from.
Church?
Free will vs determinism doesn’t have to do with religion.
I do think that the universe is deterministic and that humans (or any other beings) do not have free will per se, in the sense that, given the same state of the universe at some point, the next states are determined, and if it were repeated, the evolution of the state of the universe would be the same.
Nothing to do with religion, just with things not happening out of nothing: every action is a consequence of another action, and that includes all our brain impulses. I don’t think there are “souls” outside the state of matter that could make decisions by themselves without being determined.
But this is mostly a philosophical question of what “free will” means. Is it free will as long as you don’t know that the decision was already made from the very beginning?