  • I mostly agree with it. What I’m saying is that the understanding of the words comes from the self-dialogue made of those same words. How many times does a baby have to repeat the word “mom” until they understand what a mother is? I think that without that previous repetition the more complex “understanding” is impossible. Human understanding of concepts, especially the more complex concepts that make us human, comes from our being able to have a dialogue with ourselves and with other humans. But this dialogue starts as a Parrot: non-intelligent animals with brains very similar to ours are parrots. Small children are parrots (so are even some adults). But it seems that after being a Parrot for some time comes the ability to become a Human. That parrot is needed, and it also keeps itself in our consciousness. If you don’t put a lot of effort into your thoughts and sayings, you’ll see that the Parrot is there, that you just express the most appropriate answer for that situation given what you know.

    The “understanding” of concepts seems just like a complex and big interconnection of Neural-Network-like outputs of different things (words, images, smells, sounds…). But language keeps feeling like the most important of those things for intelligent consciousness.

    I have yet to read an article that another user posted, which explained why the jump from Parrot to Human is impossible in the current AI architecture. But at a glance it seems valid. That does not invalidate the idea of Parrots being the genesis of Humans, though; just that a different architecture is needed, and not in the statistical-answer department: the article I was linked to was more about the size and topology of the “brain”.


  • Then why did you just express yourself in a statistical-prediction manner?

    You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose to use the set of words it has seen the most in this particular context. Literally, ChatGPT could have given me your exact same answer if it had been trained in your same echo chamber.

    Have you ever debated someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Very simple predictions, those; there can be more complex ones, but our simplest ways are the ones that define the basics of what we are made of.

    If you had at least given me a more complex expression you might have had an argument (as humans, our process can be far more complex and hide a little what we actually seem to be doing). But in instances like this one, where one person (you) responded with such an obvious statistical prediction of what needed to be said in a particular context, you just made my case. Thanks.
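    As a toy illustration of what “the most appropriate answer given what you know” could look like mechanically, here is a minimal bigram sketch in Python. The corpus and names are invented for the example, and this is of course a caricature, not how a brain or a real LLM works:

    ```python
    from collections import Counter, defaultdict

    # Toy bigram "parrot": count which word most often follows each word
    # in what it has "heard", then echo the most frequent continuation.
    corpus = "they just repeat the same propaganda they just repeat it".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_likely_next(word):
        # Pick the continuation seen most often after this word
        return follows[word].most_common(1)[0][0]

    print(most_likely_next("just"))  # "repeat": the most-seen continuation
    ```

    The parrot never “understands” anything here; it only reproduces the statistically dominant continuation for the context it finds itself in.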


  • but that doesn’t mean we can’t think of things we haven’t learned about.

    Can you think of a colour you have never seen? Could you imagine the colour green if you had never seen it?

    The creative process is more modification than creation. Taking some inputs, mixing them with other inputs, and producing an output that has parts of all our inputs: does it sound familiar? But without those inputs it seems impossible to create an output.

    And thus the importance of language for an actual intelligent consciousness. Without language the brain could only do direct modifications of the natural, external inputs. But with language the brain can take an external input, transform it into a “language output”, immediately take that “language output” back as an input, process it, and go on. I think that’s the core concept that makes humans different from any other species: this middle thing that we can use to dialogue with ourselves and push our minds further. Not every human may have a constant inner monologue, but every human is capable of talking to themself, and will probably do so when making a decision. Without language (language can take many forms, not just spoken language, though the more complex it is, the better it feels) I don’t know how this self-influence process could take place.
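    The “output immediately read back as input” loop can be sketched in a few lines of Python. Everything here is a made-up stand-in: `generate()` represents whatever process turns a thought into a “language output”:

    ```python
    # Toy "inner dialogue": each language output is immediately fed back
    # in as the next input, so the process can keep influencing itself
    # without any new external input.
    def generate(thought: str) -> str:
        # Hypothetical stand-in for producing a "language output"
        return thought + "!"

    state = "idea"  # one external input to start from
    for _ in range(3):
        state = generate(state)  # the output becomes the next input

    print(state)  # idea!!!
    ```

    The point of the sketch is only the wiring: after the first external input, the loop runs on its own outputs.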


  • It may be because of habit.

    But when I think of how to define a consciousness and distinguish it from instinct or reactiveness (like stopping at a red light), I think that something that makes a conscience a conscience must be that it is able to modify itself without external influence.

    A dog may be able to fully react to and learn how to react to the exterior. But can it modify itself the way a human brain can?

    A human being can sit alone in a room, start processing information by itself in a loop, completely change that flux of information into something different, and even change the brain in the process.

    For this to happen I think some form of language, some form of “speaking to yourself”, is needed: some way for the brain to generate an output that can immediately be taken as input.

    At this point, of course, this is far more philosophical than technical. And maybe even a matter of semantics about “what is a conscience”.


  • Read the other replies I gave on this same subject. I don’t want to repeat myself.

    But words DO define thoughts, and I gave several examples, some of them with kids. Precisely in kids you can see how language precedes actual thought. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don’t understand, simply because their beautifully plastic brains heard the same phrase in the same context.

    Dogs and cats are not proven to be conscious the way a human being is, precisely due to the lack of an articulate language. Or maybe not just language, but articulated thoughts. I think there may be a trend to humanize animals, mostly to give them more rights (and I do think a dog doesn’t need to have an intelligent consciousness for it to be bad to hit one), but I’m highly doubtful that dogs could develop a chain of thoughts that affects itself without external inputs, which seems a pretty important part of the consciousness experience.

    The article you linked is highly irrelevant (did you read it? Because I am also accusing you of not reading it, of it just being the result of a quick Google search to try to prove your point via an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously; I don’t even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain does not also take form due to the words that it learns.

    To give an easier-to-understand example: for a classical printing press to print books, the words of those books needed to be loaded into the press beforehand. And the press will only be able to print the letters that have been loaded into it.

    The user I replied to not only had read the article, but also kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and have actually put me on the way to being convinced of that. At least regarding the limitations described by the article.
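    The printing-press analogy a couple of paragraphs up can be put into code; the class and names below are invented purely for illustration:

    ```python
    # Printing-press analogy as code: the "press" can only emit words
    # that were loaded into it beforehand; anything else never comes out.
    class Press:
        def __init__(self, loaded_words):
            self.loaded = set(loaded_words)

        def print_page(self, words):
            # Only words previously loaded into the press are printed
            return [w for w in words if w in self.loaded]

    press = Press(["mom", "table", "dog"])
    print(press.print_page(["mom", "quantum", "table"]))  # ['mom', 'table']
    ```

    The press shapes what can be printed, just as (per the argument above) the words a brain learns shape what it can think.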


  • That seems a very reasonable approach to the impossibility of achieving AGI with current models…

    The first concept I was already kind of thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass. Following that same answer, with the current models they would probably need to have trillions of parameters just to stop hallucinating. Not to mention giving them the ability to do more things than just answering questions: this supposed AGI, even if it only worked with words, would need to be able to handle more “types of conversations” than just being the answerer in a question-answer dialogue.

    But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for doing different tasks on the go, if I have understood correctly. And it seems pretty clear that current models are unable to do that.

    Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

    Getting a little poetical: I don’t think the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’, therefore I am”.


  • But these “concepts” of things are built on the relation and interaction of these concepts with our brain.

    A baby isn’t born knowing that a table is a table. But they see a table, their parents say the word “table”, and they end up imprinting that what they have to say when they see that thing is the word “table”, which they can then relate to other things they know. I’ve watched some kids grow and learn how to talk lately, and it’s pretty evident how repetition precedes understanding. Many kids will just repeat words their parents said in a certain situation when they happen to be in the same situation. It’s pretty obvious with small kids. But it’s a behavior you can also see a lot in adults: just repeating something they heard once they see that those particular words fit the context.

    It’s also interesting that language can actually influence the way concepts are constructed in the brain. For instance, the ancient Greeks saw blue and green as the same colour, because they only had one word for both colours.



  • I’m glad you have the coherence to have complained about classical algorithms used to do exactly the same before. I remember you saying “‘Java is just a tool’ mf when it’s used to degrade society in every way imaginable” after learning how the NSA used Java-written programs to spy on everyone. Very glad to know you are not one of those people who can only justify their grudge against AI tools, while completely shutting their mouths about the many unethical uses of many other technologies.

    On a more serious note: sad to say, under capitalism every tool will be used to degrade society in every way imaginable. Going after the technologies is useless and counterproductive; the enemy is capitalism, not any particular technology. I know the campaign of Etsy artists against AI was strong, but we really need to take the analysis of what’s going on further than “AI bad because I can no longer charge a 200€ commission to some poor person who does not know how to paint for bad Sonic fanpics” or “AI is bad because I got denied the chance of massively owning Intellectual Property and becoming a billionaire just sitting on my IPs while being paid for anyone using AI trained on my Property”. Analysis should go further. We should question why people even need those side hustles for a living, and why we allow companies to use AI tools, or any other tools, to profit off of society. It makes no sense to just go against AI, when what’s hurting people is not AI or any other particular technology; it’s bad people.