LLMs are neural networks! Yes, they are trained on meaningful text to predict the following word, but they are still NNs. And after they are trained on human-generated text, they can also be trained further with other sources and in other ways. The question is how an interaction between LLMs should be evaluated. When does an LLM find one good word, or a series of them? I have not described this, and I am also not sure what would be a good way to evaluate that.
Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.
Maybe I did not articulate my thoughts well enough. But it feels like people want to misinterpret what I'm saying.
Does the A in 18A stand for ångström? Can they even produce anything below 10 nm?