• 6 Posts
  • 138 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • RuneScape

    I don’t personally play it anymore, but the fact that it still has players to this day and is thriving proves it.

    Edit: On second thought, it’s probably Java Edition Minecraft. The thing spawned an industry around it. Now people’s livelihoods depend on it. A generation learned Java programming just to make plugins or mods. I bet this also increased the number of Java developers. It probably inadvertently introduced people to Blender as well. Being accessible and a sandbox made that possible.


  • That is interesting, thanks for this. I’ll try to address some of your questions; let me know what you think.

    “what model do we use? Based on what data - since it is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it?”

    I imagine a government like this would still not be fully run by AI. Proposed laws would still have a human touch; perhaps the AI would act almost like an assistant per citizen. Citizens would be briefed on the proposed laws and vote on them, or, if they give consent, have the AI vote for them and argue about them on the floor on their behalf.

    In the end, the president, or whoever human is at the very top, still has the final say on whether to approve the proposed law.

    The model could be based on whatever is available today or in the future, or on a curated model. Though I agree that its being biased could be a huge blocker, we humans are also inherently biased; maybe that is something we just need to be aware of, if it cannot be removed at all, under this kind of government.

    If the law breaks the constitution, for example, there will still be the supreme court, who are all humans, to declare the law invalid.

    Rather than having a representative who may or may not be contactable, depending on how relevant you are to that human representative.

    “This is inherently flawed because it means that the existing chat history will sort of lead the future responses, but it’s incredibly limited due to context size requiring such vast amounts of vram / ram and processing power.”

    Wouldn’t that be ideal? It would mean this LLM inherently knows your choices and beliefs, aside from the huge increase in processing power needed. If a person decides his AI assistant no longer aligns with his views, he can then correct it.