We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

      • MagicShel@lemmy.zip · +38 · edited · 4 days ago

        If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.

  • dalekcaan@lemm.ee · +168 · 4 days ago

    adding missing information and deleting errors

    Which is to say, “I’m sick of Grok accurately portraying me as an evil dipshit, so I’m going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings.”

    • bean@lemmy.world · +3 · 4 days ago

      That is definitely how I read it.

      History can’t just be ‘rewritten’ by A.I. and taken as truth. That’s fucking stupid.

  • maxfield@pf.z.org · +115 / -2 · 4 days ago

    The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.

        • MajinBlayze@lemmy.world · +12 · edited · 4 days ago

          Try rereading the whole tweet; it's not very long. It specifically says that they plan to "correct" the dataset using Grok, then retrain with that dataset.

          It would be way too expensive to go through it by hand.

        • zqps@sh.itjust.works · +2 · 4 days ago

          Yes.

          He wants to prompt grok to rewrite history according to his worldview, then retrain the model on that output.

    • WizardofFrobozz@lemmy.ca · +0 / -1 · 4 days ago

      To be fair, your brain is a pattern-matching system.

      When you catch a ball, you're not doing the physics calculations in your head; you're making predictions based on an enormous quantity of input. Unless you're being very deliberate, you're not thinking before you speak every word: your brain's predictive processing takes over, and you often literally speak before you think.

      Fuck LLMs, but I think it's a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.

    • zildjiandrummer1@lemmy.world · +5 / -7 · 4 days ago

      Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.

      Modern AI is more than just “pattern matching” at this point. Yes at the lowest levels, sure that’s what it’s doing, but then you could also say human brains are just pattern matching at that same low level.

      • queermunist she/her@lemmy.ml · +9 · 4 days ago

        Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history. What the fuck?!

        • zildjiandrummer1@lemmy.world · +0 / -1 · 3 days ago

          That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.

          What I did say (and I've been doing AI research since the AlexNet days…) is that LLMs aren't old-school ML systems. We're at the point where simply scaling up to insane levels has yielded results that no one expected, but it was the lowest-hanging fruit at the time. Going from few-shot learning to novel-space generalization is very hard, so the easiest method was just to take what was currently done and make it bigger (à la ResNet back in the day).

          Lemmy is almost as bad as Reddit when it comes to hiveminds.

          • queermunist she/her@lemmy.ml · +1 · edited · 2 days ago

            You literally called it borderline magic.

            Don't do that? They're pattern-recognition engines; they can produce some neat results, and they're good for niche tasks and interesting as toys, but they really aren't that impressive. This "borderline magic" line is why they're trying to shove these chatbots into literally everything, even though they aren't good at most tasks.

  • Crikeste@lemm.ee · +16 / -1 · 3 days ago

    So they’re just going to fill it with Hitler’s world view, got it.

    Typical and expected.

  • finitebanjo@lemmy.world · +41 · 4 days ago

    “If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!”

    ~Fucking Dumbass
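    The sarcasm lands on a real failure mode: a student model trained on a teacher's output inherits the teacher's error rate as a ceiling, and if the student is equally noisy the errors compound multiplicatively (0.84 × 0.84 + 0.16 × 0.16 ≈ 0.73, not 1.68). A minimal simulation with made-up binary labels, purely to illustrate the arithmetic:

    ```python
    import random

    random.seed(0)

    N = 100_000
    TEACHER_ACC = 0.84  # accuracy of the model whose output becomes the new "corpus"

    # Ground-truth binary labels
    truth = [random.randint(0, 1) for _ in range(N)]

    # Teacher output: correct with probability 0.84, flipped otherwise
    teacher = [t if random.random() < TEACHER_ACC else 1 - t for t in truth]

    # A student that perfectly fits the teacher's output merely reproduces it;
    # a student with the same error rate compounds errors on top of it
    perfect_student = teacher
    noisy_student = [t if random.random() < TEACHER_ACC else 1 - t for t in teacher]

    acc = lambda pred: sum(p == t for p, t in zip(pred, truth)) / N
    print(f"teacher vs truth:      {acc(teacher):.3f}")          # ~0.84
    print(f"perfect student:       {acc(perfect_student):.3f}")  # ~0.84, the ceiling
    print(f"equally noisy student: {acc(noisy_student):.3f}")    # ~0.73 (0.84*0.84 + 0.16*0.16)
    ```

    Repeat the process for a few more generations and accuracy drifts toward chance, which is the "model collapse" risk of retraining on a model's own output.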

  • brucethemoose@lemmy.world · +44 · edited · 4 days ago

    I elaborated below, but basically Musk has no idea WTF he’s talking about.

    If I had his "f you" money, I'd at least try a diffusion or bitnet model (and open the weights for others to improve on), and probably 100 other papers I consider low-hanging fruit, before this absolutely dumb boomer take.

    He's such an idiot know-it-all. It's so painful whenever he ventures into a field you sorta know.

    But he might just be shouting nonsense on Twitter while X employees actually do something different, because if they take his orders verbatim, they're going to get crap models, even with all the stupid brute force they have.
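    For readers unfamiliar with the name-drop: "bitnet" refers to ternary-weight architectures along the lines of BitNet b1.58, where each weight is quantized to {-1, 0, +1} with a per-tensor absmean scale. A simplified sketch of that quantization step (the function names and shapes here are illustrative, not any real API):

    ```python
    import numpy as np

    def absmean_ternary(w: np.ndarray):
        """Quantize a weight tensor to {-1, 0, +1} with an absmean
        scale, in the style of BitNet b1.58 (simplified sketch)."""
        gamma = np.mean(np.abs(w)) + 1e-8        # per-tensor scale
        q = np.clip(np.round(w / gamma), -1, 1)  # ternary weights
        return q, gamma

    def dequantize(q: np.ndarray, gamma: float) -> np.ndarray:
        # Recover an approximation of the original weights
        return q * gamma

    w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
    q, gamma = absmean_ternary(w)
    print(np.unique(q))  # values drawn from {-1, 0, 1}
    ```

    The appeal is memory and compute: ternary weights fit in under two bits each and turn most multiplies into adds, which is why it gets cited as low-hanging fruit.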

  • Lumidaub@feddit.org · +63 · edited · 4 days ago

    adding missing information

    Did you mean: hallucinate on purpose?

    Wasn’t he going to lay off the ketamine for a while?

    Edit: … I hadn't seen the More Context and now I need a fucking beer or twenty fffffffffu-

    • Carmakazi@lemmy.world · +30 · 4 days ago

      He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.

    • BreadstickNinja@lemmy.world · +3 · 4 days ago

      Yeah, let's take a technology already known for filling in gaps with invented nonsense and use that as our new training paradigm.

  • Deflated0ne@lemmy.world · +40 · 4 days ago

    Dude is gonna spend Manhattan Project-level money making another stupid fucking shitbot, trained on regurgitated AI slop.

    Glorious.