• Lovable Sidekick@lemmy.world · 6 days ago

    The AI haters will hate this, but I think AI is gonna provide the push that forces the fundamental changes we want. You can only replace so many people with AI and robots. The theoretical point of zero employees also means zero customers, because nobody has any money to buy anything, so making employees obsolete makes business and profits obsolete. In the real world the system will change long before that point, because it will have to. It might be from food riots and social breakdown, or political movements finally taking hold, I don’t know, but AI will make the profit system eat itself. I’m just not looking forward to the extremely difficult transition period.

        • jsomae@lemmy.ml · 5 days ago

          In that case, you should know that Geoff Hinton (the guy whose lab kicked off the whole AI revolution last decade) quit Google in order to warn about the existential risk of AI. He believes there’s at least a 10% chance that it will kill us all within 30 years. Ilya Sutskever, his former student and co-founder of OpenAI, believes similarly, which is why he quit OpenAI and founded Safe Superintelligence (yes, that basic HTML document really is their homepage) to help solve the alignment problem.

          You can also find popular rationalist AI pundits like gwern, acx, yudkowsky, etc. voicing similar concerns, with P(doom) estimates ranging from low to laughably high.

          • Lovable Sidekick@lemmy.world · 4 days ago

            Yes, I know: the robot apocalypse that people seem desperate to be afraid of is always just around the corner. Geoff Hinton, while a definite pioneer in AI, didn’t kick anything off; he was one of a large number of people working on it, and one of a small number predicting armageddon.

            • jsomae@lemmy.ml · 4 days ago

              The reason it’s always just around the corner is that there is very strong evidence we’re approaching the singularity. Why do you sound sarcastic saying this? What probability would you assign to an AI apocalypse in the next three decades?

              Geoff Hinton absolutely kicked things off. Everybody else had given up on neural nets for image recognition, but his breakthrough renewed interest throughout the world. We wouldn’t have deepdreaming slugdogs without him.

              It should not be surprising that most people in the field of AI are not predicting armageddon, since it would be harmful to their careers to do so. Hinton is also not predicting the apocalypse: he’s saying a 10-20% chance, which, strictly speaking, is a prediction that it probably won’t happen.

              • Lovable Sidekick@lemmy.world · 3 days ago

                I’m sarcastic because I would assign it about the same probability as a zombie apocalypse. At the nuts-and-bolts level I think they’re both technically flawed Hollywood fantasies.

                What does an AI apocalypse even look like to you? Computers launching nuclear missiles or what? Shutting down power grids?

                • jsomae@lemmy.ml · 3 days ago

                  Please assign probabilities to the following (for the next 3 decades); a worked example of how they combine follows the list:

                  1. probability that an AI smarter than any human at every intellectual task a human can do comes to exist (superintelligence);
                  2. given (1), probability it decides to kill all humans to achieve its goals (misaligned);
                  3. given (2), probability it is successful at killing all humans;

                  bonus: given 1 and 2, probability that we don’t even notice it wants to kill us, e.g. because we don’t know how to understand what it’s thinking.
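
                  To be explicit about how these combine: the overall risk is the product of the chain, P(doom) = P(1) × P(2 given 1) × P(3 given 1 and 2). As a purely illustrative calculation with made-up numbers (mine, not Hinton’s or anyone else’s): 0.5 × 0.4 × 0.5 = 0.10, i.e. a 10% chance over the period.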

                  Since the AI is smarter than me, I only need to propose one plausible method by which it could exterminate all humans; it can come up with a method at least as good as mine, most likely something much better. The typical answer here would be that it bio-engineers a lethal virus that is initially harmless (to avoid detection) but responds to some trigger, like the introduction of a certain chemical or maybe a strong radio signal. If it’s very smart and has a very good understanding of bioengineering, it should be able to produce a virus like this by paying a laboratory to, e.g., perform some CRISPR operations on an existing strain (or even just mix some chemicals together if Sagan turns out to be right about bioengineering) and mail a sample somewhere. It can wait until everyone is infected before triggering the strain.