Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • november@lemmy.vg · 10 days ago

    I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That’s what I want.

    • givesomefucks@lemmy.world · 10 days ago

      AI people always want to ignore the environmental damage as well…

      Like all that electricity and water are just super abundant things humans have plenty of.

      Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.

      • garbagebagel@lemmy.world · 9 days ago

        This is my #1 issue with it. My work is super pushing AI. The other day I was trying to show a colleague how to do something in Teams, and as I was trying to explain (and they were ignoring where I was telling them to click), they said, "You know, this would be a great use of AI to figure it out!"

        I said no and asked them to give me their fucking mouse.

        People are really out there fucking with extremely powerful wasteful AI for something as stupid as that.

    • anomnom@sh.itjust.works · 10 days ago

      Maybe if the actual costs—especially including environmental costs from its energy use—were included in each query, we’d start thinking for ourselves again. It’s not worth it for most things it’s used for at the moment

    • nimpnin@sopuli.xyz · 10 days ago

      People haven't "thought for themselves" since the printing press was invented. You gotta be more specific than that.

      • MudMan@fedia.io · 10 days ago

        Ah, yes, the 14th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.

        • nimpnin@sopuli.xyz · 10 days ago

          Independent thought? All relevant thought is highly dependent on other people and their thoughts.

          That's exactly why I bring this up. Having systems that teach people to think in a similar way enables us to build complex stuff and have a modern society.

          That's why it's really weird to hear this "people should think for themselves" criticism of AI. It's a similar justification to antivaxxers saying you "should do your own research".

          Surely there are better reasons to oppose AI?

          • Soleos@lemmy.world · 10 days ago

            The usage of "independent thought" has never meant "independent of all outside influence"; it has simply meant going through the process of reasoning (thinking through a chain of logic) instead of accepting and regurgitating the conclusions of others without any reasoning of one's own. The lay meaning is similar to that of being an independent adult: we all rely on others in some way, but an independent adult can usually accomplish the activities of daily living through their own actions.

            • nimpnin@sopuli.xyz · 10 days ago

              Yeah but that’s not what we are expecting people to do.

              In our extremely complicated world, most thinking relies on trusting sources. You can’t independently study and derive most things.

              Otherwise everybody should do their own research about vaccines. But the reasonable thing is to trust a lot of other, more knowledgeable people.

              • Soleos@lemmy.world · 9 days ago

                My comment doesn't suggest people have to run their own research study or develop their own treatise on every topic. It suggests people have to make a conscious choice, preferably with reasonable judgment, about which sources to trust, and to develop a lay understanding of the argument or conclusion they're repeating. Otherwise you end up with people on the left and right reflexively saying "communism bad" or "capitalism bad" because their social media environment repeats it a lot, but they'd be hard-pressed to give even a loosely representative definition of either.

                • nimpnin@sopuli.xyz · 9 days ago

                  This has very little to do with the criticism given by the first commenter. And you can use AI and still do this; the two are not in any way exclusive.

    • Libra00@lemmy.ml · 10 days ago

      So your argument against AI is that it's making us dumb? Just like people have claimed about every technology since the invention of writing? The essence of the human experience is change: we invent new tools, and then those tools change how we interact with the world. That's how it's always been, and there have always been people saying the internet is making us dumb, or TV, or books, or whatever.

      • november@lemmy.vg · 10 days ago

        Get back to me after you have a few dozen conversations with people who openly say “Well I asked ChatGPT and it said…” without providing any actual input of their own.

        • Libra00@lemmy.ml · 10 days ago

          Oh, you mean like people have been saying about books for 500+ years?

          • Cethin@lemmy.zip · 10 days ago

            Not remotely the same thing. Books almost always have context on what they are, like having an author listed, and hopefully citations if they're about real things. You can figure out more about them. LLMs create confident-sounding outputs that are just predictions of what an output should look like based on the input. They don't reason, and they don't tell you how they generated a response.

            The problem is LLMs are sold to people as Artificial Intelligence, so it sounds like they're smart. In actuality, they don't think at all; they just generate confident-sounding results. It's literally companies selling con(fidence) men as a product, and people fully trust these con men.
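
            A minimal sketch of that prediction loop, assuming nothing more than word-frequency counts (a real LLM is a neural network over subword tokens, but the generate-by-prediction shape is the same):

                # A toy "next word predictor": it counts which word follows which in a
                # tiny corpus, then always emits the statistically likeliest continuation.
                # Nothing in this loop checks whether the output is true.
                from collections import Counter, defaultdict

                corpus = ("the model sounds confident . the model predicts the next word . "
                          "the next word is chosen by frequency , not by truth .").split()

                following = defaultdict(Counter)
                for prev, nxt in zip(corpus, corpus[1:]):
                    following[prev][nxt] += 1

                def generate(start: str, length: int = 12) -> str:
                    words = [start]
                    for _ in range(length):
                        options = following.get(words[-1])
                        if not options:
                            break
                        words.append(options.most_common(1)[0][0])  # likeliest continuation, truth-blind
                    return " ".join(words)

                print(generate("the"))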

            • Libra00@lemmy.ml · 10 days ago

              Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?

              Obviously anyone who uses any technology needs to be aware of the limitations and pitfalls, but to imagine that this is some entirely new kind of uniquely harmful thing is to fail to understand the history of technology and society's responses to it.

  • BertramDitore@lemm.ee · 10 days ago

    I want real, legally binding regulation that's completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

    I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

    Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

    Every step of any deductive process needs to be citable and traceable.

    • DomeGuy@lemmy.world · 10 days ago

      Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
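
      A rough sketch of what that combined report could look like; every figure below is a made-up placeholder, purely to show the amortization arithmetic:

          # Hypothetical per-query energy report: marginal inference energy plus the
          # one-time training energy amortized over an assumed lifetime query count.
          # All numbers are invented placeholders, not measurements of any real model.
          TRAINING_ENERGY_KWH = 1_000_000.0        # assumed one-time training cost
          EXPECTED_LIFETIME_QUERIES = 500_000_000  # assumed total queries served
          MARGINAL_KWH_PER_QUERY = 0.003           # assumed energy per query

          def per_query_report() -> dict:
              amortized_training = TRAINING_ENERGY_KWH / EXPECTED_LIFETIME_QUERIES
              return {
                  "marginal_kwh": MARGINAL_KWH_PER_QUERY,
                  "amortized_training_kwh": amortized_training,
                  "total_kwh": MARGINAL_KWH_PER_QUERY + amortized_training,
              }

          print(per_query_report())  # with these placeholders, roughly 0.005 kWh per query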

    • Maeve@kbin.earth · 10 days ago

      Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

      Their creators can’t even keep them from deliberately lying.

    • davidgro@lemmy.world · 10 days ago

      … I want clear evidence that the LLM … will never hallucinate or make something up.

      Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.

      • BertramDitore@lemm.ee · 9 days ago

        Let's say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material gives me a different answer, every one of them wrong but confidently passed off as right. Then yes, that medical textbook should be banned.

        Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.

      • mosiacmango@lemm.ee · 10 days ago

        If "they have to use good data and actually fact-check what they say to people" kills "all machine learning models," then it's a death they deserve.

        The fact is that you can do the above; it's just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to reply to far fewer questions) than pretending to be the "answer to everything" machine.

        • Redex@lemmy.world · 10 days ago

          The way generative AI works means that no matter how good the data is, it's still gonna bullshit and lie; it won't "know" whether it knows something or not. It's a chaotic process; no ML algorithm has ever produced 100% correct results.

          • mosiacmango@lemm.ee · 9 days ago

            That’s how they work now, trained with bad data and designed to always answer with some kind of positive response.

            They absolutely can be trained on actual data, trained to give less confident answers, and have an error checking process run on their output after they formulate an answer.
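
            A minimal sketch of what that kind of output check could look like, assuming a hand-curated fact table stands in for real expert sources (a production system would retrieve and compare against the sources themselves):

                # Toy post-generation review: each claim in a draft answer is compared
                # against a curated table of vetted statements; anything unsupported is
                # flagged instead of being stated confidently. The facts and phrasing
                # here are invented purely for illustration.
                TRUSTED_FACTS = {
                    "water boils at 100 c at sea level": True,
                    "the moon is made of cheese": False,
                }

                def review(draft_claims: list[str]) -> list[str]:
                    reviewed = []
                    for claim in draft_claims:
                        verdict = TRUSTED_FACTS.get(claim.lower())
                        if verdict is True:
                            reviewed.append(claim)
                        elif verdict is False:
                            reviewed.append(f"[RETRACTED, contradicts sources] {claim}")
                        else:
                            reviewed.append(f"[UNVERIFIED, answer with low confidence] {claim}")
                    return reviewed

                for line in review(["Water boils at 100 C at sea level",
                                    "The moon is made of cheese",
                                    "Napoleon owned a smartphone"]):
                    print(line)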

            • davidgro@lemmy.world · 9 days ago

              There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.

              Even less existent is complete data.

              • mosiacmango@lemm.ee · 9 days ago

                Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what's possible as an excuse to not even try.

                They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.

                It's possible; it does not "doom" LLMs. It just massively increases their accuracy and actual utility, at the cost of money, effort, and killing the VC hype cycle.

  • GregorGizeh@lemmy.zip · 10 days ago

    Wishful thinking? Models trained on illegal data get confiscated, the companies dissolved, and the CEOs and board members made liable for the damages.

    Then a reframing of these BS devices from "AI" to what they actually do: brewing up statistical probability amalgamations of their training data. Then use them accordingly. They aren't worthless or useless; they are just being shoved into roles they cannot perform in the name of cost cutting.

  • TimLovesTech (AuDHD)(he/him)@badatbeing.social · 10 days ago

    I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (open to everyone, I feel, would be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can't validate what it finds, you shouldn't be using it).

    I'm not a fan of these next-gen IRC chatbots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few mention, if their bots have been trained on unauthorized data sets, they should be forced to open-source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).

    • grasshopper_mouse@lemmy.world · 10 days ago

      That's what I'd like to see more of, too: use it to cure fucking cancer already. Make it free to legit medical institutions and train doctors how to use it. I feel like we're sitting on a goldmine and all we're doing with it is stealing other people's intellectual property and making porn and shitty music.

  • MochiGoesMeow@lemmy.zip · 9 days ago

    I'm not a fan of AI because I think the premise at its core, analyzing and absorbing work without consent from creators, is bullshit.

    I also think that AI is another step toward more efficient government spying.

    Since AI learns from human content without consent, I think the government should figure out how to socialize the profits. (That will probably never happen.)

    They should also regulate how data is stored, and ensure videos are clearly labeled if made with AI.

    They also have to be careful to protect victims from revenge porn and similar content, and make sure people are held accountable.

  • traches@sh.itjust.works · 10 days ago

    I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

  • justOnePersistentKbinPlease@fedia.io · 10 days ago

    They have to pay for every piece of copyrighted material used in the entire model whenever the AI is queried.

    They are only allowed to use data that people opt into providing.

    • Bob Robertson IX @discuss.tchncs.de · 10 days ago

      There's no way that's even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So any images that anyone can go and look at without a barrier in the way would be fair game, but the model would be owned by the public.

    • venusaur@lemmy.world (OP) · 10 days ago

      This definitely relates to moral concerns. Are there other examples like this of a company that is allowed to profit off of other people’s content without paying or citing them?

  • audaxdreik@pawb.social · 10 days ago

    If we’re talking realm of pure fantasy: destroy it.

    I want you to understand that this is not my sentiment toward AI as a whole; I understand why the idea is appealing, how it could be useful, and why in some ways it may seem inevitable.

    But a lot of sci-fi doesn't really address the run-up to AI; in fact, a lot of it just kind of assumes there'll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how it's corporations pushing the development, not academia.

    Put it out of its misery.

  • HeartyOfGlass@lemm.ee · 8 days ago

    My fantasy is for "everyone" to realize there's absolutely nothing "intelligent" about current AI. There is no reasoning. It is incapable of understanding and learning.

    ChatGPT et al. are search engines. That's it. They're just a better Google. Useful in certain situations, but pretending they're "intelligent" is outright harmful. It's harmful to people who don't understand that and take the answers at face value. It's harmful to business owners who buy into the smoke and mirrors. It's harmful to the future of real AI.

    It's a fad, like NFTs and Bitcoin. It'll have its die-hard fans, but we're already seeing the cracks: it's absorbed everything humanity's published online and it still can't write a list of real book recommendations. Kids using it to "vibe code" are learning how useless it is for real projects.

  • Justdaveisfine@midwest.social · 10 days ago

    I would likely have different thoughts on it if I (and others) were able to consent to my data being used to train it, or to consent to even having it, rather than it just showing up in an unwanted update.

  • DomeGuy@lemmy.world · 10 days ago

    Honestly, at this point I’d settle for just “AI cannot be bundled with anything else.”

    Neither my cell phone nor TV nor thermostat should ever have a built-in LLM “feature” that sends data to an unknown black box on somebody else’s server.

    (I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)

  • SinningStromgald@lemmy.world · 10 days ago

    Ideally the whole house of cards crumbles and AI goes the way of 3D TVs, for now. The world as it is now is not ready for AGI. We would quickly end up in an "I Have No Mouth, and I Must Scream" scenario.

    Otherwise, what everyone else has posted are good starting points. I would just add that any data centers used for AI have to be powered 100% by renewable energy.

  • subignition@fedia.io · 10 days ago

    Training data needs to be 100% traceable and licensed appropriately.

    Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

    Any model whose training includes data in the public domain should itself become public domain.

    And while we're at it, we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water that goes to cooling these facilities.
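
    On that last point, a back-of-the-envelope sketch of why slower clocks can cut the cooling load, using the standard dynamic-power approximation (power roughly proportional to voltage squared times frequency); the operating points below are illustrative, not real chip specs:

        # Dynamic CMOS power scales roughly with V^2 * f, and lower clock speeds
        # usually permit lower voltage, so a modest slowdown can cut power (and
        # the heat that needs cooling) disproportionately. Illustrative numbers only.
        def relative_dynamic_power(voltage: float, freq_ghz: float,
                                   base_voltage: float = 1.0, base_freq_ghz: float = 2.0) -> float:
            return (voltage ** 2 * freq_ghz) / (base_voltage ** 2 * base_freq_ghz)

        # Hypothetical slower operating point: 80% of the voltage, 70% of the clock.
        print(f"power vs. baseline: {relative_dynamic_power(0.8, 1.4):.2f}x")  # ~0.45x
        # The job takes ~1.4x longer, so energy per job still drops to roughly 0.64x.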