Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hope all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That’s what I want.
AI people always want to ignore the environmental damage as well…
Like all that electricity and water are just super abundant things humans have plenty of.
Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.
This is my #1 issue with it. My work is super pushing AI. The other day I was trying to show a colleague how to do something in Teams, and as I was trying to explain it to them (and they were ignoring where I was telling them to click), they were like, “you know, this would be a great use of AI to figure it out!”
I said no and asked them to give me their fucking mouse.
People are really out there fucking with extremely powerful wasteful AI for something as stupid as that.
Maybe if the actual costs, especially the environmental costs from its energy use, were included in each query, we’d start thinking for ourselves again. It’s not worth it for most of the things it’s used for at the moment.
People haven’t “thought for themselves” since the printing press was invented. You gotta be more specific than that.
Ah, yes, the 14th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.
Independent thought? All relevant thought is highly dependent on other people and their thoughts.
That’s exactly why I bring this up. Having systems that teach people to think in a similar way enables us to build complex stuff and have a modern society.
That’s why it’s really weird to hear this “people should think for themselves” criticism of AI. It’s a similar justification to antivaxxers saying you “should do your own research”.
Surely there are better reasons to oppose AI?
The usage of “independent thought” has never meant “independent of all outside influence”; it has simply meant going through the process of reasoning (thinking through a chain of logic) instead of accepting and regurgitating the conclusions of others without any of one’s own reasoning. It’s a similar lay meaning as being an independent adult. We all rely on others in some way, but an independent adult can usually accomplish activities of daily living through their own actions.
Yeah but that’s not what we are expecting people to do.
In our extremely complicated world, most thinking relies on trusting sources. You can’t independently study and derive most things.
Otherwise everybody should do their own research about vaccines. But the reasonable thing is to trust a lot of other, more knowledgeable people.
My comment doesn’t suggest people have to run their own research study or develop their own treatise on every topic. It suggests people have to make a conscious choice, preferably with reasonable judgment, about which sources to trust, and to develop a lay understanding of the argument or conclusion they’re repeating. Otherwise you end up with people on the left and right reflexively saying “communism bad” or “capitalism bad” because their social media environment repeats it a lot, but they’d be hard pressed to give even a loosely representative definition of either.
This has very little to do with the criticism given by the first commenter. And you can use AI and do this, they are not in any way exclusive.
So your argument against AI is that it’s making us dumb? Just like people have claimed about every technology since the invention of writing? The essence of the human experience is change: we invent new tools, and those tools change how we interact with the world. That’s how it’s always been, and there have always been people saying the internet is making us dumb, or TV, or books, or whatever.
Get back to me after you have a few dozen conversations with people who openly say “Well I asked ChatGPT and it said…” without providing any actual input of their own.
Oh, you mean like people have been saying about books for 500+ years?
Not remotely the same thing. Books almost always have context on what they are, like having an author listed, and hopefully citations if they’re about real things. You can figure out more about them. LLMs create confident-sounding outputs that are just predictions of what an output should look like based on the input. They don’t reason, and they don’t tell you how they generated their responses.
The problem is LLMs are sold to people as Artifical Intelligence, so it sounds like it’s smart. In actuality, it doesn’t think at all. It just generates confident sounding results. It’s literally companies selling con(fidence) men as a product, and people fully trust these con men.
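To make the “just predictions” point concrete, here’s a toy word-level Markov chain in Python. It’s nothing like a real transformer internally, but it shows the shape of the idea: the output is sampled from “what usually comes next” in the training text, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

def train(words):
    """Count which word follows which in the training text."""
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table, start, length=10):
    """Sample a plausible-looking continuation, one word at a time."""
    word, out = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)  # picked by frequency, not by truth
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog slept on the rug".split()
print(generate(train(corpus), "the"))
# e.g. "the dog slept on the mat and the cat sat on ..."
```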
Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?
Obviously anyone who uses any technology needs to be aware of the limitations and pitfalls, but to imagine that this is some entirely new kind of uniquely-harmful thing is to fail to understand the history of technology and society’s responses to it.
I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.
I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Every step of any deductive process needs to be citable and traceable.
Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
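As a rough sketch of what per-query reporting could look like, here’s a back-of-the-envelope calculation in Python. Every number is a made-up placeholder; the whole point of the reporting requirement above is that the real figures would have to come from the companies themselves.

```python
# Illustrative only: every figure below is a placeholder, not a measured value.
TRAINING_ENERGY_KWH = 10_000_000           # hypothetical total energy to train the model
EXPECTED_LIFETIME_QUERIES = 1_000_000_000  # hypothetical queries served over the model's life
INFERENCE_ENERGY_KWH_PER_QUERY = 0.003     # hypothetical marginal energy per query
GRID_KG_CO2_PER_KWH = 0.4                  # hypothetical grid carbon intensity

def per_query_footprint():
    # Amortize the invested training cost over expected usage, then add the incremental cost.
    amortized_training = TRAINING_ENERGY_KWH / EXPECTED_LIFETIME_QUERIES
    total_kwh = INFERENCE_ENERGY_KWH_PER_QUERY + amortized_training
    return total_kwh, total_kwh * GRID_KG_CO2_PER_KWH

kwh, kg_co2 = per_query_footprint()
print(f"~{kwh:.4f} kWh and ~{kg_co2 * 1000:.1f} g CO2 per query (illustrative numbers)")
```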
> … I want clear evidence that the LLM … will never hallucinate or make something up.
Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.
Let’s say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material leads me to a different answer, every one of them wrong but confidently passed off as right. Then yes, that medical textbook should be banned.
Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.
If “they have to use good data and actually fact-check what they say to people” kills “all machine learning models”, then it’s a death they deserve.
The fact is that you can do the above, it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to reply to far fewer questions) than pretending to be the “answer to everything machine.”
The way generative AI works means that no matter how good the data, it’s still gonna bullshit and lie; it won’t “know” whether it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.
That’s how they work now, trained with bad data and designed to always answer with some kind of positive response.
They absolutely can be trained on actual data, trained to give less confident answers, and have an error checking process run on their output after they formulate an answer.
There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.
Even less existent is complete data.
Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.
They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.
It’s possible, it does not “doom” LLMs, it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.
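A minimal sketch of what that “check your sources or refuse to answer” step could look like. Every function here is a stub standing in for a real component (retrieval, generation, verification); it’s the shape of the pipeline, not anyone’s actual implementation.

```python
# Minimal "ground the answer or refuse" sketch. All functions are stand-ins.

def retrieve_sources(question, knowledge_base):
    """Stand-in retrieval: return documents sharing words with the question."""
    terms = set(question.lower().split())
    return [doc for doc in knowledge_base if terms & set(doc["text"].lower().split())]

def draft_answer(question, sources):
    """Stand-in for the generative step (this would be the LLM call)."""
    return sources[0]["text"] if sources else None

def answer_with_citations(question, knowledge_base):
    sources = retrieve_sources(question, knowledge_base)
    draft = draft_answer(question, sources)
    if draft is None:
        return "I don't know."  # refuse instead of guessing
    cites = ", ".join(s["source"] for s in sources)
    return f"{draft} [sources: {cites}]"

kb = [{"source": "textbook_ch3", "text": "Aspirin inhibits platelet aggregation."}]
print(answer_with_citations("How does aspirin affect platelets?", kb))
print(answer_with_citations("What is the dose for drug X?", kb))
```

Slower and narrower than an “answer to everything machine”, which is the point being made above.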
> Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Their creators can’t even keep them from deliberately lying.
Exactly.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
Failing that, any models built on stolen work should be released to the public for free.
This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone’s art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don’t care if they learned it from 10,000 people, list them.
Genuine curiosity. Not an attack. Did you download music illegally back in the day? Or torrent things? Do you feel the same about those copyrighted materials?
Nah not really. I think piracy is a complex issue though, with far less wide reaching collateral damage. I wouldn’t compare the two, personally.
Definitely need copyright laws. What if everything has to be watermarked in some way and it’s illegal to use AI generated content for commercial use unless permitted by creators?
The problem with trying to police the output is there isn’t a surefire way to detect the fact it’s generated. That’s why I prefer targeting the companies who created the problematic models.
But let’s say the model is released for free but people use it for commercial purposes. It seems the only solution is to mandate that all content a model is trained on or accesses is either original or used with the creator’s express permission. Nobody can release a model to the public which generates content based on “illegal” material.
I agree with that I think.
> If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
I’m going to ask the tough question: Why?
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
Copyright law lets you download whatever TF you want. It isn’t until you distribute said copyrighted material that you violate copyright law.
Before generative AI, Google screwed around internally with all those copyrighted works in dozens of different ways. They never asked permission from any of those copyright holders.
Why is that OK but doing the same with generative AI is not? I mean, really think about it! I’m not being ridiculous here, this is a serious distinction.
If OpenAI did all the same downloading of copyrighted content as Google and screwed around with it internally to train AI then never released a service to the public would that be different?
If I’m an artist that makes paintings and someone pays me to copy someone else’s copyrighted work, that’s on me to make sure I don’t do that. It’s not really the problem of the person that hired me to do it unless they distribute the work.
However, if I use a copier to copy a book then start selling or giving away those copies that’s my problem: I would’ve violated copyright law. However, is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?
If you believe that it’s not Xerox’s problem then you’re on the side of the AI companies. Because those companies that make LLMs available to the public aren’t actually distributing copyrighted works. They are, however, providing a tool that can do that (sort of). Just like a copier.
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
My argument is that there’s absolutely nothing illegal about it. They’re clearly not distributing copyrighted works. Not intentionally, anyway. That’s on the user. If someone constructs a prompt with the intention of copying something as closely as possible… To me, that is no different than walking up to a copier with a book. You’re using a general-purpose tool specifically to do something that’s potentially illegal.
So the real question is this: Do we treat generative AI like a copier or do we treat it like an artist?
If you’re just angry that AI is taking people’s jobs say that! Don’t beat around the bush with nonsense arguments about using works without permission… Because that’s how search engines (and many other things) work. When it comes to using copyrighted works, not everything requires consent.
> Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
No, they don’t. They index the content of the page and score its relevance and reliability, and they still provide the end user with the actual original information.
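Roughly, an index maps terms to the pages that contain them and points the user back at the original; it doesn’t regenerate the content. A toy version in Python (not how any real search engine is built, just the idea):

```python
from collections import defaultdict

pages = {
    "https://example.org/a": "copyright law and fair use",
    "https://example.org/b": "how search engines index pages",
}

# Build a toy inverted index: term -> the pages that contain it.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)

def search(query):
    """Return pointers back to the original pages, not a rewrite of them."""
    results = set(pages)
    for term in query.lower().split():
        results &= index.get(term, set())
    return sorted(results)

print(search("search index"))  # ['https://example.org/b']
```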
> However, if I use a copier to copy a book then start selling or giving away those copies that’s my problem: I would’ve violated copyright law. However, is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?
This is a false equivalence.
LLMs do not wholesale reproduce an original work in its original form; they make it easy to mass-produce a slightly altered form without any way to identify the original attribution.
> If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
I think this is intentionally missing the point.
LLMs don’t actually think, or produce original ideas. If the human artist produces a work that too closely resembles a copyrighted work, then they will be subject to those laws. LLMs are not capable of producing new works, by definition they are 100% derivative. But their methods in doing so intentionally obfuscate attribution and allow anyone to flood a space with works that require actual humans to identify the copyright violations.
Like the other comments say, LLMs (the thing you’re calling AI) don’t think. They aren’t intelligent. If I steal other people’s work and copy pieces of it and distribute it as if I made it, that’s wrong. That’s all LLMs are doing. They aren’t “being inspired” or anything like that. That requires thought. They are copying data and creating outputs based on weights that tell it how and where to put copied material.
I think the largest issue is people hearing the term “AI” and taking it at face value. There’s no intelligence, only an algorithm. It’s a convoluted algorithm where it’s hard to tell what’s going on just by looking at it, but it is an algorithm. There are no thoughts, only weights that are trained on data to generate predictable outputs based on given inputs. If I write an algorithm that steals art and reorganizes it into unique pieces, that’s still stealing their art.
For a current example, the stuff going on with Marathon is pretty universally agreed upon to be bad and wrong. However, you’re arguing if it was an LLM that copied the artist’s work into their product it would be fine. That doesn’t seem reasonable, does it?
My argument is that the LLM is just a tool. It’s up to the person that used that tool to check for copyright infringement. Not the maker of the tool.
Big company LLMs were trained on hundreds of millions of books. They’re using an algorithm that’s built on that training. To say that their output is somehow a derivative of hundreds of millions of works is true! However, how do you decide the amount you have to pay each author for that output? Because they don’t have to pay for the input; only the distribution matters.
My argument is that it is far too diluted to matter. Far too many books were used to train it.
If you train an AI with Stephen King’s works and nothing else then yeah: Maybe you have a copyright argument to make when you distribute the output of that LLM. But even then, probably not because it’s not going to be that identical. It’ll just be similar. You can’t copyright a style.
Having said that, with the right prompt it would be easy to use that Stephen King LLM to violate his copyright. The point I’m making is that until someone actually does use such a prompt no copyright violation has occurred. Even then, until it is distributed publicly it really isn’t anything of consequence.
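To put a rough number on “too diluted to matter,” here’s a back-of-the-envelope with entirely made-up figures:

```python
# Back-of-the-envelope with made-up numbers: even a large hypothetical royalty
# pool spread over every work in the training set comes out tiny per author.
ROYALTY_POOL_USD = 1_000_000_000      # hypothetical annual pool for rights holders
WORKS_IN_TRAINING_SET = 100_000_000   # hypothetical number of distinct works used

per_work = ROYALTY_POOL_USD / WORKS_IN_TRAINING_SET
print(f"${per_work:.2f} per work per year under these made-up assumptions")  # $10.00
```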
I run local models. The other day I was writing some code and needed to implement simplex noise, and LLMs are great for writing all the boilerplate stuff. I asked it to do it, and it did alright although I had to modify it to make it actually work because it hallucinated some stuff. I decided to look it up online, and it was practically an exact copy of this, down to identical comments and everything.
It is not too diluted to matter. You just don’t have the knowledge to recognize what it copies.
We’re making the same mistake with AI as we did with cars: not planning for the human future.
Cars were designed to atrophy muscles, and they polluted urban planning and the air.
AI is being designed to atrophy brains, and pollutes the air, the internet, public discourse, and more to come.

We should change course towards AI that makes people smarter, not dumber: AI-aided collaborative thinking.
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen

TBH, it’s mostly the corporate control and misinformation/hype that’s the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people’s capacity for critical thinking.
ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights…
So yeah, uh… Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where it only becomes viable for non-trivial use cases.
Idrc about ai or whatever you want to call it. Make it all open source. Make everything an ai produces public domain. Instantly kill every billionaire who’s said the phrase “ai” and redistribute their wealth.
Ya know what? Forget the AI criteria, let’s just have this for all billionaires.
I’m all for open source
But, you know, I was totally with you until you said kill all the billionaires.
Yeah not everyone has the stomach for class genocide
low-key maybe killing tons of people isn’t a nice idea
There’s loads of people who I don’t mind being killed. No tears for hanged Nazis. No remorse for guillotined royals. No problem with dead billionaires. They watch people die for profit; I’d watch them die for liberation.
I’d like to have laws that require AI companies to publicly list their sources/training materials.
I’d like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as “AI”.
I’d like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.
My big problems with AI right now are that we don’t know what info has been scooped up by them. Companies are pushing misleading products as AI while constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I’d also want to see protections to keep stupid and vulnerable people from believing AI-generated content is real. Remember, a few years ago we had to convince people not to eat Tide Pods. AI can be a very powerful tool for manipulating the ranks of stupid people.
I would likely have different thoughts on it if I (and others) were able to consent to my data being used to train it, or consent to even having it rather than it just showing up in an unwanted update.
My fantasy is for “everyone” to realize there’s absolutely nothing “intelligent” about current AI. There is no rationalization. It is incapable of understanding & learning.
ChatGPT et al are search engines. That’s it. It’s just a better Google. Useful in certain situations, but pretending it’s “intelligent” is outright harmful. It’s harmful to people who don’t understand that & take its answers at face value. It’s harmful to business owners who buy into the smoke & mirrors. It’s harmful to the future of real AI.
It’s a fad. Like NFTs and Bitcoin. It’ll have its die-hard fans, but we’re already seeing the cracks - it’s absorbed everything humanity’s published online & it still can’t write a list of real book recommendations. Kids using it to “vibe code” are learning how useless it is for real projects.
If we’re talking realm of pure fantasy: destroy it.
I want you to understand this isn’t my sentiment about AI as a whole; I understand why the idea is appealing, how it could be useful, and in some ways may seem inevitable.
But a lot of sci-fi doesn’t really address the run-up to AI; in fact, a lot of it just kind of assumes there’ll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how it’s corporations pushing the development, not academia.
Put it out of its misery.
I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.
I’d like to see it used for medicine.
I’m not a fan of AI because I think the premise of analyzing and absorbing work without consent from creators is, at its core, bullshit.
I also think that AI is another step into government spying in a more efficient manner.
Since AI learns from human content without consent, I think government should figure out how to socialize the profits. (Probably will never happen)
Also, they should regulate how data is stored and ensure videos are clearly labeled if made with AI.
They also have to be careful to protect victims from revenge porn or similar generated content, and make sure people are held accountable.
I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (open to everyone, I feel, would be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can’t validate what it finds, you shouldn’t be using it).
I’m not a fan of these next-gen IRC chat bots that have companies hammering sites all over the web to siphon up data they shouldn’t be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few others mention, if their bots have been trained on unauthorized data sets, they should be forced to open source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).
That’s what I’d like to see more of, too: use it to cure fucking cancer already. Make it free to the legit medical institutions, train doctors how to use it. I feel like we’re sitting on a goldmine and all we’re doing with it is stealing other people’s intellectual property and making porn and shitty music.
Firings and jail time.
In lieu of that, high fines and firings.