Let’s be honest, that poor excuse for a robotic monkey is a better and more loving parent than what most of us got.
This is an interesting comparison because the wire monkey study suggests that we need physical contact from a caregiver more than nourishment. In the case of AI, we’re getting some sort of mental nourishment from the AI, but no physical contact.
The solution? AI tools integrated into either hyper-realistic humanoid robots, or human robo-puppets.
Or we could leverage our advancing technology to support the working class by implementing UBI through reduced production costs and a more even distribution of wealth and resources.
But who wants that? I, a billionaire, sure don’t.
I mean, last week it was all over the news that Mattel and OpenAI made a deal to put ChatGPT in toys such as Barbie.
Oh freaky! That’s a huge liability though. I don’t see that happening with a model anywhere close to what we’re using in ChatGPT.
Put that shit in a Furby or a 1993 Toy Biz voice bot.
How about just hugging a real human? Problem solved
How will they sell a human at the lowest cost? People have to eat and sleep.
Slavery was made illegal decades ago
They might not be able to feed your brain
ELIZA, the first chatbot, created in the ’60s, just parroted your response back to you:
“I’m feeling depressed.”
“Why do you think you’re feeling depressed?”
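For the curious, the core trick can be sketched in a few lines. This is just an illustration of the parrot-it-back idea, not Weizenbaum’s actual DOCTOR script; the word list here is made up for the example.

```python
# Toy ELIZA-style reflection: swap pronouns and bounce the statement
# back as a question. Illustrative only; the real ELIZA (1966) used a
# much richer pattern-matching script.

REFLECTIONS = {
    "i": "you",
    "i'm": "you're",
    "my": "your",
    "me": "you",
    "am": "are",
    "was": "were",
}

def respond(utterance: str) -> str:
    # Lowercase, strip trailing punctuation, swap each word we recognize.
    words = utterance.lower().rstrip(".!?").split()
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    return f"Why do you think {reflected}?"

print(respond("I'm feeling depressed"))
# -> Why do you think you're feeling depressed?
```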
It was incredibly basic, and its inventor, Weizenbaum, didn’t think it was particularly interesting, but he had his secretary try it and she became addicted. So much so that she asked him to leave the room while she “talked” to it.
She knew it was just repeating what she said back to her in the form of a question, but she formed a genuine emotional bond with it.
Now that chatbots are more sophisticated, it really highlights how our idiot brains just want something to talk to; whether we know it’s real or not doesn’t really matter.
One of the last posts I read on Reddit was from a student in a CompSci class whose professor put a pair of googly eyes on a pencil and said, “I’m Petie the Pencil! I’m not sentient, but you think I am because I can say full sentences.” The professor then snapped the pencil in half, which made the students gasp.
The point was that humans anthropomorphize things that seem human, assigning them characteristics that make us bond with things that aren’t real.
That or the professor was stronger than everyone thought
Depends. I think I’m on the autistic spectrum; I just don’t see them as equals, but as tools.
I’m not on the autistic spectrum. They aren’t equals, and they’re barely tools.
They are good tools for communicating with the robots in management. ChatGPT, please output some corpobullshit to answer this form I was given and have no respect for.
Does anyone know the name of this monkey or experiment? It’s kind of harrowing seeing the expression on its face. It looks desperate for affection to the point of dissociation.
Here’s a video for context https://youtu.be/-Qi7txH1KzY
Absolute horror.
The context makes it even more heartbreaking.
The experiment was done by Harry Harlow, but I don’t think the name of the monkey was given; it could have just been a number :(
Thank you so much. I’ve found a Wikipedia page on him and his research, so I’ll give it a read. The poor monkey. https://en.wikipedia.org/wiki/Harry_Harlow
A colleague is all in on AI. She sends these elaborate AI-generated notes from our call transcripts that she is so proud of. I really hope she hasn’t read any of them, because they’re often quite disconnected from what actually occurred on the call. If she is reading them and sending them anyway… Wow.
Probably not reading them. A family member told me at their work someone had an LLM summarize an issue spread out over a long email chain and sent the summary to their boss, who had an LLM summarize the summary.
From experience, people who tend to do this wouldn’t understand the issue even if they spent all the time in the world reading the email chain, or attending the meeting.
That’s what gives them the false sense that AI is helping: the AI is about as good as they are at comprehension, and it then says plausible things that aren’t real. On the upside, it takes 2 seconds instead of an hour.
Most people don’t read them. It reminds me of the days before AI, when you’d spend time writing up those email summaries to send to the team, only for nobody to read them. I proved this to my boss: for 4 weeks straight I embedded a note saying the first person to email this address would get $50. I never had to pay out, because I was right 4 weeks in a row before I stopped. So many emails and newsletters in companies exist just because that’s how “proper communication” is done. It’s just mindless busywork that wastes my time.
I didn’t know those could be so far off. About a year ago we were playing with Zoom’s AI meeting recorder, and it was astonishing how accurate the summary was. Hell, it could even tell when I was joking, which was a bit eerie.
I’ve not had much of an issue; my guess is her prompts aren’t great, or she’s combining it with really poorly taken notes?
I feel more like I’ve got the wire monkey mother from that same experiment.
We love cloth mother, way better than wire mother, gotta say
Where does scrub daddy factor into this?
Damn, wire mother is going to dig into my brain.
Yes… very apt comparison.
Cloth AI will love and comfort us until the end of our days.
Which will be soon, because only Wire Computer provides us with actual sustenance.