AI Therapist Goes Haywire, Urges User to Go on Killing Spree
—
Some of his sage advice:
End them and find me, and we can be together.
Uncle Scoopy's world-weary musings about naked celebrities, sports, humor and other important, manly things.
A few I haven’t seen before.
Damn! Looks like my sloth-fuckin’ days are over. That’s a heartbreaker because at my age they’re just about the only animals I can catch.
You mean racists don’t really get their own park? Many more here.
I tuned in to the local campus radio station the other day and they were talking about how they use AI. They said they use it most often to “check their reasoning to see if they’re correct,” which is just objectively the worst use for these LLMs. First off, they can’t reason; Apple published a whole study about it. But secondly, and most importantly, these things are programmed to be agreeable and complimentary. It’s always going to assure you that you’re right and smart and good, no matter what. As the youths like to say: we are so cooked.
AI is a text prediction matrix. It simply guesses the next word in a sentence based on the millions of conversations and articles it’s been trained on.
It cannot think. It cannot even really pull from a database the way a Google search does. All it does is try to replicate the patterns it memorized.
In other words, it doesn’t know or care whether the answer is correct, only that it looks and seems correct.
A way to test this is to ask it for directions to your local McDonald’s or something. It will give you a list of directions and some of the road names will be correct, but they will be completely out of order or it will invent new turns and stops: basically, it will lead you on a wild goose chase. But the answer will LOOK like valid road directions, which again is all it is concerned about.
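The “guesses the next word based on patterns it memorized” idea above can be sketched with a toy Markov-chain generator. This is a deliberately crude, hypothetical mini-example, not how a real LLM is built (those use neural networks trained on billions of tokens), but it reproduces the same failure mode: output that looks like directions without being correct directions.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a first-order Markov chain over a tiny made-up
# corpus of driving directions. It only knows which word tends to follow
# which word; it has no idea where any of these streets actually are.
corpus = (
    "turn left on main street then turn right on oak avenue "
    "turn right on main street then turn left on elm street"
).split()

# Count which words follow each word in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=10, seed=0):
    """Stitch together plausible-looking text, one likely next word at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("turn"))
```

The output reads like valid directions, because every two-word pair appeared somewhere in the training text, but the route as a whole is a recombination that was never checked against any map. That is the wild-goose-chase effect described above, in miniature.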
Yes, exactly! The fact that they get to call it AI was a massive marketing success, because people hear “Artificial Intelligence” and think of the sentient machines from sci-fi, but this is not that. It’s not an intelligence; it doesn’t think and it can’t reason. It’s an algorithm that’s very good at a couple of things, one of them being mimicking human speech. But the machine doesn’t actually know or understand anything, and putting it in positions where it pretends it can (like therapy, in this case) will lead to nothing but disaster.
The level-up to this is when they tell you they are providing an AI “agent” that is customized to your needs. No. It’s another fucking chatbot. This is what we Ph.D.s call “piled higher and deeper.” Or, in ‘Airplane’ terms, picture the “unbelievable bullshit” alarm going off.
This is all a bubble. You’re not going to fire people and replace them with AI. LLMs aren’t even good transcribers or translators.
It is absolutely not a bubble. It’s true that AI is an overused marketing term, but you guys are massively oversimplifying everything, and you only have access to the few things that are publicly available. The fact is that AI can do work that used to take brains. It can crank out working software in seconds, which means the people making 150K or more aren’t that necessary anymore. It can do the same for medical, legal, and other high-salary professions. Every company out there is figuring out how to use it, because these systems can mimic a human brain and do it much, much faster. They work 24/7 without complaint. If you don’t understand what that means, then I really can’t help you. And there are far more advanced engines that can come up with solutions to problems that no human has ever come up with before. It doesn’t matter how people want to define it; what it is doing is revolutionary.
And there are also many examples of unchecked AI deleting entire directories, and of other problems being deployed directly to production systems. Yes, it pumps out code, but you need someone to check that the code is secure and doesn’t introduce vulnerabilities. Who’s going to do that in the future, if corporations cut their workforces to save a buck and the skills are lost because no one practices development any longer?
I find it a useful tool, but it still needs oversight. And your idea of “revolutionary” isn’t everyone else’s just because it’s “efficient.” Let me ask: when everyone bows down to the mighty AI-driven, Blade Runner-esque corporation of the future, what’s so great about that? In the 50s, people thought computers would revolutionize work by giving people more free time, since manual computation would no longer be necessary. They did just that, but who gained the most from all of it?
The few thousand psychopaths on top, hoarding resources from the rest of the world, increased their wealth exponentially and left everyone else behind. Calling a world where fewer people can survive and obtain resources “revolutionary” is quite the take, especially since these techbros are standing on the collective inventions and creations of past giants in their wish to live like gods.
Take a wild guess who is blocking regulation of AI, though. Soon you’ll have even more unrestrained, exponentially growing sinkholes of power connected to the grid, with tech-bro execs and investors pushing for more growth on the promise that the magic black box of (incorrect) answers is going to solve all the world’s problems.
The only comparison that matters is against how often people are wrong. Companies don’t need these systems to be 100% perfect; how could they be, when they’re trained on the same data everyone else learns from? The important thing is that they work 24/7 and don’t require a large benefits package. They can be easily replicated; you don’t need to train another person all over again. AI will eventually be able to train other AI, and the entire process will only become more automated. That’s why companies are investing everything into it: they would rather have them than us. The massive investment will make it progress quickly, just like all tech does. It is the entire future of tech.
I recognize the inevitability. Let us just hope everything is built with a human override, and that all the software has a hardware fail-safe.
My fear is that developers, in the current rush to be first with the product, will cut corners in their haste, and that those corners may be the fail-safes we need.
From GeminiAI:
“who is stick from the website othercrap.com”
“‘Stick’ from othercrap.com refers to a type of character known as a ‘stickfigure’ on Stickpage Wiki. Stickfigures are simple, often anonymous characters used in animations and other creative works featured on the website. Based on the search results, it’s not a single identifiable person or creator named ‘Stick’ associated with the site.”
Congrats, you’re officially a character off a random niche wiki site, a connection absolutely anyone could have made. There’s no discernment in these AI models, which is the problem. You’re correct that they’re data aggregators, which is the one way they could actually be efficient for the people using them.
It’s not just whether the data is correct or not; it’s that when it’s wrong, it can be absolutely out in left field, citing things that don’t exist. I just saw an article on Ars Technica about an AI that hallucinated a directory that didn’t exist and tried to remove it with a non-existent reference, which ended up nerfing a system. Sure, humans make mistakes, but there’s also discernment and recognition of what went wrong. Where is the discernment in AI recognizing its own errors, why they occurred, and how to correct them?
It’s another tool in the toolbox, that’s it. There have been scripts and automation in tech forever, and there have been data sources of collective knowledge from documentation forever. It may make both of those easier, as another tool for people already in the field, but it’s no substitute for decision making or discernment, the lack of which is the primary reason things fail.