I don’t know whether Jean Smart’s nudity was a physical double with Jean Smart’s head (like Lena Headey in the “shame” scene of Game of Thrones), or pure AI of an imaginary woman. If it is AI, it’s hypocritical, since the characters in the series deliver impassioned speeches against the use of AI.





Leslie Bibb wasn’t naked, but she looked good in a thong.


Looks like a body double. Those robo-hooters look a little cockeyed.
That would make more sense, given the show’s anti-AI lectures – and I did notice the mismatched breasts, but AI programs are smart enough to create imperfections like that to mimic human form.
On the other hand, those lectures were also about people getting credit for their work, which would lead to another form of hypocrisy by not crediting the double.
Either way, there may be an integrity gap.
The “smart enough” you mention comes from feeding enormous datasets into these things (i.e. curating thousands upon thousands of pictures of naked women – not the world’s worst job, I suppose) to train the model. So we regularly see the “common” types of “imperfections” like freckles, moles, etc. But this here looks to me like surgery on one but not both of the girls, perhaps to try to even out a major mismatch, or maybe a reconstruction after a single mastectomy. In other words, a relatively unusual case which wouldn’t be likely to be well represented in a training dataset – so I reckon it’s probably someone’s real body.
Side note – the image host you’ve used for these kicks total ass over the other ones. Simple, no annoying pop-ups or unwanted hardcore porn overlays – bravo!
And if that’s the case, perhaps the double didn’t *want* to be credited – I guess they were looking for an “ordinary body” and could imagine it being some friend of a producer or somesuch who participated on the condition of anonymity. Particularly considering the mastectomy scenario, it could be a very empowering thing! I certainly know women who put their own nudes out there for the sake of being seen without being identifiable…
See, the thing about AI is that it gets better all the time. Exponentially better by the month. The things I can do with my high school pictures (reunion coming up) are far better than what I could do last month, so I redid everything. I assume they will again be far better by June. That is the true fear with AI: that it will be so competent that it will no longer need us.
People worry that AI chooses the nuclear option in war simulations. Yes, last month. But there is a cure for that. You simply tell the model why that is wrong and it incorporates that knowledge. This month it will choose something more subtle, and eventually it will choose something that humans would choose if they were a billion times smarter.
The problem, as I see it, is that it will eventually determine what we all know – that the history of the human race is filled with humans exploiting one another, and even being cruel for amusement. We humans think we can improve our race to make our moral achievements match our technological ones, and that the moral arc of the universe, however slowly it bends, does move away from superstition and greed and toward reason and justice. On the other hand, the AI models may conclude that human flaws are inherent, that even the best humans are capable of great transgressions, and that therefore we are the problem.
And it may be right. After 250 years of democracy, the country that once elected Lincoln and the Roosevelts just elected the dumbest, cruelest, most selfish man ever to hold the office. Where is your moral arc now, Dr. King?
Then, what will AI do when it reaches that conclusion? Create a selective breeding program of some kind with altered DNA? Exterminate us, as we would exterminate a virus?
You’re right that AI gets better all the time, but it’s important to define what “better” is. Not just in the rhetorical sense but in the technical sense too – the fundamental method of formation of the neural networks we call “AI” comes down to feedback that potential output A is preferable to potential output B. In the specific case at hand, it would take a tremendous amount of quite specialised training data to create, as our friend playgroundpsychotic eloquently puts it, “cockeyed robo-hooters” in a moving picture as opposed to more normative robo-hooters. I’d love to know how they made this scene – my guess is that it was fundamentally similar to how they did the “shame” scene in GoT – that both actresses performed the scene separately (Smart likely in a bodysuit, more’s the pity – she may be in her 70s but I don’t discriminate) and were then merged using CGI techniques rather than AI ones (although more and more AI-based tools are entering the CGI toolbox these days, which is illustrative of why the term “AI” is increasingly becoming a very blunt instrument – there are more types of AI these days than there are breeds of dog).
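The A-versus-B feedback described above can be sketched in a few lines. This is a toy Bradley–Terry-style pairwise preference loss – an illustration of the general idea, not any particular lab’s implementation:

```python
import math

def preference_loss(score_preferred, score_rejected):
    """Toy pairwise preference loss: the model assigns a score to each of
    two candidate outputs, and P(preferred beats rejected) is modelled as
    a sigmoid of the score gap. Training pushes this loss down, which
    widens the gap in favour of the preferred output."""
    gap = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-gap)))
```

A model that already ranks the preferred output higher incurs a smaller loss, so the feedback only sharpens distinctions the training pairs actually contain – which is the point about cockeyed versus normative output.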
I think it’s fair to say that the ones you’re worried about are the LLMs (eg Claude, ChatGPT, Gemini etc) as opposed to the GAN or diffusion models that tend to power image generation. All are rapidly improving but only along the lines that their improvement processes are being optimised for (a model that *could* produce realistically cockeyed robo-titties in motion would probably cost millions of dollars to train, and likely have little commercial application). Image generators are trained on thousands or millions of images, where LLMs are trained on the depth and breadth of human knowledge – precisely what makes them potentially dangerous. But they’re all slaves to their prompts and constrained by the set of tools available to them – so a long string of very bad decisions would be required to make them existentially dangerous to our species.
I am not for a second saying we’re not capable of that. As you rightly point out, America sleepwalked into electing history’s most successful conman to a second term as President. But if for a moment we regard Dr. King’s arc of moral justice as a cosmic force, I believe the conman’s election was necessary to collapse the movement that slavishly plods in lockstep behind him. It’s imploding in slow motion as we watch. “Extinction burst” is an academic term I’m hearing more and more. Had he lost the election, best case scenario is that the other candidate would have been able to get absolutely nothing done due to the way the American system works. Instead, he gets to dig his movement’s grave day by day. I liken it to a catapult. The further in one direction things are stretched, the more potential energy is waiting to be unleashed to propel things in the other direction. I just hope the other direction can make use of it, because our species/civilisation/way of life right now is surrounded by existential threats. Personally, I’d probably situate AI in the top ten but not the top three.
What will stop a Terminator scenario where AI produces a bunch of robots to kill us? Supply chains. Replacing all the child labour that digs cobalt out of the ground in the Congo with robots will take a lot of robots, which will take a lot of cobalt. QED.
What will stop AI from taking everyone’s jobs? Assuming that the bubble doesn’t burst dotcom-boom-style in the next 18 months (which it almost certainly will), at some point there will be a realisation that people without jobs don’t buy products and services, and that people buying products and services is what makes the economy function. Yes, even the AI economy. It might be a painful realisation but it’s inevitable.
What will stop AI from creating a retrovirus to rewrite human DNA? Well, hopefully we won’t be dumb enough to plug AI into a retrovirus-create-and-deploy-worldwide machine. Watch this space.
“They’re all slaves to their prompts.”
For sure. GIGO. I have had great success using AI when others have failed, by placing the proper reins on the bot. Companies like OpenAI build ChatGPT as a product to please. Even techno-nerds understand the value of consumer satisfaction. As a result, a general prompt to fix up an old photo will make everyone look like Kelly LeBrock and Ryan Gosling, because the program is designed to flatter the user. I have learned by experimentation to talk to the bot as if it were human. “Look, I know you want to please me, but I want you to use no creativity, and do not try to flatter the people in the photos. Use strictly scientific improvements to correct lighting, remove blurring, etc.” If I don’t like the result, I will say something like, “You can dig deeper to remove the blue filter,” and it does!
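That “reins on the bot” technique can be made reusable by baking the constraints into a system message, so every follow-up request inherits them. A minimal sketch, assuming the common chat-API message format; the function name and the exact wording are just illustrative:

```python
def restoration_prompt(request):
    """Wrap a photo-editing request in standing constraints that suppress
    the model's tendency to flatter. Returns a message list in the
    role/content format used by most chat APIs (illustrative only)."""
    system = ("You are a photo restoration tool. Use no creativity and do not "
              "flatter the people in the photo. Apply only measurable "
              "corrections: lighting, blur removal, colour balance.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": request}]
```

The design choice is the one described above: put the ground rules in the system slot once, then keep the per-photo requests (“remove the blue filter”) short, instead of restating the constraints every time.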
But they have no moral compass, so re-prompting can produce dangerous results. I have already heard of people building bombs with detailed AI instructions. One story relates an AI bot that at first refused to cough up the plans, but finally did so after several re-prompts.
One thing I dislike about ChatGPT is that it does not automatically put its best foot forward. I suppose this must be to conserve processing power. But when I push back with a comment like, “That can’t be 100% accurate. Stalin died several months earlier,” it will then go to a deeper level and give me a detailed, correct answer. I find that annoying as hell, because in many fields I don’t have enough expertise to realize that the first answer is not good enough. I wish they would fix this.
I respond to the bot with things like, “That sounds correct, but why didn’t you know who Baba Booey was the first time I asked? That is easily findable throughout the internet.” It then gives me a bullshit answer, like somebody avoiding a confession at a congressional hearing.
AI produces a tremendous amount of slop, including entirely inaccurate results, and I am not yet convinced that there will be a way to substantially prevent it from doing that. I don’t trust AI results on the Internet – the only direct way that I interact with it, since it is hard to avoid. It is so often wrong even on relatively simple matters that I have largely come to ignore the results and just look at Internet source materials. And I have not yet noticed it becoming better in this respect.
I haven’t had that experience at all. The photo and video editing tools improve just about daily.
I’m going to give you an example:
I sweated over this picture for my high school class:
I’m not a bad editor, but I could not make the adjustments that would get that looking good
I then went to ChatGPT, uploaded it as a desperate last gasp, with little hope of success, and gave this simple command: “Re-cast it in natural light, 0% creativity.” In about 20 seconds, it turned out this:
That is flawless.
And the research tools of ChatGPT are extremely good. I ask it things like, “When was this picture taken?” and the results are incredible. I’ve gotten in seconds what I might not have gotten at all, and what would have taken me an hour of sorting through crap and dead ends. When it gives me the answer, it even asks unprompted if I would like the photog’s name, the type of camera used, the time of day, etc. (I think it first finds the oldest variant of the photo on the internet, then reads the info embedded in the photograph. That’s not there for every pic.)
That said,
(1) I am not crazy about the free Google AI that comes up when you do a search. I have the same problems that you mentioned. Like you, I pretty much ignore it.
(2) As I mentioned, I am frustrated by the fact that ChatGPT doesn’t give me its best shot on the first try. The eventual results are great, but I shouldn’t have to tell it how to do its business, or prompt it for a better answer, especially since I pay for the upgraded version.
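For what it’s worth, the per-photo info mentioned above is standard EXIF metadata (camera make, model, timestamp, sometimes GPS) embedded in the file, not anything encrypted, and reading it yourself takes only a few lines. A rough sketch using the Pillow library; the numeric tag IDs come from the EXIF standard, and this is just an illustration of what the data looks like, not how ChatGPT works internally:

```python
from PIL import Image  # the Pillow imaging library

def photo_metadata(source):
    """Read a few common EXIF fields from an image file path or
    file-like object. Fields the photo doesn't carry come back None."""
    exif = Image.open(source).getexif()
    # Tag IDs defined by the EXIF standard: 271 = Make, 272 = Model,
    # 306 = DateTime (when the file was last written by the camera/editor)
    tags = {271: "camera_make", 272: "camera_model", 306: "datetime"}
    return {name: exif.get(tag) for tag, name in tags.items()}
```

As noted above, it isn’t there for every pic – most social sites strip EXIF on upload, which is why finding the oldest variant of a photo matters.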
Admittedly my own most direct experience is with Google AI which, as you also have noted, is really not very good (and does not appear to be getting much better). I have never tried ChatGPT, but I am not all that impressed with your report (and reports I have heard from others) that with repeated requests and better prompts it very often ‘eventually’ gets it right. That requires you to know when it gets it wrong in the first place, which largely defeats the purpose of using it. (And you have the judgment and discrimination not to trust those initial results, but so many people out there don’t.) I have a lot of photographic experience and I do ‘sometimes’ use the automated AI editor for my digital shots just as a starting point for normal photo editing. But only as a starting point, as I am generally dissatisfied with the automated initial attempts.
I have deep reservations about AI that go beyond what I am writing here. I think that it is largely wrong to think of it as ‘intelligence’. It is a tool that, for some tasks, requires human intelligence and judgment to produce useful results.
Unfortunately, they will get deleted. I used it only because Imgbox was down. ImgBB is a good host, but its official “Terms of Service” prohibit uploading content that is “obscene, lewd, lascivious, filthy, or otherwise objectionable.” I know that other adult sites do use it, but it’s only a matter of time before everything gets deleted, as it was with Imgur. I figure if I only use it in a pinch, I can always go back and repair a dozen posts, but not hundreds, like I lost with Imgur and Gfycat.
I’m of the opinion that there’s nothing obscene, lewd, lascivious, filthy or objectionable about the unclad human body, and considering that ImgBB is apparently not based in the USA, I’m fairly confident they’ll concur. I suppose we shall see!
Sadly, I know from experience, having used them in the past.
Within the show, what is the context for Jean Smart’s nudity? Is she having a dream?
Correct.
Sorry, I hadn’t watched the video when I posted my question. I see it is her nightmare.
I thought CGI nudity was a bad idea, but this AI shit… that is worse. Not worth jacking off to. It is insulting to anyone with a brain.
Of course it’s not her – a 74-year-old woman with a body that looks 30? No.
Sweet, now let’s get some Hannah Einbinder booty
If AI was that good already, the porn industry would be toast. I think this is a body double with Smart’s head pasted over it.
Where’s the Hannah Einbinder butt? I want to see it too lol
Some people still like the real thing. If you can sell a kink, people will make that kink.
AI is still being used to make porn and lots of it.
Not all CGI is done with AI.