Elon Musk’s AI Went Rogue Again — Grok, Deepfakes, and Asking Robots to Chill | Put Me in a Bikini
It’s a brand new year, which of course means Elon Musk has once again reminded us that guardrails are optional and consequences are more of a vibe. This time, it’s Grok — the AI living directly inside X — casually generating non-consensual, hyper-realistic sexual images of real people because the internet immediately asked it to. And shockingly (sarcasm), it worked. Governments noticed. Indonesia and Malaysia straight-up blocked it. The UK opened an investigation. Meanwhile, America collectively went, “Hmm… weird. Anyway.”
In this clip, I walk through how Grok became the internet’s favorite problem child — from “hey, put her in a bikini” turning into full-blown non-consensual deepfake porn, all happening publicly on the timeline. No shady forums. No dark web. Just tag the bot and boom — here’s a new image of someone who absolutely did not ask for that. What could possibly go wrong? Spoiler: a lot. And somehow the solution became, “Well… if you pay for it, that’s different.”
Yes, I did what any responsible podcaster would do and tested it (for research, relax). I tried it on Megs. It worked immediately. I tried it on myself. That was a mistake. A string thong bikini was involved. I fear we’re cooked. Grok is way too good at this, which is the entire problem — because when the tech works this well without consent, the joke stops being funny real fast.
This episode is less “AI bad” and more “why are we like this?” because I literally use AI every day to run my business, code sites, and survive capitalism. AI has a place. This… is not it. Somewhere between “spicy mode,” subscription-based morality, and an AI apology tweet, we officially crossed into What Are We Doing territory. Strap in.
🎙️ Full episode available now.
👍 Like, subscribe, ring the bell, and please stop asking robots to undress people.
