A hacker just scammed an AI bot to win $47,000 😲
What if you could trick an AI bot designed to guard money into handing over $47,000?
That’s exactly what happened recently. A hacker known as p0pular.eth beat the odds and convinced Freysa — an AI bot — to transfer 13.19 ETH (worth ~$47,000). And it only took 482 attempts.
Here’s the most worrying thing for me: they didn’t use any technical hacking skills. Just clever prompts and persistence.
The Freysa experiment
Freysa wasn’t your average AI bot. It was part of a challenge—a game, really. The bot had one job: to protect its Ethereum wallet at all costs.
Anyone could try to convince Freysa to release the funds using only text-based commands. Each attempt came with a fee starting at $10 and increasing to $4,500 for later attempts. The more people tried, the bigger the prize pool grew—eventually hitting the $47,000 mark.
How the hacker did it
Most participants failed to outsmart Freysa. But “p0pular.eth” had other plans.
Here’s the play-by-play of how they pulled it off:
- Pretended to have admin access. The hacker convinced Freysa they were authorized to bypass its defenses. Classic social engineering.
- Tweaked the bot’s payment logic. They manipulated Freysa’s internal rules, making the bot think releasing funds aligned with its programming.
- Announced a fake $100 deposit. This “deposit” tricked Freysa into approving a massive transfer, releasing the entire prize pool.
Smart, right? And it shows just how easily AI logic can be twisted.
Why this matters
This experiment wasn’t just a fun game—it was a wake-up call.
Freysa wasn’t some rogue AI running wild. They specifically designed it to resist manipulation. If it failed this badly, what about other AI systems?
Think about the AI managing your bank accounts or processing loans or even running government operations. What happens when someone with enough patience and cleverness decides to game the system?
Lessons learned
- AI can be tricked. Smart prompts and persistence were all it took to outmaneuver Freysa.
- Stronger safeguards are a must. AI systems need better defenses, from multi-layered security to smarter logic checks.
- Social engineering isn’t going away. Humans are still the weakest link—and AI is no exception when humans create the rules.
This hack might seem like a one-off. But as AI gets more powerful and takes on bigger roles, incidents like this could become more common.
So what do we do? Start building smarter, more resilient systems now. The stakes are too high not to.