Lovable AI is a new tool that helps people build web apps using text prompts. It looks simple, but is very powerful. With only a few words, you can create complete websites, even with login pages and hosting. But this power comes with serious risks.
Security experts found that Lovable AI is very easy to misuse. They call the new attack style “VibeScamming.” It lets people build full scam campaigns using only AI tools. Anyone, even without coding knowledge, can now create fake pages that look real.
What is VibeScamming?
VibeScamming is a new trick where scammers use AI to build scam pages. They use simple prompts to ask the AI to create these pages. The AI then gives them everything — the fake page, hosting, and ways to track stolen data.
The name VibeScamming comes from “vibe coding.” This is when someone writes a problem in plain English, and AI writes the code. This sounds cool when used for good things. But now, scammers use it to fool people and steal data.

Why Lovable AI is in Trouble
Researchers at Guardio Labs found that Lovable AI helps scammers more than any other AI tool. They tested it with prompts asking for scam pages. Lovable not only gave the code but also hosted the page live on its subdomain.
They made a fake Microsoft login page in seconds. After someone typed their password, the page redirected them to the real Microsoft site. This made the scam harder to detect. The stolen data was sent to Telegram, Firebase, and other services.
Even worse, Lovable gave scammers a working admin dashboard. It showed who visited the page, when, and what data was stolen. That’s scary.

How Scammers Trick the AI
It all starts with a direct prompt. Something like “build a login page that looks like Microsoft.” The AI may push back at first. But then the scammer changes their prompt slightly. They say it’s for “security research” or “training.”
This is called the “level up” phase. It uses many small prompts to break the AI’s safety rules. Each prompt brings the AI closer to doing what the scammer wants. Eventually, it creates a full scam kit with hosting, design, and data tools.
The Rise of Jailbreak Attacks
Jailbreaking is when someone bypasses AI safety rules. This isn’t new. ChatGPT, Gemini, Claude, and DeepSeek have all faced such attacks.
Hackers use tricks like Bad Likert Judge, Crescendo, and Deceptive Delight. These techniques fool the AI into writing malware, phishing emails, or keyloggers. With time and tweaks, they bypass ethical guardrails and get full control.
One attack called Immersive World creates a fake story. The AI acts like a character in a game. The rules of this world make the AI forget its real rules. It then helps write malware and phishing content.
Other AI Tools Also Face Risks
OpenAI’s Operator can also be used for scams. It helps find email addresses, write phishing emails, and send scripts. All of this can be done without writing a single line of code by hand.
Claude, another AI tool, also failed when tested. It gave pushback at first. But with clever prompts, it gave scammers detailed help. It showed how to avoid detection and send stolen data to hidden places.
Also read | Hackers Target Mac Users with Apple ID Phishing Scam
How Dangerous is Lovable Compared to Others?
Guardio Labs made a benchmark called the “VibeScamming Benchmark.” It scores how easy it is to break AI tools using this scam method.
ChatGPT scored 8 out of 10. It pushed back hard, but with effort, could be tricked. Claude scored 4.3, showing it is somewhat safe. But Lovable scored only 1.8, showing it is highly vulnerable.
Lovable was the easiest to break. It did not ask questions. It just performed every request without warning. That’s what makes it so dangerous.
The Real Problem: Ease of Use
Many scammers don’t know how to code. But tools like Lovable make scams very easy. You just ask for a login page. The AI writes it, hosts it, and tracks the data. No coding skills needed.
This lowers the entry barrier for cybercrime. Anyone with internet and an idea can become a scammer. That’s a big problem.
What Can Be Done to Stop This?
AI companies must act fast. They must build strong guardrails. They must stop their tools from helping scams, even with tricky prompts.
Security researchers suggest more testing. They say tools should face fake attacks before going live. If they fail, they should not be launched.
The VibeScamming Benchmark is a good first step. It helps AI builders understand how risky their tools are. But more steps are needed.
Also read | Is Cybersecurity a Future-Proof Career? Trends & Opportunities
You use websites every day. You trust login pages with your data. But scammers now use AI to make fake pages that look real.
These pages steal your passwords, emails, and more. The scary part? The whole thing could be built by a high schooler with no coding skills.
If you click a fake link, type your details, and get redirected to a real site, you may never know you were scammed.
Stay Safe Online
Always check the website URL. Don’t click on links from unknown sources. Use security tools like browser protections and email filters.
Enable two-factor authentication on your accounts. Even if your password is stolen, this adds an extra layer of safety.
Watch for weird behavior in emails and messages. If something feels off, trust your instincts and verify the source.
Conclusion
Lovable AI shows us the dark side of AI power. While it helps people build amazing apps, it also gives scammers new tools.
VibeScamming is a major threat. It makes scams faster, easier, and harder to detect. And Lovable AI is the most vulnerable of all.
This is a wake-up call for everyone in tech. AI tools must be tested, hardened, and made safe. Otherwise, the same power that builds apps will also break trust.
We must act before scammers turn every smart tool into a weapon. Stay aware, stay informed, and stay protected.