AI tools now let anyone in your business build a working app, dashboard, or internal tool in an afternoon — no developer required. It sounds like a productivity miracle. The problem is that the vast majority of what gets built this way is quietly riddled with security holes. And in most small businesses, nobody is checking.
This isn't a theoretical future risk. It's happening right now, in companies of every size. Employees are spinning up customer portals, time-tracking tools, internal dashboards, and data exports using AI coding assistants — and deploying them to the internet, often connected directly to your business systems and client data. The people building them mean well. They're trying to solve a real problem quickly. But good intentions don't patch a SQL injection vulnerability.
Understanding why this is happening — and what to do about it — doesn't require any technical knowledge. It just requires knowing what to look for.
Shadow IT Got a Turbo Boost
Shadow IT — employees using technology that hasn't been approved or reviewed by the business — has been a security concern for years. When cloud storage arrived, people started emailing files to personal Gmail accounts. When project management tools proliferated, teams started using Trello or Notion without telling anyone. The IT equivalent of keeping your own filing system under your desk.
AI has changed the scale of this problem completely. Before, shadow IT meant using an unauthorised app that someone else had built and secured. Now it means your team can build their own app from scratch — in an hour — using tools like ChatGPT, Claude, Lovable, Bolt, or GitHub Copilot, and deploy it live. The output isn't just an unauthorised subscription to someone else's software. It's entirely new, custom-built code running on the internet, potentially connected to your customer database, with no security review whatsoever.
Security researchers studying AI-generated code found that 78% of AI-built applications contain at least one exploitable security vulnerability. A separate study testing over 100 AI models found that 62% of their code outputs contained design flaws or known security weaknesses — including the kinds of vulnerabilities that sit at the top of every attacker's wishlist.
The tools themselves aren't malicious — they're genuinely useful, and the people using them aren't doing anything wrong. The problem is structural. AI coding assistants are optimised to produce code that works — that runs without errors, does what the user asked, and looks professional. They're not optimised to produce code that's secure. Those are two very different things, and the gap between them is where attackers live.
What "Secure" Actually Means — and Why AI Often Gets It Wrong
When someone asks an AI tool to build an application without explicitly requesting security features, the AI will typically take the shortest path to a working result. That might mean building a login page without properly hashing passwords. It might mean connecting to a database in a way that's vulnerable to a classic attack called SQL injection, where someone can manipulate the input fields to extract or destroy data. It might mean leaving API keys — essentially passwords to your third-party services — hardcoded directly in the application's code, visible to anyone who looks.
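To make SQL injection concrete, here is a minimal sketch in Python using the built-in sqlite3 module. The table, data, and function names are made up for illustration; the vulnerable version pastes user input straight into the query text, while the safe version uses a parameterised query.

```python
import sqlite3

# Toy in-memory database with made-up data, purely for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")

def find_user_unsafe(name):
    # VULNERABLE: user input is pasted straight into the SQL string, so
    # input like "' OR '1'='1" rewrites the query and returns every row
    query = f"SELECT email FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: a parameterised query treats the input as data, never as SQL
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

attack = "' OR '1'='1"
print(find_user_unsafe(attack))  # every row leaks -- the attack works
print(find_user_safe(attack))    # [] -- the same input returns nothing
```

The one-character difference — a `?` placeholder instead of string formatting — is exactly the kind of detail an AI tool will skip unless asked.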
These aren't obscure or exotic vulnerabilities. They're the basics. They appear in the OWASP Top 10 — the security industry's definitive list of the most common and dangerous flaws in web applications — year after year. Experienced developers know to check for them instinctively. AI tools, prompted by someone who just wants to build something quickly, often skip them entirely.
1. No login, or a broken one
In a rush to get something working, AI-generated apps frequently skip authentication entirely — or implement it so poorly it may as well not be there. Security researchers have found cases where AI-built admin panels were fully accessible without any login at all. Others had login pages that didn't properly check whether the person logging in was who they claimed to be, or that let users access other people's data simply by changing a number in the web address. One tool in the research had no limit on how many times you could attempt a password — meaning anyone could try millions of combinations until they got in.
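The "change a number in the web address" flaw (known in security circles as an insecure direct object reference) can be sketched in a few lines. The record data and function names here are hypothetical; the point is the missing ownership check.

```python
# Stand-in for a customer database -- entirely made-up data
RECORDS = {
    1: {"owner": "alice", "note": "Alice's invoice"},
    2: {"owner": "bob", "note": "Bob's invoice"},
}

def get_record_unsafe(record_id, logged_in_user):
    # VULNERABLE: trusts the id taken from the URL; any logged-in user
    # can read any record just by changing the number
    return RECORDS.get(record_id)

def get_record_safe(record_id, logged_in_user):
    # SAFE: confirms the record actually belongs to the requester
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != logged_in_user:
        return None  # behave as if the record doesn't exist
    return record

print(get_record_unsafe(2, "alice"))  # Bob's data leaks to Alice
print(get_record_safe(2, "alice"))    # None -- access denied
```

The fix is a single ownership check per lookup, but someone has to know to ask for it.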
2. Your customer data, sitting in plain sight
One of the most common patterns in AI-generated code is the accidental exposure of sensitive information. Database connection strings — the credentials your application uses to connect to your data — get left in the code itself. Internal system addresses get included in API responses. One healthcare startup discovered that a ChatGPT-generated endpoint was leaking its database credentials and internal service URLs to anyone who knew where to look. It had been live for eleven months before anyone noticed.
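The hardcoded-credential pattern, and its standard fix, looks roughly like this. The connection strings below are invented for illustration; the safer version reads the secret from an environment variable set by the deployment environment, so the source code never contains it.

```python
import os

# VULNERABLE pattern: the credential ships inside the source file,
# visible to anyone who can read the code
# (this connection string is entirely made up for illustration)
DATABASE_URL_HARDCODED = "postgres://admin:S3cretPass@db.internal:5432/app"

def get_database_url():
    # SAFER pattern: read the secret from the environment at runtime,
    # so the code can be shared or published without exposing it
    url = os.environ.get("DATABASE_URL")
    if url is None:
        raise RuntimeError("DATABASE_URL is not set; refusing to start")
    return url

# Demo only: in real use, the deployment environment sets this value
os.environ["DATABASE_URL"] = "postgres://app_user:placeholder@localhost/app"
print(get_database_url())
```

Note that the safe version refuses to start rather than silently falling back to a default — a missing secret should be loud, not quiet.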
3. Sensitive data sent to the AI in the first place
The security risk doesn't begin only when the code is deployed. It starts the moment someone pastes company data into an AI tool to ask for help. Research suggests that 48% of employees have uploaded sensitive company or customer data into public AI platforms. In one of the most well-known incidents, Samsung engineers shared confidential source code with ChatGPT while asking for a code review — the company subsequently banned generative AI tools entirely. Free-tier AI platforms typically use submitted queries to improve their models, which means data shared with them may not stay private.
4. Code that scales your attack surface faster than you can track it
Traditional shadow IT was slow. Someone had to find a tool, sign up, and start using it. AI-generated applications can be built and deployed to the internet in under an hour. Security teams — where they exist — simply cannot review code as fast as AI can produce it. Research tracking AI code across organisations found that AI-generated code was introducing more than 10,000 new security findings per month — a tenfold increase in just six months. For a small business without a dedicated security function, there's no review process at all. The attack surface grows every time someone with a good idea and a ChatGPT account decides to build something.
A Real-World Example: The Lovable Audit
To understand how widespread this is, consider what happened when researchers analysed applications built with Lovable — one of the most popular AI app-building tools — deployed by Swedish companies. Of 1,645 live applications reviewed, 170 contained exploitable vulnerabilities. That's roughly one in ten apps, serving real users, with real data, containing security flaws that an attacker could exploit. The most common issues were SQL injection and cross-site scripting — both well-understood, well-documented, and entirely preventable with basic security practice. None of these applications had gone through a security review before going live.
This wasn't a study of careless or technically unsophisticated companies. It was a snapshot of how a genuinely useful tool gets used in the real world, at speed, without anyone stopping to ask whether the output is safe.
One in five organisations has now experienced a breach directly linked to AI-generated code. For a small business, the fallout is rarely abstract: it means days of disruption, a mandatory 72-hour report to the ICO under GDPR, the task of notifying clients that their data was exposed — and all of it triggered by something a team member built to save time.
Why This Hits Small Businesses Differently
In a large organisation, there are processes — however imperfect — designed to catch this kind of thing. Code review requirements, security scanning tools, access controls that limit what a single employee can deploy. Those processes are frustrating and slow, which is precisely why people try to work around them. But they do catch things.
In a small business, those processes usually don't exist. There's no one whose job it is to review what gets built and deployed. There's no systematic way to even know what applications are running, let alone whether they're secure. The person who builds something using an AI tool is typically also the person who decides it's ready to go live — which creates an obvious blind spot.
Small businesses also tend to hold exactly the same kinds of data that make a breach costly: client records, financial information, personal data subject to GDPR obligations. A broken login screen on an AI-built internal tool isn't just an embarrassment — it's a potential data breach with a 72-hour reporting clock, a fine exposure, and reputational damage that a small business can ill afford.
What You Can Actually Do About It
None of this means you should ban AI tools — that battle is already lost, and the productivity gains are real. What it means is that you need a light-touch process around how AI-built tools get from someone's laptop to being live and connected to your business systems.
- Know what's running. The first step is simply knowing what applications exist. Ask your team what they've built or deployed using AI tools — not as a gotcha exercise, but as a genuine audit. Most people will tell you, especially if the conversation is framed as "we want to make sure the things you've built are secure" rather than "we want to stop you building things."
- Never connect AI-built apps directly to production data. Any application built quickly and without a security review should use a sandboxed or read-only copy of your data at most — never direct access to your live customer database or core business systems. This one rule limits the blast radius of any vulnerability that slips through.
- Set a clear policy on what data goes into AI tools. Your team needs to know — in plain terms — that customer data, financial records, internal credentials, and source code should not be pasted into public AI platforms. This isn't about limiting what AI can do; it's about where the data ends up. Enterprise tiers of most AI tools include data retention controls; if your team is using free tiers, that's a conversation worth having.
- Make security a prompt, not an afterthought. If someone in your team is building something with an AI tool, the single most effective intervention is asking them to include security requirements in the prompt. "Build this with authentication, validate all inputs, and don't hardcode any credentials" takes five seconds to type and dramatically changes the output. AI tools do produce more secure code when explicitly asked to — the problem is that most people don't ask.
- Have someone review before it goes live. This doesn't have to be expensive or formal. It means that anything touching company data should be looked at by someone — internally or externally — before it's deployed to the internet. Even a basic review catches the worst offenders: missing logins, exposed credentials, no rate limiting.
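Even the review step can be partly automated. The sketch below is a deliberately rough first-pass scanner — a few regular expressions, not a substitute for a proper secret-scanning tool — that flags lines which look like they contain a hardcoded credential. The patterns and sample code are illustrative assumptions, not an exhaustive rule set.

```python
import re

# Very rough patterns for credentials that should never appear in source.
# A sketch only: real secret scanners use far more thorough rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)postgres(ql)?://\w+:[^@\s]+@"),  # URL embedding a password
]

def scan_source(text):
    """Return line numbers that look like they contain a hardcoded secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

# Made-up sample source with two planted problems
sample = '''
API_KEY = "sk-live-abc123"
timeout = 30
DB_URL = "postgres://admin:hunter2@db.internal/app"
'''
print(scan_source(sample))  # → [2, 4]
```

A reviewer armed with even this level of tooling catches the single most common flaw in minutes.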
The Bigger Picture
The security industry has a phrase for what's accumulating right now: security debt. Every AI-generated application deployed without review is a liability sitting on your balance sheet, waiting to be found. Unlike financial debt, it doesn't accrue interest gradually — it pays out all at once, the moment an attacker finds the open door.
The speed at which AI tools allow software to be built is genuinely remarkable, and the productivity benefits for small businesses are real. But that speed comes with a responsibility that most businesses haven't yet caught up to. The tools that let your team build in an afternoon were not designed with your security posture in mind. That part is still your job.
The good news is that a small amount of process goes a long way. You don't need a security team or an enterprise budget. You need visibility into what's being built, a clear policy on data handling, and a basic review step before anything goes live. That's it. The businesses that will struggle are the ones that assume the AI handled security — because it didn't.
See What Threats Are Active Right Now
Faradome Dash is a free live threat dashboard for small businesses — filter real-time CVE and CISA alerts by the software your team actually uses, with plain-English action steps. No sign-up required.
Open Threat Dashboard → Talk to Us