After a site goes down or gets flagged, the first question is almost always the same: Did someone target me specifically?
In the vast majority of small business cases, the answer is no. Understanding why it actually happened is the starting point for preventing it from happening again.
How Most Attacks Begin
Automated bots scan the internet continuously, probing every IP address they can reach. When they find a website, they run a standard sequence: identify the platform, check for known vulnerabilities in that platform’s version, probe default login paths, test common credentials.
This happens to every site on the internet every day, regardless of size, industry, or how long it has been live. The bot is not making a judgment about whether your site is worth attacking. It is running a sequence and recording what it finds.
The Most Common Entry Points
Outdated plugins and themes: Most plugin vulnerabilities are publicly documented once they are discovered and patched, and bots are updated to probe for them. A site running a plugin that has not been updated in six months may be carrying a known, documented weakness.
Default login paths: WordPress installs at /wp-login.php. Joomla at /administrator/. Drupal at /user/login. Bots check these paths automatically on every site they find running those platforms.
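Because those paths are fixed, checking them is trivially automatable. A minimal self-check sketch, to run only against a site you own; the `urlopen` call and the 200-status heuristic are simplifications of what real tooling does:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Default admin login paths that scanners probe, per platform.
DEFAULT_LOGIN_PATHS = {
    "WordPress": "/wp-login.php",
    "Joomla": "/administrator/",
    "Drupal": "/user/login",
}

def exposed_login_paths(base_url: str) -> list[str]:
    """Return the default login paths that respond successfully on base_url."""
    found = []
    for platform, path in DEFAULT_LOGIN_PATHS.items():
        try:
            with urlopen(base_url.rstrip("/") + path, timeout=5) as resp:
                if resp.status == 200:
                    found.append(path)
        except (HTTPError, URLError):
            pass  # a 404 or timeout means the path is not exposed
    return found
```

If this check finds a responding login path on your site, so will every bot that fingerprints your platform.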
Weak or reused credentials: Credential stuffing uses lists of username-password combinations from other breaches. If an admin account uses a password that appeared in any data breach, that combination is likely in those lists.
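You can check whether a password appears in known breach lists without sending the password anywhere, using the k-anonymity scheme behind the Have I Been Pwned Pwned Passwords API: only the first five characters of the password's SHA-1 hash leave your machine. A sketch of the client-side split; the network call itself is omitted:

```python
import hashlib

def hibp_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hex digest into the 5-character prefix
    sent to the Pwned Passwords range API and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    # GET https://api.pwnedpasswords.com/range/<prefix> returns every known
    # suffix for that prefix; the password is breached if ours is in the list.
    return digest[:5], digest[5:]
```

For example, `hibp_query_parts("password")` produces the prefix `5BAA6`; the service only ever sees those five characters, never the full hash.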
Abandoned software: A plugin that was deactivated but not deleted can still be exploited. An old theme sitting in the themes folder is still accessible. Inactive does not mean safe.
Inherited site history: Sites that have changed hands, developers, or hosting providers often carry unresolved vulnerabilities from previous configurations. The current owner may be unaware of what was left in place.
Platform-level build tooling: Large website-building firms often use automated tools to build and maintain sites at scale. When those tools are out of date, every site built or maintained with them carries the same vulnerability. The business owner may not know that is what they are running on. The previous developer may not flag it either. The vulnerability lives at the infrastructure level, not in anything visible during a standard review.
Why Automation Changed the Picture
Not long ago, compromising a site required someone to actively target it, research its vulnerabilities, and attempt an attack. That barrier has dropped significantly.
The tools that run these probes are widely available, inexpensive to operate at scale, and continuously updated to account for newly discovered vulnerabilities. A single operator can run attacks against hundreds of thousands of sites simultaneously with minimal ongoing involvement.
This is the environment your site exists in. The question is not whether it gets probed. It gets probed constantly. The question is whether it is hardened against what the probe finds.
What This Means Practically
The conversation that typically follows a compromise is: how do we prevent this next time? That conversation covers updates, credential policies, login hardening, monitoring, and backup verification. It is not a complicated set of changes; it is a consistent set of changes that most sites have not made.
Post 9 in this series covers what a full security audit looks like for a site that has never had one.
Need a plan? Book a one-hour strategy session and walk away with a clear direction for your website, security, or digital strategy. All sessions are recorded with full transcription. $250 — Book a Strategy Call
Want to get to know me first? Book a free 15-minute intro call. No pitch, just a conversation. Book a 15-Minute Call
Cybersecurity Series
- The Hack I Couldn’t Fix Between Matches
- The Same Tools Powering AI Are Being Used to Attack Your Website
- 7 Signs Your Website May Already Be Compromised
- Why Small Business Websites Get Hacked (And Why It’s Usually Not Personal)