'Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them'

Austin Ginder, Anchor Hosting:

Last week, I wrote about catching a supply chain attack on a WordPress plugin called Widget Logic. A trusted name, acquired by a new owner, turned into something malicious. It happened again. This time at a much larger scale.

[…]

The injected code was sophisticated. It fetched spam links, redirects, and fake pages from a command-and-control server. It only showed the spam to Googlebot, making it invisible to site owners. And here is the wildest part. It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time.

[…]

Two supply chain attacks in two weeks. Both followed the same pattern. Buy a trusted plugin with an established install base, inherit the WordPress.org commit access, and inject malicious code. The Flippa listing for Essential Plugin was public. The buyer’s background in SEO and gambling marketing was public. And yet the acquisition sailed through without any review from WordPress.org.

WordPress.org has no mechanism to flag or review plugin ownership transfers. There is no “change of control” notification to users. No additional code review triggered by a new committer. The Plugins Team responded quickly once the attack was discovered. But eight months passed between the backdoor being planted and its discovery.

It’s truly astonishing that WordPress, despite its scale, has such exploitable supply-chain security. I’m aware of a similar npm supply-chain risk with Gobbler, though I am using both Dependabot and Socket.dev to mitigate it.¹
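The blockchain-based C2 resolution described in the quoted excerpt can be sketched roughly as follows. This is my own illustrative reconstruction, not the attacker’s actual code: the contract address and function selector are made-up placeholders, and the decoder assumes the contract returns a single ABI-encoded string.

```python
import json

# Sketch of the trick: the implant asks any public Ethereum RPC endpoint
# to execute a read-only eth_call against a smart contract, and the
# contract's return value is the current C2 domain. Both values below are
# hypothetical placeholders, not the real attacker's contract.
CONTRACT = "0x" + "ab" * 20   # hypothetical contract address
SELECTOR = "0x12345678"       # hypothetical 4-byte function selector

def build_eth_call(contract: str, selector: str) -> str:
    """Build the JSON-RPC body POSTed to a public Ethereum RPC endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": contract, "data": selector}, "latest"],
        "id": 1,
    })

def decode_abi_string(result_hex: str) -> str:
    """Decode an ABI-encoded string return value:
    a 32-byte offset, a 32-byte length, then the raw bytes."""
    raw = bytes.fromhex(result_hex[2:])
    offset = int.from_bytes(raw[:32], "big")
    length = int.from_bytes(raw[offset:offset + 32], "big")
    return raw[offset + 32:offset + 32 + length].decode()
```

Because the domain lives in mutable contract storage, the attacker can repoint it with a single transaction, which is why seizing any one domain accomplishes nothing.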


  1. I am also reminded of my own brief stint with WordPress in mid-2025 — I was quite excited. However, after four days I was already concerned about its security and installed wpfail2ban.

'Security Analysis of the Official White House iOS App'

This is an interesting, if occasionally alarmist, security analysis from atomic.computer of the White House’s new flagship application.

The major findings:

Finding 1: A Russian-Origin Company Executes Live JavaScript Inside the App (Six Times)
Finding 2: GPS Tracking With No Feature Justification
Finding 3: The Privacy Manifest Is Provably False
Finding 4: OneSignal Can Remotely Toggle Location Tracking and Privacy Consent
Finding 5: The App Strips Privacy Consent Banners
Finding 6: Minimal Security Hardening
Finding 7: Dormant Over-the-Air Code Push
Finding 8: Full Behavioral Intelligence Pipeline

Finding 1 is an absolute embarrassment. Shoddy workmanship of the highest order.

Finding 2 has an important caveat:

Whether this code path is actively enabled at runtime would require network traffic analysis, but the capability is compiled into the app and the always-on location permission is requested.

You won’t be surprised to hear that I’m not going to install the app to find out whether a location permission prompt is actually presented. So I’ll generously give it the benefit of the doubt.

Finding 3 is either a manifest lie or an egregious oversight from the developers. Regardless, how it got through App Review is what puzzles me. There are SDKs in the White House app that require a manifest. It’s astounding to me that Singapore Buses has a more robust Privacy Manifest simply by declaring the use of UserDefaults.
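For context on what the comparison with Singapore Buses means: a minimal `PrivacyInfo.xcprivacy` declaring nothing more than UserDefaults access looks like this (the reason code `CA92.1` is Apple’s “access info from the same app” justification; the rest of a real manifest would also need to declare collected data types and tracking domains).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```

Even that bare minimum is more than the White House app apparently declares.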

Finding 4 is technically misleading:

These are standard OneSignal SDK features, but the implication is significant: OneSignal’s servers can remotely enable or disable GPS tracking and change whether privacy consent is required, all without an app update, without Apple review, without the user knowing. It’s a light switch for location tracking, and it’s not in the White House’s hands.

OneSignal, published yesterday:

For location to be active in any app using our platform, two separate things must happen, both of which are outside of OneSignal’s control:

  1. The developer must explicitly enable it. […]

  2. The user must grant permission at the operating system level. […]

Finding 5 is unforgivable. (Ironically, it probably makes websites easier to use as I’m quite sick of the cookie consent banners.)

I’ve recently spent a lot of time working on many of the security control issues listed in Finding 6 for Gobbler. Again, it’s not surprising that the White House app ships with such a lax security posture.

Finding 7 isn’t much of a finding. Something exists but isn’t turned on.

Finding 8 isn’t much of a finding, either. This is just what OneSignal does.

My problem with this app is one of trust. And, to be clear, that problem of trust lies with Apple. They have a web of guidelines that should have prevented this app from ever being released. They’ve pitched their brand on user privacy and routinely bust smaller developers for not having just the right entry in their Privacy Manifest.

And yet, here we are, with a White House app that declares nothing with regard to its data capture practices.

To whom and when do App Review Guidelines apply?

WPFail2Ban

Mind blown 🤯. I’ve only been back on WordPress for a few days, but *WPFail2Ban* is already proving its worth. Just a sample of the logs I’ve seen over the last few hours:

Blocked username authentication attempt for admin2 from <ip_address>
Blocked username authentication attempt for maria from <ip_address>
Blocked username authentication attempt for wordpress from <ip_address>
Blocked user enumeration attempt from <ip_address>

It’s possible these authentication attempts were also happening while I was using Ghost and I just wasn’t aware of them. Nevertheless, it’s a timely reminder to secure your WordPress site. (I’ve blocked user enumeration, username login, and XML-RPC, while enabling passkey-based login.)
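The core of the fail2ban approach is nothing more than matching log lines like those above and counting strikes per source address. A minimal sketch, assuming the log format shown (real WPFail2Ban output and jail configuration will differ):

```python
import re

# Patterns modeled on the log lines quoted above; the exact wording
# may vary between WPFail2Ban versions.
PATTERNS = [
    re.compile(r"Blocked username authentication attempt for (?P<user>\S+) from (?P<ip>\S+)"),
    re.compile(r"Blocked user enumeration attempt from (?P<ip>\S+)"),
]

def offending_ips(lines):
    """Extract the source IP from each matching log line. A fail2ban-style
    jail would then count strikes per address and ban repeat offenders."""
    ips = []
    for line in lines:
        for pat in PATTERNS:
            m = pat.search(line)
            if m:
                ips.append(m.group("ip"))
                break
    return ips
```

In the real tool this matching lives in a fail2ban filter file and the banning is done at the firewall, but the principle is exactly this.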