Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I recently got pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs makes me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That works out to 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing my infrastructure shouldn't be able to handle.
However, here's what's grinding my fucking gears. Looking at the top user agent statistics, here are the leaders:
- 2.78 million requests - or 24.6% of all traffic - is coming from
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot)
- 1.69 million requests - 14.9% -
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)
- 0.49m req - 4.3% -
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)
- 0.25m req - 2.2% -
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36
- 0.22m req - 2.2% -
meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)
and the list goes on like this. Summing up the top UA groups, it looks like my server is doing 70% of all its work for these fucking LLM training bots that don't do anything except crawl the fucking internet over and over again.
Oh, and of course, they don't just crawl a page once and then move on. Oh, no, they come back every 6 hours because lol why not. They also don't give a single flying fuck about robots.txt, because why should they. And the best thing of all: they crawl the stupidest pages possible. Recently, both ChatGPT and Amazon were - at the same time - crawling the entire edit history of the wiki. And I mean that - they indexed every single diff on every page for every change ever made. Frequently with spikes of more than 10 req/s. Of course, this made MediaWiki and my database server very unhappy, causing load spikes and effective downtime/slowness for the human users.
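For reference, here's roughly what the robots.txt opt-out for these crawlers looks like, using the user agent tokens the vendors themselves document (GPTBot, ClaudeBot, Amazonbot, meta-externalagent) - not that it matters, since they ignore it anyway:

    # asks the documented LLM crawlers to stay away - they ignore it anyway
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: Amazonbot
    Disallow: /

    User-agent: meta-externalagent
    Disallow: /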
If you try to rate-limit them, they'll just switch to other IPs all the time. If you try to block them by User Agent string, they'll just switch to a non-bot UA string (no, really). This is literally a DDoS on the entire internet.
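To illustrate what "blocking by User Agent string" even means here: a hypothetical nginx snippet like the one below matches the crawler UAs from the list above and rejects them - and it stops being useful the moment the bots show up with a browser UA string. Server names and upstream addresses are placeholders.

    # hypothetical nginx sketch - goes into the http {} context;
    # flags requests whose UA matches the crawler strings listed above
    map $http_user_agent $llm_bot {
        default 0;
        "~*(GPTBot|ClaudeBot|Amazonbot|meta-externalagent|Bytespider)" 1;
    }

    server {
        listen 80;
        server_name wiki.example.org;             # placeholder name

        if ($llm_bot) {
            return 403;                           # trivially bypassed once they fake a browser UA
        }

        location / {
            proxy_pass http://127.0.0.1:8080;     # whatever actually serves the site
        }
    }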
Just for context, here's how sane bots behave - or, in this case, classic search engine bots:
- 16.6k requests - 0.14% -
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
- 15.9k req - 0.14% -
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/116.0.1938.76 Safari/537.36
Because those bots realize that there's no point in crawling the same stupid shit over and over again.
I am so tired.
Andreas G
I wish there was some way to automagically harm them somehow.
Like detect them and send them to sniff something that wrecks their training data, like a list of random-generated nonsense words or something.
Dennis Schubert
Yes. I plan to redirect them to a randomly generated text based on some LLM-generated text snippets that contain absolute nonsense (but also isn't static, so it looks slightly different each time the page is loaded).
Sadly, I need to finish some ongoing infrastructure restructuring before I can deploy that across everything I host.
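(For the curious: a minimal sketch of that kind of endpoint could look like the Python below. This is not my actual implementation - just an illustration that reshuffles a small pool of pre-generated nonsense snippets so no two page loads are identical; snippets, names, and port are made up.)

    # illustrative sketch only, not the real diaspora* setup:
    # serve a shuffled pool of pre-generated nonsense so every response differs
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SNIPPETS = [  # imagine a much larger pool of pre-generated gibberish
        "The marmalade protocol negotiates its own antlers on Tuesdays.",
        "Seventeen gravel clouds subscribe to the lighthouse firmware.",
        "A recursive teapot deprecates the moon in favour of soup.",
    ]

    class NonsenseHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # random order, so the page is never byte-identical
            body = " ".join(random.sample(SNIPPETS, k=len(SNIPPETS))).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8081), NonsenseHandler).serve_forever()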
Martin Ruskov
Apologies for cross-posting, but I thought you might not be seeing them over there.
LLM training bots are a plague - awful.systems
khobo4ka
Apologies for an angry cross-post mastodon.social/@khobochka/113…
There are some suggestions there, ranging from simpler to more complex than redirecting to generated content: redirecting bots to Hetzner's speedtest file, setting up tarpits for bots (sketched below), and a few block lists.
There's also a list of AI fuckery here, a few hops from which people confirm that 20-75% of traffic (depending on the amount of content on the site) is LLM crawlers, and that they outrank the usual Wordpress attacks during on-calls.
None of this, however, in any way compensates for the sad reality that this eats away your time and compute simply because the LLM training infra exists. It makes me absolutely livid.
GitHub - msigley/PHP-HTTP-Tarpit: Confuse and waste bot scanners time.
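To sketch the tarpit idea (illustrative only - port and timing are made up): the trick is to keep the crawler's connection open indefinitely while sending almost nothing, something like:

    # minimal HTTP tarpit sketch in the spirit of PHP-HTTP-Tarpit:
    # drip a tiny chunk every few seconds so the crawler's connection never finishes
    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()          # no Content-Length: the response never "ends"
            try:
                while True:
                    self.wfile.write(b"<!-- -->")
                    self.wfile.flush()
                    time.sleep(5)       # one tiny chunk every 5 seconds
            except (BrokenPipeError, ConnectionResetError):
                pass                    # the crawler gave up

    if __name__ == "__main__":
        ThreadingHTTPServer(("127.0.0.1", 8082), TarpitHandler).serve_forever()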
Thanatos
Sorry to read this. Hope there's a way to mitigate this.
I have to be cynical, but I find it pretty rich that these companies, as well as the techbro scene in general, will espouse meritocracy at every turn, yet happily siphon off the work and hobby time of open source and Fediverse enthusiasts without explicitly giving back. But maybe I'm also naive and don't realise that they support Fediverse infrastructure/coding. Correct me if I'm wrong.
utopiArte
Not as a specific request for you @denschub, but in general I'm just wondering if German law could apply here. In a completely different context I was pointed to Vertragsfreiheit and stumbled over the following "detail":
The Wikipedia article on Vertragsfreiheit says:
Even though there hasn't been any kind of contract with those bots in the first place, robots.txt is some kind of general agreement on best practice and "gute Sitten" (good morals). And somehow "Täuschung" (deception) could be considered too. Plus the damage from the wasted energy you have to pay for.
All this is just another "move fast and break things" attitude by those who can - who have the money and the backing of their all-powerful government.
So, I wonder if some lawyers in the EU could blow a really big hole into the hull of these ships to sink them and set an example.
btw
These bots are collecting data and information about those who tried to escape them - I wonder what John Connor's mother would think about that.
Dennis Schubert
Als Antwort auf Dennis Schubert • • •for some reason, this post went semi-viral on mastodon and hackernews, and I now have a fun combination of
I love the internet.
Tek aEvl
@Dennis Schubert
Um, what did you do? I would love my bandwidth back, lol
Dennis Schubert
I turned off the public history pages.
Andreas G
Hang in there.
utopiArte
offtopic
@Andreas G
Why shouldn't people consider, discuss, and react to important, current developments in their environment?
Especially with something as crucial as LLMs right now, and their effect on a community like ours?
Dennis proved and published a very important matter - not unexpected, quite the contrary - but he investigated and published it first, so the reaction is normal and healthy.
What we witness is just the hive mind in action.
What's your point in insulting people who try to check and "wrap their minds around" something?
utopiArte
And this is what the result of an AI search about your profile looks like:
loma.ml/display/373ebf56-4667-…
Quote from the psychological analysis of the user:
"This could lead to a critical attitude towards proprietary systems."
Matthias
2025-01-05 08:02:47
Andreas G
People giving unsolicited advice are not being helpful, just annoying.
They are a scourge.
utopiArte
@Andreas G > viral offtopic
It's the brainstorming mode of an interconnected internet social species called mono sapiens.
Take it or leave it ..
youtube.com/watch?v=JVk26rurvL…
btw
At the end of this 14-year-old take, Kruse refers to semantic understanding - I guess that's exactly the LLM and Big Brother moment we are in right now. And that's why people in our free web are going crazy, leading to the viral reaction Dennis described.
btw btw
Looks like Dennis went viral in the ActivityPub space thanks to Friendica ..
:)
Dennis Schubert
no. it was primarily someone taking a screenshot and posting it. someone who took a screenshot of.. diaspora. while being logged into their geraspora account.
but of course it's a friendica user - who also sees nothing wrong with posting unsolicited advice - who is making wrong claims.
utopiArte
@Dennis Schubert /offtopic viral
@denschub
I stumbled over it in a post from a Friendica account, on a Mastodon account of mine.
👍
Do you refer to something I wrote in this post of yours?
If so, and you point me to it, I could learn what counts for you as unsolicited advice and try to avoid doing that in the future.
Andreas G
@utopiArte Your grasp of human psychology, internet culture, and science in general is weak.
Consider staying off the internet.
(How'd you like that unsolicited advice?)
Andreas G
I am sated, honest.
But some people exist in a mode of constant omnidirectional condescension, like little almighties, looking down in all directions.
Mostly lost causes. Deflating their egos sometimes helps, but usually just makes them worse.
Daniel Doubet
GitHub - mitchellkrogza/nginx-ultimate-bad-bot-blocker: Nginx Block Bad Bots, Spam Referrer Blocker, Vulnerability Scanners, User-Agents, Malware, Adware, Ransomware, Malicious Sites, with anti-DDOS, Wordpress Theme Detector Blocking and Fail2Ban Jail for
Andreas G
Als Antwort auf Dennis Schubert • • •The silly way they crawl it makes me think this is a general thing happening to every service on the web.
Is there a way to find out/compare whether the crawlers are trying to target specific kinds of things?
Dennis Schubert
so, I should provide some more context to that, I guess. my web server setup isn't "small" by most casual hosters' definition. the total traffic is pretty much always above 5 req/s, and this is not an issue for me.
also, crawlers are everywhere. not just those that I mentioned, but also search engine crawlers and others. a big chunk of my traffic is actually from "bytespider", which is the LLM training bot from the TikTok company. it wasn't mentioned in this post because, although they generate a lot of traffic (in terms of bytes transferred), that's primarily because they also ingest images - their request volume is generally low.
some spiders are more aggressive than others. a long, long time ago, I saw a crawler try to enumerate diaspora*'s numeric post IDs to crawl everything, but cases like this are rare.
in this case, what made me angry was the fact that they were specifically crawling the edit history of the diaspora wiki. that's odd, because search engines generally don't care about old content. it was also odd because the request volume was so high it caused actual issues. MediaWiki isn't the best performance-wise, and the history pages especially are really, really slow. and if you have a crawler firing multiple requests per second at those, this is bad - and noteworthy.
I've talked privately to others with affected web properties, and it indeed looks like some of those companies have "generic web crawlers", but also specific ones for certain types of software. MediaWiki is frequently affected, and so are phpBB/smf forums, apparently. those crawlers seem to be way more aggressive than their "generic web" counterparts - which might actually just be a bug, who knows.
a few people here, on mastodon, and in other places have made "suggestions". I've ignored all of them, and I'll continue to ignore all of them. first, suggesting blocking user agent strings or IPs is not a feasible solution, which should be evident to everyone who read my initial post.
I'm also not a huge fan of the idea of feeding them trash content. while there are ways to make that work in a scalable and sustainable fashion, the majority of suggestions I got were along the lines of "use an LLM to generate trash content and feed it to them". this is, sorry for the phrase, quite stupid. I'm not going to engage in a pissing contest with LLM companies about who can waste more electricity and effort. ultimately, all you do by feeding them trash content is make stuff slightly more inconvenient - there are easy ways to detect that and get around it.
for people who post stuff to the internet and who are concerned that their content will be used to train LLMs, I only have one suggestion: use platforms that allow you to distribute content non-publicly, and carefully pick who you share content with. I got a lot of hate a few years ago for categorically rejecting a diaspora feature that would implement a "this post should be visible to every diaspora user, but not search engines" option, and while that post was written before the age of shitty LLMs, the core remains true: if your stuff is publicly on the internet, there's little you can do. the best thing you can do is be politically engaged and push for clear legislative regulation.
for people who host their own stuff, I also can only offer generic advice. set up rate limits (although be careful - rate limits can easily hurt real users, which is why the wiki previously had super relaxed rate limits). and the biggest piece of advice: don't host things. you'll always be exposed to some kind of abuse - if it's not LLM training bots, it's some chinese or russian botnet trying to DDoS you or crawl for vulnerabilities, or some spammer network that wants to post viagra ads on your services.
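(for the "set up rate limits" part: the nginx primitives look roughly like the sketch below. the zone name, numbers, and upstream address are placeholders - tune them so real users never hit the limit.)

    # rough nginx rate-limiting sketch - all numbers and names are placeholders
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=2r/s;   # http {} context

    server {
        listen 80;
        server_name wiki.example.org;

        location / {
            # allow short bursts so humans clicking around don't get rejected
            limit_req zone=per_ip burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://127.0.0.1:8080;    # whatever serves the wiki
        }
    }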
Why 'unlimited limited' posts are not a thing on diaspora* - Dennis Schubert
overengineer.dev
cleanmurky
Als Antwort auf Dennis Schubert • • •for people who post stuff to the internet and who are concerned that their content will be used to train LLMs, I only have one suggestion: use platforms that allow you to distribute content non-publicly, and carefully pick who you share content with.
and thanks @Dennis Schubert