$20 for a chatgpt pro account and fractions of a penny to run a bot server. it’s extremely cheap to do this.
OpenAI has checks for this type of thing. they limit the number of requests per hour on the regular $20 subscription
you’d have to use the API and that comes at a cost per request, depending on which model you are using. it can get expensive very quickly depending on what scale of bot manipulation you are going for
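to give a sense of how the API cost scales with bot volume, here’s a back-of-envelope calculation. the per-token price and token counts below are placeholder assumptions, not real quotes; actual pricing varies by model and changes over time:

```python
# back-of-envelope API cost estimate -- the per-token price is a
# placeholder assumption, check the provider's pricing page for real numbers
def monthly_cost(posts_per_day, tokens_per_post, price_per_1k_tokens):
    tokens_per_month = posts_per_day * 30 * tokens_per_post
    return tokens_per_month / 1000 * price_per_1k_tokens

# e.g. a bot operation making 5,000 posts/day at ~500 tokens each,
# at a hypothetical $0.01 per 1k tokens:
print(monthly_cost(5000, 500, 0.01))  # 750.0 -> $750/month
```

the point being: small-scale trolling is cheap, but large-scale manipulation campaigns rack up real bills fast, which is part of what keeps it from being everywhere already.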
yes, of course there are many different data points you can use. along with complex math, you can also feed a lot of these data points into machine learning models and get useful systems that can red-flag certain accounts, which then go through processes with more scrutiny that require more resources (such as a human reviewing)
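the red-flag-then-escalate idea could be sketched like this. the features and thresholds here are entirely made up for illustration; a real system would learn them from labeled data rather than hardcode them:

```python
# toy sketch of heuristic red-flagging -- feature names and thresholds
# are illustrative assumptions, not anyone's real detection rules
def suspicion_score(account):
    score = 0.0
    if account["age_days"] < 30:
        score += 1.0                       # very new account
    if account["posts_per_day"] > 50:
        score += 1.0                       # implausibly high volume
    if account["interval_stddev_sec"] < 5:
        score += 2.0                       # near-constant posting intervals
    return score

def needs_human_review(account, threshold=2.0):
    # accounts over the threshold get routed to a slower, more
    # resource-intensive process (e.g. a human moderator)
    return suspicion_score(account) >= threshold

bot_like = {"age_days": 3, "posts_per_day": 120, "interval_stddev_sec": 2}
print(needs_human_review(bot_like))  # True
```

the key design choice is the two-tier structure: cheap automated scoring on everything, expensive human scrutiny only on the small fraction that gets flagged.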
websites like chess.com do similar things to find cheaters. and they (along with lichess) have put out some interesting material going over some of what their process looks like
here I have two things. one is that lichess, which is mostly developed and maintained by a single individual, manages to run an effective anti-cheat system. so I don’t think it’s impossible for lemmy to implement these kinds of heuristics and behavioral tracking
the second thing is that these new AIs are really good. it’s not just the text, but also the items you mentioned. for example, say I train a machine learning model, plus a separate LLM, on all of reddit’s history. the first model is meant to emulate all of the “normal” human signals: post at hours that match real trends, vary the sentiment in a natural way, space posts not at random intervals but at intervals that follow a natural-looking distribution, and so on. the model will find patterns we can’t even imagine and use those to blend in
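the timing trick alone is simple to sketch. instead of fixed or uniformly random gaps between posts, an attacker could sample from a heavy-tailed distribution (log-normal here) so inter-post times resemble human behavior; the mu/sigma values below are illustrative guesses, a real actor would fit them to scraped data:

```python
import random

# sample human-looking gaps between posts -- mu/sigma are assumed values,
# not fitted to any real dataset
def human_like_gaps(n, mu=7.0, sigma=1.0, seed=42):
    rng = random.Random(seed)
    # exp(7) is roughly an 18-minute median gap, with a long tail
    # of multi-hour gaps, instead of a suspicious constant cadence
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

gaps = human_like_gaps(5)
print([round(g) for g in gaps])  # a spread from minutes to hours, not uniform
```

and that’s just one hand-picked signal. a model trained on real behavioral data would pick up dozens of these patterns without anyone specifying them.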
so you not only spread the content you want (whether it’s subtle product promotion or nation-state propaganda) but you have a separate model trained to disguise that text as something real
that’s the issue: it’s not just the text. if you really want to do this right (and people with $$$ have that incentive), as of right now it’s virtually impossible to prevent a motivated actor from doing this. and we are starting to see this with lichess and chess.com.
the next generation of cheaters aren’t just using chess engines like Stockfish, but AIs trained to play like humans. it’s becoming increasingly difficult to detect.
the only reason it hasn’t completely taken over the platform is because it’s expensive. you need a lot of computing power to do this effectively. and most people don’t have the resources or the technical ability to make this happen.