The Guardian had an interesting article entitled “The need to protect the internet from ‘astroturfing’ grows ever more urgent”. As someone who used to co-lead a very large internet forum, I can say we used to laugh at some of the lame attempts at astroturfing (they were easy to detect). The big deal is that over time such methods will become harder and harder to detect, and they will also increase in intensity many times over.
In a large forum, repetitive or nearly repetitive postings were easy to detect. As soon as an entity started to astroturf, within an hour there would be dozens of reports of something fishy going on. Within two hours, 5-10 staffers would be on it, and such campaigns were killed off before the majority of users were even aware of them. The thing is, astroturf detection relies not only on a moderation team (we had around 130 staffers, enough to cover all time zones), but also on the membership taking issue with the problem. Back in 2006, with over 100,000 registered screen names and a minimum of 3,000 real live humans online 24/7, such detection was nearly automatic.
On the other hand, it's not 2006 anymore. In the future, nefarious individuals, companies, and organizations will not concentrate their efforts on a few large forums as they did years back, i.e., the top 20 on bigboards (and yes, we talked amongst each other about coordinated trolls and astroturfers); a coordinated campaign could reach small forums, newspaper comment sites, and individual blogs that don't have wide-ranging linkage.
It's also likely that in the future such content will not be identical, or even nearly identical. When I left the forum world, this was just starting to occur… I remember testing out different types of linguistic analysis on my desktop, looking for parallels when we'd see spikes in subject content, and more often than not even rudimentary linguistics tools would pick it up.
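To give a flavor of what even a rudimentary check can catch, here is a minimal sketch of near-duplicate detection using word shingles and Jaccard similarity. This is an illustration of the general approach, not the actual tooling I used back then; the function names and the 0.5 threshold are my own assumptions.

```python
def shingles(text, k=4):
    """Split text into overlapping k-word shingles (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_near_duplicates(posts, threshold=0.5):
    """Return index pairs of posts whose shingle overlap exceeds threshold."""
    sets = [shingles(p) for p in posts]
    return [(i, j)
            for i in range(len(sets))
            for j in range(i + 1, len(sets))
            if jaccard(sets[i], sets[j]) >= threshold]
```

Two posts that swap only a word or two at the end still share most of their shingles and get flagged, while unrelated posts score near zero; that lightweight word-reordering is exactly what the lamer astroturf campaigns relied on.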
Large-Scale Detection and Mitigation
The thing is, such detection has to happen on a large scale to be effective. Case in point: most savvy folks know not to click on links from ezinearticles, fizya, or associated content because the quality sucks… but those sites still rank high in search results. On the other hand, Google finally appears fed up enough to start putting the kibosh on such sites, and hopefully things will correct over time… (then again, why eHow didn't get whacked is beyond me).
It is possible that as search engine algorithms evolve, just as they capture and downrate the get-rich-quick world, they will also capture the astroturfers. Persona management tools are still operated by a limited number of individuals, and that will lend itself to shallow content, and thereby to low rankings and visibility, yes, even for the main article if a blogger doesn't keep tabs on comments. Kos is a bit more eloquent: “bullshit only goes so far, no matter how many personas are spreading it.” I'm thinking something on the order of an Akismet with linguistic analysis might be the way to go. Granted, such a tool would also capture coordinated campaigns of like-minded sound-bite individuals who don't put in the time to really think about an issue, but then again, we don't need any more sound bites.
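As a rough sketch of what an "Akismet with linguistic analysis" might look like on the comment side (this is my own hypothetical illustration, not how Akismet actually works), one could cluster incoming comments by word-frequency cosine similarity and flag any cluster of suspiciously similar wording for human review:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suspicious_clusters(comments, threshold=0.8):
    """Group comments whose wording is suspiciously similar.

    Returns sets of comment indices; any cluster larger than one
    suggests a possibly coordinated campaign worth a human look.
    The 0.8 threshold is an assumption for illustration.
    """
    vecs = [Counter(c.lower().split()) for c in comments]
    clusters = []
    assigned = set()
    for i in range(len(vecs)):
        if i in assigned:
            continue
        group = {i}
        for j in range(i + 1, len(vecs)):
            if j not in assigned and cosine(vecs[i], vecs[j]) >= threshold:
                group.add(j)
                assigned.add(j)
        assigned.add(i)
        if len(group) > 1:
            clusters.append(group)
    return clusters
```

Note this flags for review rather than auto-deleting, since, as mentioned above, it would also catch sincere like-minded folks repeating the same sound bite.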
A second issue is the bribing of individuals for the purposes of one-sided collective campaigns; case in point, this rather interesting fiasco with mommy bloggers, Toyota, and an apparently very inept third party, the last text of which is located here.
This bribery issue is a tough one: even though such practices are regulated, many are likely ignorant of the rules, case in point the individual who started the whole Toyota mess, as well as those who took her up on it… Granted, I am giving the benefit of the doubt here.
Ultimately this is a serious concern, and it's something all of us, even the sporadic lone blogger, need to consider. Just as we need to keep our sites free of the get-rich-quick folks, we also need to be aware of the need to keep astroturfers away.