I've written critically about AI recently, but always with a sense of despair and futility. It feels as though the horse has bolted and there's not much that I or anyone else can do about it.
But some opponents believe that all is not yet lost - and they include people inside the industry.
The Register's Thomas Claburn has reported on an initiative called Poison Fountain, which involves deliberately feeding poisoned data to the crawlers and scrapers that harvest training material for AI models, in the hope of accelerating the bursting of the bubble.
Claburn's anonymous source said: "Poisoning attacks compromise the cognitive integrity of the model. There's no way to stop the advance of this technology, now that it is disseminated worldwide. What's left is weapons. This Poison Fountain is an example of such a weapon."
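To make the mechanics concrete: in its crudest form, this kind of poisoning amounts to detecting an AI crawler by its user-agent string and serving it garbage instead of the real page. The sketch below is my own illustration of that general idea, not Poison Fountain's actual code - the crawler signatures are genuine user agents used by major AI scrapers, but the word-shuffling "poison" function is just a stand-in for whatever a real scheme would serve.

```python
# Illustrative sketch only - not Poison Fountain's actual mechanism.
# Serves scrambled text to known AI crawlers, normal content to everyone else.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Substrings from real user-agent strings used by major AI crawlers.
AI_CRAWLER_SIGNATURES = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider")

PAGE = "Here is the ordinary article text that human visitors should see."

def poison(text: str) -> str:
    """Placeholder poisoning: shuffle the words into fluent-looking nonsense."""
    words = text.split()
    random.shuffle(words)
    return " ".join(words)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Crawlers matching a known AI signature get the poisoned variant.
        body = poison(PAGE) if any(sig in ua for sig in AI_CRAWLER_SIGNATURES) else PAGE
        payload = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()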
While putting up resistance may be laudable, I must admit to being wary. We're already swimming in AI slop - do we really want to pollute the waters even more? The promise of long-term gain is tempting, but might there not be significant deleterious or unforeseen consequences in the short term? It seems like a risky move, at the very least.
Plus, as Claburn acknowledges, there's some evidence that AI models are already getting worse without any intervention from opponents, poisoning themselves by hungrily feasting on their own shit - what researchers call model collapse. Maybe we should just leave them to die on their own?
In addition, it would be naive to ignore that spreading disinformation to bring AI down could easily be twisted into spreading it to deliberately mislead people for nefarious ends. I'd be concerned that the positive, ethical mission could be used by unscrupulous types as a smokescreen to justify deceiving the public for political gain.