Warning about a “dark world of online harms” that must be addressed, the World Economic Forum (WEF) this month published an article calling for a “solution” to “online abuse” that would be powered by artificial intelligence (AI) and human intelligence.
The proposal calls for an AI-based system that would automate the censorship of "misinformation" and "hate speech" and curb the spread of "child abuse, extremism, disinformation, hate speech and fraud" online.
According to the author of the article, Inbal Goldberger, human “trust and safety teams” alone are not fully capable of policing such content online.
Goldberger is vice president of ActiveFence Trust & Safety, a technology company based in New York City and Tel Aviv that claims it “automatically collects data from millions of sources and applies contextual AI to power trust and safety operations of any size.”
Instead of relying solely on human moderation teams, Goldberger proposes a system based on "human-curated, multi-language, off-platform intelligence" – in other words, input provided by "expert" human sources, compiled into "learning sets" that would train the AI to recognize purportedly harmful or dangerous content.
This “off-platform intelligence” – more machine learning than AI per se, according to Didi Rankovic of ReclaimTheNet.org – would be collected from “millions of sources” and would then be collated and merged before being used for “content removal decisions” on the part of “Internet platforms.”
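The workflow described above – expert-labeled examples assembled into "learning sets" that train an automated detector, whose output then feeds content-removal decisions – is a standard supervised machine-learning pattern. The toy sketch below illustrates that pattern only; it is not ActiveFence's system. The `LEARNING_SET` examples, labels, and the naive Bayes classifier are hypothetical stand-ins for the "human-curated intelligence" and "smarter automated detection" named in the proposal.

```python
import math
from collections import Counter, defaultdict

# Hypothetical, human-curated "learning set": (text, label) pairs.
# In the proposal, labels would come from off-platform expert teams.
LEARNING_SET = [
    ("buy cheap meds now click here", "harmful"),
    ("win free money fast guaranteed", "harmful"),
    ("meet the new community garden volunteers", "benign"),
    ("our book club meets on thursday", "benign"),
]

def train(examples):
    """Build per-label word counts and label frequencies from the set."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label with add-one-smoothed naive Bayes log odds."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label, count in label_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(count / sum(label_counts.values()))
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (total + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(LEARNING_SET)
print(classify("free money click here", word_counts, label_counts))
```

Even this toy makes the article's central tension concrete: whatever the human curators label as "harmful" is what the automated system learns to flag, so the removal decisions inherit the curators' judgments wholesale.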
According to Goldberger, the system would supplement "smarter automated detection with human expertise" and would allow for the creation of "AI with human intelligence baked in."