Off-platform evidence of ban-worthy behaviors, including “automated or bulk messaging, or non-personal use,” could trigger not just “technological enforcement” but legal repercussions, the post warned – and woe betide those users caught bragging off-platform about their ability to evade the rules. WhatsApp boasted it can ban users “based on machine-learning classifiers” alone, and will continue to do so.
While the announcement repeatedly cited bulk messaging and spamming as the key abusive behaviors, users have been banned for far less: merely adding people who aren't in their contacts to a group chat, or messaging unknown numbers, has gotten accounts booted from the platform.
WhatsApp did not explain how it proposes to crawl the web looking for evidence of Terms of Service violations, but with parent company Facebook’s surveillance tools at its disposal, it has plenty of options. The company boasted it removes more than 2 million accounts every month, over 75 percent of those without user complaints, ostensibly for “bulk or automated behavior.” Ironically, the hunting is done by an automated detective – WhatsApp’s vaunted “machine learning classifiers.”
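The post does not say how those classifiers actually work. As a purely illustrative sketch of the kind of behavioral signals an automated "bulk or automated behavior" detector might weigh, consider the following; the signal names, weights, and threshold here are all assumptions for the sake of example, not anything WhatsApp has disclosed:

```python
# Hypothetical sketch only; WhatsApp has not published its classifiers.
# Illustrates scoring an account on behavioral signals a bulk-messaging
# detector might plausibly use: sending rate, message duplication, and
# how many recipients sit outside the sender's contact list.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    messages_per_minute: float      # sustained sending rate
    duplicate_message_ratio: float  # share of messages with identical text (0-1)
    unknown_recipient_ratio: float  # share of recipients not in contacts (0-1)


def bulk_behavior_score(activity: AccountActivity) -> float:
    """Combine the hypothetical signals into a 0-1 'automation' score."""
    rate_signal = min(activity.messages_per_minute / 30.0, 1.0)  # 30+ msg/min saturates
    return (0.4 * rate_signal
            + 0.35 * activity.duplicate_message_ratio
            + 0.25 * activity.unknown_recipient_ratio)


def should_flag(activity: AccountActivity, threshold: float = 0.7) -> bool:
    """Flag accounts whose score crosses an arbitrary threshold for review or a ban."""
    return bulk_behavior_score(activity) >= threshold


if __name__ == "__main__":
    spammer = AccountActivity(messages_per_minute=45,
                              duplicate_message_ratio=0.9,
                              unknown_recipient_ratio=0.8)
    print(should_flag(spammer))  # True: looks like bulk/automated messaging
```

Real-world systems of this sort would presumably learn such weights from labeled data rather than hard-code them; the point of the sketch is simply that an account can be flagged, and banned, without any human complaint ever being filed.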
WhatsApp has been in hot water over charges that it figured prominently in the spread of hoax rumors about child kidnappers in India, sparking hysteria that led to the murder of at least two dozen innocent people. Last month, security researchers found the app had been weaponized by Israeli spyware firm NSO Group to bug the phones of human rights campaigners and political dissidents in multiple countries.
WhatsApp isn’t the only social media platform to claim dominion over users’ off-platform behavior. In 2017, Twitter announced it would begin surveilling users’ activity “on and off the platform” and give them the boot if they associated with violent organizations.
Source: RT