Companies that use AI to help you cheat at school are thriving on TikTok and Meta
Questionable ads promoting AI were taken down from TikTok after ‘Fast Company’ reached out for comment.
This is the first full academic year in which students have access to AI-powered chatbots like ChatGPT. While students around the world may be tempted to deploy such assistants, the technology is still far from perfect: So-called hallucinations remain commonplace in chatbots’ responses, with research suggesting GPT-4 makes up one in five of the citations it produces.
Because of that unreliability, many essay mills, which produce written work for a fee, now advertise that they combine AI and human labor to create an end product that software designed to catch cheating cannot detect. And according to a new analysis posted to the open-access repository arXiv, such mills are soliciting clients on TikTok and Meta platforms, even though the practice is illegal in several jurisdictions, including England, Wales, Australia, and New Zealand.
“Platforms like TikTok and Meta are committing a criminal offense by advertising these systems because most of these laws contain explicit criminal provisions about advertising,” says Michael Veale, an associate professor in technology law at University College London and one of the study’s authors.
Veale first noticed the scale of the issue while looking through the ad archives that TikTok and Meta provide in response to the EU’s Digital Services Act, which compels big tech companies to be more transparent about who is advertising on their platforms.
The ads were bought by a variety of companies offering different degrees of essay-writing help. In all, 11 AI services were named in the study. Some provided touch-ups to make texts read as more academic, while others offered to write entire essays, cite sources, and check for plagiarism using AI. Many of the tools appear to be wrappers that piggyback on existing large language models (LLMs), using custom prompts to coax the desired output from the chatbots.
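To make the wrapper idea concrete, the sketch below shows in Python what such a thin layer over an existing chat model could look like: a fixed, hidden system prompt plus a pass-through call to the underlying API. It is a minimal illustration assuming the OpenAI Python client; the model name, prompt wording, and function name are placeholders, not details drawn from any of the services named in the study.

# Minimal sketch of the "wrapper" pattern described above: a thin layer that
# forwards a user's text to an existing chat model along with a fixed prompt.
# Assumes the OpenAI Python client; prompt, model, and function name are
# illustrative only, not taken from any service named in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Rewrite the user's draft in a more formal, academic register, "
    "preserving the original meaning and structure."
)

def academic_touch_up(draft: str) -> str:
    """Send the draft to the underlying chat model with the custom prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; a real wrapper could target any hosted model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

Under that assumption, the wrapper’s only real product is the hidden prompt; the writing itself is handled entirely by the chatbot it piggybacks on.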
A TikTok spokesperson tells Fast Company that the videos highlighted in the research have been removed and the accounts that posted them banned for breaching the app’s advertising policies. “While ads promoting AI apps, like ChatGPT, are permitted in some cases on the platform, we do not allow ads that are misleading or dishonest,” the spokesperson says. Meta did not respond to a request for comment; some ads named in the report remain visible, albeit inactive, on its ad transparency platform.
Veale says the way the platforms respond once they know these potentially illegal ads exist on their apps is itself interesting. “These laws are very vague,” he says. “They’re so broad that Meta or TikTok, when they’re told about these illegal services, have to come to a decision on how widely they will enforce these laws.” In theory, the legislation in some countries, as currently written, could outlaw general-purpose AI systems or assistive tools not far removed from autocorrect.
Such a broad reach would be significant, says one academic not involved in the research. “Both students themselves and writers for contract-cheating providers will use AI,” says Thomas Lancaster, an academic integrity specialist at Imperial College London. The problem is only growing as AI becomes more ubiquitous. “AI is just standard technology,” he adds.
And as it becomes more standard, playing whack-a-mole with ads for AI-powered services may be a futile game, given how many are out there; this study alone highlighted nearly a dozen. “We already know that AI use cannot be reliably and consistently detected,” says Lancaster.