AI is still a double-edged sword in the classroom
AI can play a genuine role in helping both students and teachers. But it can also be used to cheat the system.
AI goes back to school
Students around the country are returning to school, meaning educators need to once again wrestle with what place generative AI tools should have in the classroom.
While experts have long debated the role digital technology should have in schools, AI in education can be a particularly thorny issue. ChatGPT and Claude, for example, can help students learn new concepts, conduct research, and brainstorm project ideas. But even when AI is banned from school networks, there's nothing to stop students from using it outside school to do their homework for them. Recent research suggests that accurately detecting whether an assignment was done by hand or by a bot still isn't easy, and even when AI is used only as a study aid, there's still the risk that it will hallucinate misinformation or get confused about basic math.
Teachers are using AI themselves to assist with paperwork, lesson planning, and class discussions, and even, potentially, to get advice from their favorite chatbots on how to design assignments that students can't delegate to AI. Those assignments can focus on the craft of writing prose (or even code), perhaps requiring students to show their work through Track Changes features. Some teachers also emphasize in-class assessments, where they can ensure students don't have access to digital assistants. Prominent EdTech providers such as Khan Academy and Blackboard have built AI tools specifically designed to help educators create lessons and quiz questions, ideally freeing up more time for engaging with students and carefully reviewing their work.
Still, a Pew Research Center survey released in May found that only 6% of teachers believe AI does more good than harm, suggesting that for many educators the technology is yet another source of anxiety in an educational system simultaneously wrestling with the ramifications of COVID-19, climate change, and shifts in enrollment.
OpenAI cofounder pursues “Safe Superintelligence”
A new AI startup called Safe Superintelligence, led by OpenAI cofounder and former chief scientist Ilya Sutskever, has raised $1 billion, Reuters reported Wednesday.
Backed by big VC firms including Andreessen Horowitz and Sequoia Capital, the company looks to develop AI that’s potentially smarter than humans but still safe for human civilization. Right now, Safe Superintelligence has just 10 employees and is reportedly vetting new hires for technical chops and “good character.” Sutskever had previously worked on similar AI safety issues at OpenAI.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company states on its stylishly minimalist website. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”
Sutskever departed OpenAI in May, months after the well-publicized rift between the company's board and CEO Sam Altman. Sutskever was among the board members who initially voted to oust Altman and, reportedly, was the one who broke the news to Altman that he had been fired. Altman was quickly reinstated, and Sutskever and a number of other high-profile employees have since left the company at a time when jobs and investment funds for AI experts are far from scarce.
Even if AI doesn't surpass human intelligence anytime soon, the safety and job security implications of the technology are a matter of growing concern for many, and the topic is even the subject of an Oprah Winfrey special next week.
Nonprofit promotes “human flourishing in the age of AI”
A nonprofit called the Cosmos Institute is "dedicated to promoting human flourishing in the age of AI," according to a Wednesday blog post. It's intended to be an alternative both to AI pessimism, which focuses on preventing the technology from destroying humanity, and to AI accelerationism, which the group argues treats AI too much as an end in itself rather than as a means of helping humanity.
More concretely, the organization—cofounded and chaired by entrepreneur and former Department of Defense AI leader Brendan McCord—plans to support a new Human-Centered AI Lab at the University of Oxford, along with a fellowship backing researchers who integrate AI and philosophy, and a Cosmos Ventures grant program to support those “at the intersection of AI and human flourishing.”
The organization also supports educational initiatives around AI and philosophy, including an Oxford seminar on AI and philosophy, according to the colorful blog post, which mixes references to thinkers like John Stuart Mill and Aristotle with the famous It’s Always Sunny in Philadelphia conspiracy theory meme.
The “founding fellows” of the Cosmos Institute also include Anthropic cofounder Jack Clark and Tyler Cowen, the George Mason University economist known for several best-selling books and his long-running blog, Marginal Revolution. The Cosmos Ventures program is modeled after Cowen’s Emergent Ventures fellowship and grants, which back entrepreneurs with the potential to improve society.
The organization has a general rationalist/libertarian vibe, arguing for a philosophy-to-code pipeline in order to “embed crucial concepts such as reason, decentralization, and autonomy into the planetary-scale AI systems that will shape our future.”
Fellowships were awarded to researchers with interests in computer science and philosophy and backgrounds at companies such as Apple and LinkedIn, as well as at AI and computing startups. The organization's blog post also suggests its leaders have more projects in the works designed to preserve human autonomy in the age of AI. "Picture late nights, endless espresso, and walls covered with ideas," the blog post says.