How LLMs are changing not just how professionals work, but how they think
In the wake of rising penalties and plagiarism accusations, people are changing their approach to workplace and academic writing.
While the rise of AI has transformed industries and workplaces around the world, it has also created a sense of mistrust among organizations and institutions that still hope to maintain a human-led approach and avoid AI-generated content.
This has led organizations and educational institutions to introduce checks and rules intended to keep content AI-free. It has also popularized AI detection tools, which remain unreliable, often producing false positives by flagging human writing that is too formal, follows predictable sentence structures, or uses punctuation commonly associated with generative AI.
The fear of being wrongly suspected of using AI when submitting work or assignments has led people to intentionally alter their writing to avoid penalties or plagiarism accusations.
A NEW APPROACH TO WORK
Derar Saifan, Partner and senior digital transformation advisor at PwC Middle East, says that people no longer view large language models (LLMs) as just tools but as thinking partners.
“They sit alongside you as you work, helping you frame problems, explore options, and move faster from question to decision.”
Saifan notes that this mentality is becoming common across areas such as software development, analysis, research, and content creation, where people can now move faster and improve quality by using AI to support their thinking, not replace it.
AI has also transformed the flow of people’s workdays: employees now start by asking AI to summarise what’s happening across emails, meetings, and projects so they can prioritise more clearly. There is also a shift away from searching for information toward asking better questions and co-creating outputs through conversation, especially as LLMs are embedded into everyday systems and applications.
“People are also becoming more intentional about choosing the right model for the right task, whether that’s summarising complex documents, generating code, or refining an idea,” Saifan notes. “What’s interesting is that this hasn’t reduced human responsibility; it’s increased it. People spend more time reviewing, challenging, and refining AI-generated outputs. The final judgment still belongs to the human.”
Dr. Fedaa Mohamed, Associate Professor of Journalism at the Faculty of Mass Communication, Ahram Canadian University, discusses the effect LLMs are having on students and professionals alike. She explains that LLMs have shifted the cognitive load from “creation to verification”, introducing what she calls “performance anxiety.”
She explains that students and professionals are no longer just focused on the quality of the output; they are now also hyper-aware of how the output is perceived.
“This anxiety is intensified because the technology has effectively commoditized ‘competence,’ raising the baseline so that clean technical execution is no longer a differentiator; instead, every individual contributor is forced into an ‘editor-in-chief’ role where high-level judgment and taste are the only ways to add value.”
She also notes that the increasing use of LLMs is leading people to bypass the “blank page” phase in pursuit of speed, inadvertently skipping the “productive friction” required to structure messy thoughts. The risk, she says, is a form of cognitive atrophy: people produce polished results faster but with a shallower grasp of the underlying problems.
Hebatallah Ghoneim, Associate Professor of Economics and Head of the Economics Department at the German University in Cairo, says LLMs are transforming how people make decisions. They are increasingly used for routine tasks, such as drafting emails or checking understanding, influencing how information is processed and interpreted.
“For students, based on my experience, they rely heavily on LLMs to complete assignments,” she explains. “Often, these tools are used not to enhance learning or access information, but simply to reduce effort. This trend raises concerns about superficial engagement, diminished critical thinking, and the potential long-term impact on educational outcomes.”
A CAUTIOUS PROCESS
Mohamed says users are becoming more cautious and self-conscious about their use of AI-assisted writing tools as penalties become more common in professional settings. “The biggest signal is the intentional injection of ‘human imperfections’ and distinct stylistic quirks. Writers are increasingly avoiding words heavily associated with LLM training data (like ‘delve,’ ‘tapestry,’ or ‘multifaceted’) and are instead prioritizing first-person anecdotes, conversational asides, and arguably less formal structures.”
She notes that professional environments are deliberately adopting rougher, less polished language to convey authenticity. This defensive approach goes beyond style to influence process, with professionals increasingly engaging in what she describes as “linguistic hygiene” by removing otherwise legitimate but overused jargon to avoid being flagged as AI-generated.
She adds that there is a growing emphasis on “defensive documentation,” with writers favoring cloud-based tools to retain version histories as evidence of human authorship, and intentionally using jagged, non-linear structures that resist the smooth, predictable patterns associated with algorithmic writing.
Saifan notes a growing number of formal policies governing the use of AI-generated content, as organizations introduce safeguards that reinforce human accountability. “The emphasis is very much on ‘human in the loop’, making it clear that AI can support the process, but ownership of the final message and its impact still sits with the individual.”
As a result, employees are less likely to accept AI-generated content at face value. “Instead, they work with it conversationally, refining the language, adding context, and shaping the message until it feels right for the situation. That iteration itself is a signal of greater awareness and caution.”
He adds that organizations are also becoming more deliberate in how they deploy AI, with many developing and sharing internal prompt templates. The aim, he says, is not only speed but greater consistency, quality, and risk reduction, by providing clearer starting points that minimize misinterpretation or misuse.
ORGANIZATIONAL GUIDANCE
“Organizations need to move from a culture of policing to one of disclosure. The focus should not be on ‘Did AI write this?’ but rather on ‘Did a human verify this and take responsibility for it?’” Mohamed notes.
She adds that policies should encourage the use of AI for structural heavy lifting and ideation, while requiring that the final voice and factual accountability remain strictly human.
“By bringing ‘shadow AI’ usage into the open through authorized disclosure, companies can ensure that while machines may accelerate workflows, reputational risk and strategic intent remain firmly anchored to the individual who validates and releases the work.”
Saifan says the key is having clear roles and responsibilities. “Humans should ultimately be the owners of the outcome, while LLMs are positioned only as tools for drafting, analyzing, or exploring options.”
He adds that organizations should provide clear guidance on when and how to use LLMs, when review or disclosure is required, and where their use may be inappropriate. This clarity helps remove ambiguity, preserves originality through human judgment, and reinforces stakeholder trust.
“Investing in employee skills, such as how to frame problems, critique outputs, and apply context and ethics, is more effective than relying on controls alone. When transparency and accountability are embedded in culture, LLMs amplify creativity and productivity without diluting authorship or responsibility.”
Ghoneim emphasizes the need for educational institutions to monitor AI use. “AI tools cannot be completely stopped, so universities should focus on teaching students how to use them responsibly. This includes showing where AI can be helpful—like brainstorming ideas, refining language, or checking understanding—and where its use becomes inappropriate.”
She adds that clear guidance and open discussions about acceptable use can help students view AI as a learning aid rather than a shortcut, while still safeguarding academic integrity and fostering genuine skill development.
A GUIDE FOR PROFESSIONALS
Mohamed says professionals need to treat an LLM as a researcher or sub-editor, never as the final author.
“Use it to break writer’s block or organize messy thoughts, but always rewrite the final output in your own distinct voice,” she explains. “This approach effectively casts you as the ‘human-in-the-loop’ guarantor of quality, meaning you must rigorously verify every claim and data point against primary sources.”
She adds that by adopting a transparent workflow—where AI is clearly used as a tool rather than a replacement for talent—you protect yourself against accusations of laziness or fraud, ensuring that while the machine provides raw efficiency, the strategic nuance and ultimate accountability remain unmistakably yours.
Similarly, Saifan advises professionals to treat AI as an assistant and to exercise their own judgment before finalizing any output.
“Use LLMs to accelerate research, generate first drafts, or explore alternatives, but always review, adapt, and contextualize the output so it clearly reflects your intent and represents your perspective.”
Equally important, Saifan adds, is understanding the policies and guidelines of your country, organization, and profession. Know when disclosure is required, validate facts with trusted sources, and exercise caution in areas where precision, regulation, or ethics are critical.
“Used responsibly, AI reduces risk rather than increases it, but only when professionals remain clearly in the loop, owning both the reasoning and the result.”
Ghoneim says she reminds students before every assignment to use AI ethically and avoid copying and pasting content from AI writing tools. She explains that AI can help improve clarity, check language accuracy, and refine structure, but should only function as a support tool rather than a substitute for original work. Excessive reliance on AI-generated content, she warns, may amount to academic dishonesty or plagiarism.
She adds that while students are welcome to use AI to enhance the presentation of their ideas, the final submission must reflect their own research, analysis, arguments, and effort.