Why Your Boss Won't Stop Using ChatGPT to Email You

Key Takeaways

- Employees quickly recognize AI-generated messages and often resent receiving them
- Some managers use Claude as a 'second boss' to critique employee work, creating friction
- Founders who over-relied on ChatGPT say they had to add personal touches after team pushback
The New Office Complaint: AI-Generated Messages from the Boss
"Have you tried running that through ChatGPT or Claude?" This question is becoming common in workplaces. A growing number of tech workers now feed their thoughts into chatbots and relay the output to colleagues as if they wrote it themselves.
For many employees, receiving an AI-written message sends a clear signal: my manager did not care enough to write this. The frustration is spreading. Reddit threads overflow with office workers venting about bosses who outsource even basic communication to AI.
"I get the promise of saving time but it doesn't half kill motivation," one worker told Sifted after receiving AI-written instructions. Another called it "incredibly inauthentic."
Dating apps already have a term for people suspected of using AI to message matches: chatfishers. Offices are developing their own version of this suspicion.
When AI Gives the Game Away
Sometimes the tells are obvious. One Reddit commenter described a colleague's email that exposed itself immediately: "This girl at my work recently sent me an email that started with the words 'Here is a polite, less angry version of your email.'"
Other times, employees recognize the pattern of ChatGPT's writing style. The overly formal phrasing. The lack of personality. The structure that feels templated rather than human.
“This email sounds so ChatGPT.”
— Team feedback to Artem Kuchukov, CEO of Kewazo
Artem Kuchukov, CEO of German robotics company Kewazo, admits he fell into over-using ChatGPT to communicate with his team. He started using it to announce new hires. After two announcements, his team called him out with sarcastic comments.
"Basically, people saw through this very quickly and did not like it at all," Kuchukov says. "So I had to change my approach and combine ChatGPT messages with personal touches so that it felt less robotic."
The Sick Day Response That Crossed a Line
Some AI-assisted messages feel especially tone-deaf. A salesperson at a Berlin software company described receiving what appeared to be an AI-generated reply from her boss after she messaged to say she would be out sick.
"Was it really so hard to type, 'no problem, feel better soon'?" she asked. "No, this guy had to prompt ChatGPT for a response. The bar is getting lower."
This example highlights the core problem. AI can handle complex tasks. But using it for simple human moments suggests the sender views even basic empathy as a chore to automate.
Claude as the Uninvited Second Boss
Beyond communication, some managers use AI to second-guess their own employees' work. This creates a different kind of friction.
“I'll show our founder something and he'll literally say, 'great, let me just see what Claude has to say.' I will then have to justify my work after Claude has picked holes in it.”
— Employee at a London-based fintech
According to this employee, Claude is becoming a kind of second boss in the workplace: a consultant few like, one who springs "gotcha" questions at staff. The practice puts employees in the position of defending their expertise against an algorithm their manager trusts as an authority.
Not Everyone Minds
To be fair, some workers are unbothered by AI-assisted communication from managers. "Honestly I'd prefer my boss not write to me at all, but if he's going to bug me he may as well use ChatGPT so the spelling is correct," one told Sifted.
This pragmatic view treats workplace messages as functional exchanges rather than relationship-building moments. For transactional communication, AI might be a net improvement over poorly written human messages.
What AI Cannot Fake
Some managers escape AI suspicion entirely. Ilan Fisher, communications lead at AI startup Wonderful, does not suspect his superior of using AI to write messages.
"It would be hard to automate insider and niche British humour," Fisher says. This points to what AI still struggles with: genuine personality, inside jokes, cultural references that only make sense in a specific team context.
Jan Čurn, CEO of Prague-based web scraping startup Apify, also does not use AI to write to colleagues. For leaders whose communication style is distinctly their own, the question of AI assistance does not arise.
The Real Cost of AI-Mediated Management
The pattern emerging from these stories is consistent. AI saves managers time. But employees interpret that time savings as a statement about their worth. When a sick day reply gets routed through ChatGPT, the message received is not the words on screen. It is: I did not care enough to type six words myself.
For managers, the calculation seems simple. Why spend five minutes writing when AI can do it in thirty seconds? But communication in organizations is not just information transfer. It builds trust, signals care, and creates the informal bonds that make teams work.
When AI handles these moments, the efficiency gain comes at a relational cost. Teams notice. And as Kuchukov learned, they respond first with sarcasm, then with disengagement.
Frequently Asked Questions
Can employees tell when emails are written by ChatGPT?
Yes. Employees frequently recognize AI-written messages through formal phrasing, templated structure, and lack of personality. Some AI outputs even accidentally include prompt instructions, giving the game away immediately.
Is it unprofessional for managers to use AI for workplace communication?
It depends on context. Using AI for drafting complex documents or research is widely accepted. Using it for personal messages like sick day responses or team announcements often feels impersonal and can damage trust.
Should managers use Claude or ChatGPT to review employee work?
This practice is emerging but controversial. Employees report frustration when they must defend their expertise against AI critique. It can create a dynamic where the algorithm becomes an unwelcome second boss.
What is a chatfisher?
A term from dating apps describing someone suspected of using AI to write their messages. The concept is spreading to workplace contexts where employees suspect colleagues of AI-assisted communication.
How can managers use AI without alienating their teams?
Some founders recommend combining AI drafts with personal touches. Others avoid AI entirely for direct team communication while using it for external documents. The key is preserving authentic voice in messages to people you work with daily.
Source: Sifted
Manaal Khan
Tech & Innovation Writer