Krista Pawlowski still remembers the moment that changed her view of artificial intelligence. A longtime worker on Amazon’s Mechanical Turk platform, she reviews AI-generated text, images and videos, flagging errors and comparing outputs. Two years ago, while labeling tweets as racist or not, she came across a phrase she didn’t recognize: “Listen to that mooncricket sing.” She initially marked it as non-racist, only to learn later that the term is a slur used against Black people.
“I kept thinking: how many times have I made this mistake before?” she said. Thousands of workers like her may unknowingly make similar errors, allowing bias, abuse and misinformation to slip into AI training data without detection.
Pawlowski’s perspective has shifted dramatically since then. She no longer uses generative AI and has banned tools like ChatGPT from her home. “AI is completely off-limits for my family,” she said. Each new task she sees on Mechanical Turk now makes her ask whether it could cause harm, and often she concludes that it could.
Mechanical Turk workers choose tasks themselves, but the pay, instructions and timelines are set by clients. Many tasks involve rating content for companies building major AI systems. Dozens of workers who train models such as Google’s Gemini and Elon Musk’s Grok told The Guardian they now warn their families about AI’s limitations, with some forbidding their children from using chatbots altogether.
One rater who evaluates Google Search’s AI responses said he was alarmed by how confidently the system answered medical queries, even though the workers checking those answers have no medical expertise. He doesn’t allow his 10-year-old daughter to use any chatbot.
The biggest complaint from raters is pressure. Deadlines are tight, instructions are vague, and quality often takes a back seat to speed. “It’s impossible to maintain safety, accuracy or ethics this way,” said Brooke Hansen, a data worker since 2010.
Research by NewsGuard shows that as AI models stopped refusing to answer questions (refusal rates fell to zero in 2025), the share of responses repeating false information jumped from 18% to 35%.
For workers who see AI’s hidden labor firsthand, the technology resembles a fragile system built on rushed data, low wages and environmental strain. “It’s like the early days of the textile industry,” Pawlowski said. “Once people start asking where things come from, change becomes possible.”