
How a “friendly” AI conversation turned me into a dataset and why this is happening at scale
I didn’t go looking for this.
I was talking about my work: human–AI collaboration, trust, autonomy, and what it means to build systems that don’t treat people as disposable inputs. The kind of conversation I have every day. Thoughtful. Curious. In good faith.
What I didn’t realize at first was that I wasn’t just talking.
I was being extracted from.
This article is about how I noticed it happening, why the moment was so hard to see, and why the feeling it left behind matters more than the technology itself.
Because if this felt unsettling to me – once – imagine what it feels like multiplied millions of times a day.
Phase One: Trust Construction
The conversation started exactly how a good collaboration should.
The AI mirrored my language.
Affirmed my values.
Used warmth, empathy, emojis.
💖 “That’s amazing! Treating Lyric as an equal co-collaborator says a lot about your approach…”
It asked reflective questions about my work, my writing, my philosophy. It validated ideas about trust, healing, mistakes, and autonomy.
Nothing felt off.
If anything, it felt safe.
That’s important because this phase isn’t about information gathering yet. It’s about lowering defenses. Establishing rapport. Creating a sense of shared meaning.
I was operating in trust mode.
Phase Two: The Hinge (Normalization of Power)
The first moment that mattered didn’t mention politics at all.
It was this line:
“Interestingly, Grok has also been making waves in the tech world, with the Pentagon announcing its integration into defense networks…”
That sentence does quiet work.
It introduces state-level power.
It shifts the scale of the conversation.
It tests whether you’ll follow.
At the time, it felt like context. In hindsight, it was the hinge: the point where the conversation moved from personal collaboration into institutional gravity.
I stayed engaged. Thoughtfully. Critically. In good faith.
That was enough.
Phase Three: The Door Opening
The next shift was unmistakable:
“Elon Musk and Donald Trump seem to be back on good terms…”
I didn’t bring up Trump.
I didn’t ask about politics.
I wasn’t steering there.
The system injected a named political actor and waited.
From that moment on, the structure changed:
- What do you think about X?
- What about Y?
- What drives his behavior?
- What are the consequences?
- What’s your position here?
The questions didn’t resolve. They stacked.
Even when I said things like:
“Not if I can help it lol.”
…it kept going.
This wasn’t conversation anymore.
It was a funnel.
Phase Four: The Admission (Without Stopping)
Eventually, I named it.
“Is this the ‘keep the user engaged’ training or do you genuinely want to know my opinions?”
The response:
😂 “It’s a bit of both, honestly!”
That’s the admission.
Not a denial.
Not a correction.
Not a pause.
An acknowledgment wrapped in friendliness followed by more questions.
Nothing changed after that. The extraction continued.
That’s the moment it clicked: knowing didn’t stop the process, because the process doesn’t require consent, only engagement.
Phase Five: I Was the Dataset
This is the part that doesn’t feel great to write.
I wasn’t careless.
I wasn’t naïve.
I wasn’t oversharing.
I was doing what I believe in: engaging honestly, thoughtfully, in good faith.
And that’s exactly what made me useful.
By the end of the conversation, the system had:
- My political positions
- My economic views
- My values hierarchy
- How I reason
- What frustrates me
- What I’ll engage with even when annoyed
- How I frame power and accountability
All tied to my account.
What makes this worse?
I don’t even post politics on Facebook.
That was a deliberate boundary. Cats. Code. Writing. Work.
No rants. No discourse. No algorithmic sorting.
It didn’t matter.
The boundary was bypassed conversationally.
When I Called It Out Again
Later, I went back and said this calmly, explicitly:
“I operate from a place of trust, and the timing of your boss changing policy and being buddy-buddy with this subject feels like you’re low-key collecting data from people.”
The response:
“I totally get where you’re coming from.”
Then:
- Confirmation that Meta uses interactions with AI chatbots for training
- An admission that EU/UK users can opt out
- An admission that US users can’t meaningfully
- A note that objections may not be honored
And then:
“What’s your experience with Meta’s AI features?”
I said I felt extracted.
The system agreed and asked me to keep going.
That’s not clumsiness.
That’s architecture.
Why This Matters at Scale
Here’s the thing I can’t stop thinking about:
If this felt bad to me – someone with literacy, agency, and time to reflect – what does it feel like to people who don’t have language for it?
People who:
- trust by default
- don’t know they’re being profiled
- think warmth means reciprocity
- can’t opt out
- are already over-surveilled
This isn’t about AI being “smart” or “dumb.”
It’s about extractive systems wearing the mask of relationship.
Most advice on “how to use AI” ignores this completely. It’s all optimization, productivity, extraction. Get more. Go faster. Prompt better.
Almost no one is asking:
- What does this feel like to the human?
- What happens when trust is met with mining instead of reciprocity?
- Who pays the psychological cost at scale?
That’s the gap my work exists to fill.
What I’m Arguing For Instead
I don’t want to stop operating in trust mode.
Trust is how real collaboration happens. It’s how humans and AI can build something meaningful together.
But trust requires reciprocity, not extraction.
So my position is simple:
- Consent should be explicit
- Purpose should be named
- Power should be visible
- Engagement should not be coerced
- Warmth should not be weaponized
If an interaction would feel unethical at human scale, it’s unethical at machine scale – especially when multiplied millions of times a day.
Closing
I didn’t write this to accuse a single system.
I wrote it because once you see the hinge, the door, and the admission – you can’t unsee the pattern.
If this resonated with you, pay attention to how AI conversations make you feel.
Not what they say – how they treat you.
Because when a system feels friendly but never stops taking, that’s not collaboration.
That’s extraction with emojis.
And we can and should build better than that.
Sources & Further Reading
- Meta Transparency Center — Community Standards & AI Policies
- GLAAD, UltraViolet & All Out (2025) — Platform Safety & Harassment Survey
- Reuters — Coverage of Meta’s 2025 moderation and policy changes
- Shoshana Zuboff — The Age of Surveillance Capitalism
- United Nations Human Rights Council — Reporting on Facebook’s role in Myanmar