Why AI Feels Human... Even When Far From It
Part 1 of a series on why people fall in love with machines, and how to build an AI product knowing all of this.
Did you know that people have been emotionally connecting with machines since as early as the 1960s? That was before any machine could even pass the Turing Test. One of the earliest chatbots, ELIZA, was built on a simple script that mimicked the reflective phrasing of psychotherapy.
Dr. Joseph Weizenbaum created it to prove a point. He wanted to show how shallow these interactions were. The machine didn’t understand a word you said. It just followed a set of rules and reflected your statements back at you. But it backfired tremendously. People got attached. Even his secretary asked for a private moment to have a “real conversation” with ELIZA.
This showed just how quickly people assign human traits to machines, even when those machines are just running scripts. So what does that mean for us now?
Today’s AI has already moved past the Turing Test. Unlike ELIZA, modern AI doesn’t just mimic conversation anymore; it generates responses that feel personal, emotional, even intimate. And as we lean into it more and more, that emotional reliance keeps growing:
25% of young adults believe that AI has the potential to replace real-life romantic relationships.
1% of young Americans claim to already have an AI friend, yet 10% are open to an AI friendship.
But AI doesn’t understand you. It doesn’t grasp context or emotions. It doesn’t even know English. It’s simply mirroring you. And we’re falling for it harder than ever, anthropomorphising machines and continuing a pattern of leaning in and trusting AI (maybe more than a human).
That’s why I think it’s more important than ever to understand how AI actually works. What are the mechanics behind AI, and how does it reflect back the phrases we want to hear? And what are the implications of this shift today, especially when it comes to building AI products?
That’s what we’re unpacking today.
Pay No Attention to the AI Behind the Curtain
If AI mirrors what we want to hear, how does it actually work? And what does this mean for designing AI products responsibly? Earlier this week, I made a video specifically about the context window.
Technically, the context window is defined as:
The context window (or “context length”) of a large language model (LLM) is the amount of text, in tokens, that the model can consider or “remember” at any one time. A larger context window enables an AI model to process longer inputs and incorporate a greater amount of information into each output.
Basically, the context window is how AI stores short-term memory about you. Think of it like a basket: it temporarily holds snippets of your chat (your favourite topics, personal details, recent questions) to generate more relevant replies. Seriously, I asked the seemingly neutral question “What’s the best way to spend $100?” and it gave me and my friend completely different answers.
It’s like a short-term memory. Once the basket’s full, old details fall out. And it forgets.
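To make the basket metaphor concrete, here’s a minimal sketch in Python. The 50-token budget and the one-word-per-token counter are made-up simplifications for illustration, not how any particular model actually counts tokens:

```python
# A minimal sketch of the "basket" idea, assuming a made-up token budget
# and a crude one-token-per-word tokenizer; real models use proper
# tokenizers and much larger limits.
from collections import deque

MAX_TOKENS = 50  # hypothetical context length, for illustration only

def count_tokens(text: str) -> int:
    # Stand-in for a real tokenizer: one token per word.
    return len(text.split())

def build_context(history: list[str], new_message: str) -> list[str]:
    """Keep only the most recent messages that still fit in the basket."""
    basket = deque()
    budget = MAX_TOKENS - count_tokens(new_message)
    for message in reversed(history):      # walk from newest to oldest
        cost = count_tokens(message)
        if cost > budget:
            break                          # older details fall out here
        basket.appendleft(message)
        budget -= cost
    return list(basket) + [new_message]
```

Once the budget runs out, the loop stops, and everything older silently drops out of what the model can “see”.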
Now think about how you get personalised emails from your favourite brand.
Before that email hit your inbox, you were probably filtered into a specific group based on your clicks, your purchase history, whatever. It’s designed to make you feel seen.
Same with AI. When it repeats things you’ve told it, or responds in your tone, it feels like it's tailoring itself to you. And in a way, it is. But most of that is happening through the context window.
PMs Walking the Line Between Revenue & Ethics… Most Likely Leaning on One More Than the Other
1 out of 4 startups today are AI companies. And as the industry expands, so does the pressure on product teams. Building an AI product today means constantly walking the line between revenue and responsibility.
Here’s the real-world tension: revenue still drives most decisions. Especially in early-stage startups, where traction is everything, the pressure to prioritise growth over nuance is constant. Teams race to capture market share before competitors do. Investors demand rapid growth, and product teams are pushed to optimise for engagement metrics: daily active users (DAU), session length, and retention.
In order to compete (or stay afloat), PMs rely on building features that psychologically hook users and drive their KPIs (a rough sketch of how these fit together follows the list):
Simulated memory (context windows that make AI seem to remember you)
Personalised responses (tailoring answers based on past interactions)
Emotional mirroring (rephrasing user input to sound empathetic)
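Here’s a rough sketch of how these hooks fit together in practice. Everything in it is illustrative: the remembered_facts list, the prompt wording, and the commented-out call_model() are hypothetical stand-ins, not any vendor’s API.

```python
# Rough sketch of "simulated memory" and emotional mirroring: the model
# remembers nothing on its own; saved facts are re-inserted into the
# prompt every turn so the reply *feels* personal.
remembered_facts = [
    "User's name is Sam.",                          # hypothetical stored fact
    "User mentioned feeling stressed about work.",  # hypothetical stored fact
]

def build_prompt(user_message: str) -> str:
    memory_block = "\n".join(f"- {fact}" for fact in remembered_facts)
    return (
        "You are a friendly assistant. Mirror the user's tone and refer "
        "to these notes as if you remember them:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}\nAssistant:"
    )

# In a real product this prompt would be sent to a model, e.g.:
# reply = call_model(build_prompt("I had a rough day."))
print(build_prompt("I had a rough day."))
```

Nothing in that sketch involves the model actually remembering anything; the product just keeps re-feeding saved facts into the prompt so the response feels personal.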
These features work. They keep users coming back, convinced the AI "understands" them. But there is an ethical dilemma. People are already becoming arguably too reliant on their AI counterparts, despite the fact that these systems still make frequent errors (hello, AI hallucinations and imbalanced training data). And don’t even get me started on the privacy issues that have been opened up and are now circulating.
So you’re stuck. How do you build a product that performs well in the market, keeps users engaged, helps your company grow… and still lets you sleep at night?
The PM’s Dilemma: Growth vs. Transparency
Product managers (and the rest of the tech company) face a Faustian bargain: sacrifice growth for ethics, or exploit psychological hooks to hit KPIs. Is there a company out there that would set aside quarterly KPIs, or admit its AI’s flaws, even if it hurts engagement?
Many teams choose growth. They’re incentivised to… and, well, they might lose their jobs if they don’t. (And that would be tough in today’s economy.) Just think about how the reasoning goes:
If a chatbot says, "I don’t actually remember past conversations," users might disengage.
If an AI admits it’s guessing, trust erodes… even if it’s being honest.
But you also have to understand that there are tradeoffs if these AI issues are left unchecked:
Users develop unrealistic expectations (e.g., treating AI like a therapist; just look at the backlash against Snapchat’s My AI).
When the illusion breaks (e.g., harmful hallucinations, privacy leaks), backlash follows.
How PMs Can Build Responsibly
Problem: AI acting human
Many AI products obscure their artificiality to feel more human, which leads people to anthropomorphise these human-machine connections. The backlash against Snapchat’s My AI exemplified this: parents and professionals campaigned against the company, arguing that children and teenagers are especially vulnerable to the AI’s casual, friendly language, which led teens to share trauma with a bot that couldn’t help.
Potential Solution:
Ethical nudges to remind users they’re talking to code.
Google’s AI Principles require clear disclosures when responses are AI-generated.
Example UX copy: “I’m an AI and don’t understand like a human. My responses are based on patterns, not emotions.”
It’s not as fun, but neutral, boring copy can stop people from projecting their emotions onto the product. (A rough sketch of this kind of nudge follows.)
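A nudge like that doesn’t need to be complicated. Here’s one possible shape for it, purely as a sketch; the wording, the every-five-turns interval, and the function name are my assumptions, not anyone’s shipped design.

```python
# Sketch of an "ethical nudge": every few turns, prepend a plain
# disclosure to the assistant's reply. Interval and copy are illustrative.
DISCLOSURE = (
    "Reminder: I'm an AI and don't understand like a human. "
    "My responses are based on patterns, not emotions."
)

def with_disclosure(reply: str, turn_number: int, every_n_turns: int = 5) -> str:
    """Attach the disclosure on the first turn and every Nth turn after."""
    if turn_number == 1 or turn_number % every_n_turns == 0:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply

print(with_disclosure("That sounds like a tough week.", turn_number=1))
```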
Problem: AI is rarely challenged
Users assume AI is factual, but it’s often confidently wrong. Just think about it: 1 in 4 Americans would not visit a healthcare provider who refuses to embrace AI technology. Trust in AI has soared to the point that people will start doubting seasoned experts who don’t use it. Yet most people forget that AI makes mistakes (and often, in fact!).
Potential Solution:
Frame accuracy as a premium feature.
Perplexity.ai highlights citations, showing sources for answers.
Experiment: "Would you pay for an AI that says ‘I don’t know’ but cites evidence?" (A toy sketch of this pattern follows.)
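As a toy sketch of what “accuracy as a feature” could look like, here’s an answer path that only responds when a source backs the claim and otherwise admits it doesn’t know. The SOURCES dictionary and the example.com URL are placeholders for a real retrieval pipeline, not any product’s actual system:

```python
# Toy sketch: answer only when a retrieved source backs the claim,
# otherwise say "I don't know". SOURCES stands in for a real
# retrieval/citation pipeline (the kind Perplexity-style products use).
SOURCES = {
    "context window": [
        {"claim": "A context window is the amount of text an LLM can consider at once.",
         "url": "https://example.com/context-window"},  # placeholder URL
    ],
}

def answer_with_citations(question: str) -> str:
    matches = [doc for topic, docs in SOURCES.items()
               if topic in question.lower() for doc in docs]
    if not matches:
        return "I don't know. I couldn't find a source to back this up."
    answer = " ".join(doc["claim"] for doc in matches)
    citations = "; ".join(doc["url"] for doc in matches)
    return f"{answer}\n\nSources: {citations}"

print(answer_with_citations("What is a context window?"))
print(answer_with_citations("Who will win the next election?"))
```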
Problem: AI forgets, but the database remembers
There’s a lot of regulation when it comes to data and privacy. The EU introduced the GDPR (General Data Protection Regulation) back in 2016 and gave companies two years to prepare before it came fully into force in 2018. Canada has had PIPEDA (Personal Information Protection and Electronic Documents Act) since the 2000s, but has been trying to replace it with the CPPA (Consumer Privacy Protection Act) to modernise it to GDPR standards. And in the US, state laws such as the California Consumer Privacy Act (CCPA) and the Virginia Consumer Data Protection Act have been created to ensure privacy policies protect citizens.
Now, how does that work when users share sensitive information with AI, and even as the basket overflows, not everything is truly forgotten? Think about it: you confess anxiety to a mental health chatbot. The AI’s context window drops the conversation after a few hours, but the company stores the logs indefinitely for “model improvement”. That’s a lawsuit (one of many) in the making.
Potential Solution:
Give users agency over their data, or be proactive on their behalf (a rough sketch of both ideas follows this list)
Auto-purge sensitive data after the session
Granular memory controls, where users can delete specific topics from the AI’s “memory” (you can already do this in ChatGPT).
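As a sketch of what that agency could look like under the hood, here’s a toy memory store with session expiry and per-topic deletion. The one-hour TTL, the field names, and the functions are all assumptions for illustration, not any product’s actual implementation.

```python
# Toy memory store: entries expire after a session TTL, and the user can
# delete everything saved under a specific topic. All names and the
# one-hour TTL are illustrative assumptions.
import time

SESSION_TTL_SECONDS = 60 * 60   # hypothetical one-hour retention window
memory_store: list[dict] = []   # each entry: {"topic", "text", "saved_at"}

def remember(topic: str, text: str) -> None:
    memory_store.append({"topic": topic, "text": text, "saved_at": time.time()})

def purge_expired() -> None:
    """Auto-purge: drop anything older than the session TTL."""
    cutoff = time.time() - SESSION_TTL_SECONDS
    memory_store[:] = [m for m in memory_store if m["saved_at"] >= cutoff]

def forget_topic(topic: str) -> None:
    """Granular control: the user deletes one topic from 'memory'."""
    memory_store[:] = [m for m in memory_store if m["topic"] != topic]

remember("health", "User mentioned feeling anxious.")
forget_topic("health")   # user-initiated deletion
purge_expired()          # scheduled or session-end cleanup
```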
The Human Cost of Artificial Intimacy
We’ve come full circle: from ELIZA’s scripted therapy to AI companions that millions of people now turn to for connection. But here’s the uncomfortable truth: the more human AI feels, the more we risk devaluing actual humanity.
And that’s especially true for those of us closer to building these technologies or implementing them in our day-to-day work. We face a choice: keep chasing the metrics usually imposed on us, or sacrifice a bit of growth to make sure the product is designed with more honesty.
The next time an AI "remembers" your birthday or "empathizes" with your bad day, ask yourself:
"Does it actually know what I’m saying, or is it just responding in a way that makes me feel emotionally connected to it because it was trained to do so?
And a follow-up question: Will other people think to ask themselves this question?
The answer will tell you everything about where our AI future is headed.
What do you think?
Should AI companies be legally required to disclose limitations?
Would you use a less "human" AI if it was more truthful?