Designing an AI Companion That Goes Beyond Answers: Helping People Find the Right Questions to Ask
In a world where our screens often promise connection but deliver emptiness, a growing number of people find themselves alone, not in the physical sense, but mentally adrift. After work, countless individuals pick up their phones for “just a few minutes” of mindless scrolling, hoping for a spark of comfort or distraction. Hours slip by. Instead of finding clarity or support, they’re left with fatigue, guilt, and an unsettled mind.
At our startup, we set out on a mission to tackle this subtle but deeply human struggle: How can we create an AI-powered companion that helps people feel not only less lonely, but also more clear-minded, self-aware, and truly seen?
“Technology has given us infinite access to answers,” writes digital wellbeing researcher Dr. Cal Newport. “But what most people lack is a framework for asking better questions — and the space to reflect on them.”
Our journey has been anything but linear. Like many teams building AI companions, we began with an obvious idea: let’s build a robot friend that can chat anytime. A friend who is always available, who has the answers, who will listen. The initial excitement was palpable; after all, conversational AI has become astonishingly capable thanks to breakthroughs in large language models (LLMs). But as we soon discovered through countless rounds of user testing, being able to chat and give answers isn’t enough.
The False Comfort of Mindless Chatting
At first glance, it seems intuitive: people feel lonely, so give them someone (or something) to talk to. Yet our early prototypes taught us a critical lesson. For some, chatting with an AI did help in the moment, like a dopamine hit from scrolling social media. But the effect often faded quickly. Many of our users described a hollow feeling afterward, eerily similar to doomscrolling TikTok or Instagram: a burst of relief, then a return to the same problems, questions, or anxieties.
We call this the “shallow support loop.” Research from the American Psychological Association (2023) indicates that while AI chatbots can provide short-term relief, they can inadvertently reinforce avoidance behaviors. “Users may bypass deeper self-reflection or human connection when they become overly reliant on chatbots for surface-level advice,” explains Dr. John Torous, Director of Digital Psychiatry at Harvard Medical School.
In other words, we were at risk of building yet another tool for mindless scrolling, but with a more sophisticated interface.
A Turning Point: What Are People Really Searching For?
The real breakthrough came when we stopped focusing on how well our AI could answer questions and started observing how our users felt when they left a conversation. Did they feel understood? Did they see their situation more clearly? Did they feel motivated to act?
In dozens of follow-up interviews, one pattern emerged: People almost always knew the answer they wanted — or at least part of it. What they struggled with was seeing the bigger picture. They weren’t looking for a new fact. They were trying to figure out if they were even asking the right question in the first place.
Take one example: A young professional, overwhelmed at work, asked our bot: “Should I quit my job?” An average chatbot might respond with pros and cons or a generic checklist. But what our user needed wasn’t an answer; it was help unpacking why they felt this way. What did “bad job” really mean? What were they afraid of if they stayed? What were they avoiding by fantasizing about quitting?
The gap became clear. The true value wasn’t giving quick answers, but helping people navigate the fog to reach the question underneath the question.
“The quality of your life is determined by the quality of the questions you ask,” wrote Tony Robbins decades ago. Modern research backs this up: a 2020 Harvard Business Review article found that reflective questioning improves decision satisfaction, problem-solving, and emotional wellbeing more than direct advice-giving.
What Large Language Models Miss
Current LLMs, like GPT-4 and others, excel at giving information or drafting text. They mimic conversation well enough that it’s tempting to believe they understand you. But these models often reinforce the user’s surface-level query. They are designed to predict and generate relevant text, not to slow you down, challenge your framing, or guide you through the discomfort of introspection.
To put it simply: LLMs give you answers, but rarely help you find the real problem.
“A major limitation of today’s AI companions is that they optimize for user engagement, not user growth,” said Dr. Margaret Mitchell, AI researcher and former co-lead of Google’s Ethical AI team, in an interview with Vox. “Sometimes the best thing an AI can do for you is to ask, ‘Why do you feel this is the question you want to solve?’ — and that’s fundamentally different from giving a quick fix.”
Our product vision shifted dramatically. Instead of an infinite Q&A buddy, we’re building a structured yet compassionate thinking companion: a robot friend that does not just chat endlessly, but actively helps you peel back layers of your thoughts, examine your assumptions, and articulate what truly matters to you.
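To illustrate the shift, here is a minimal sketch in Python. The call_llm function and both system prompts are hypothetical stand-ins of our own, not production code or any vendor’s API; the point is only that the capability stays the same while the objective changes.

```python
# A minimal sketch of the contrast between an "answer bot" and a
# "thinking companion." call_llm is a hypothetical stand-in for any
# LLM chat API; nothing here depends on a specific vendor.

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder so this sketch runs standalone; replace with a real API call."""
    return f"[model reply under system prompt: {system_prompt[:48]}...]"

ANSWER_BOT = (
    "You are a helpful assistant. Answer the user's question directly, "
    "with concrete advice and clear recommendations."
)

THINKING_COMPANION = (
    "You are a reflective companion, not an advice engine. Do not answer "
    "the user's question directly. Instead, ask one short, open-ended "
    "question that surfaces the assumption or feeling underneath what "
    "they asked. Be warm, never clinical."
)

if __name__ == "__main__":
    user_message = "Should I quit my job?"
    # Same model, same message; only the framing changes. The first prompt
    # tends to yield a pros-and-cons list, the second something closer to
    # "What would staying cost you that quitting wouldn't?"
    for prompt in (ANSWER_BOT, THINKING_COMPANION):
        print(call_llm(prompt, user_message))
```

Same model, same user, two different products: one closes the loop with advice, the other opens it with a question.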
The Power of “Better Questions”
This isn’t a novel idea in therapy or coaching. Cognitive behavioral therapists, for example, use Socratic questioning techniques to help clients examine distorted beliefs. Executive coaches borrow the classic “five whys” from root-cause analysis: asking “why” five times in succession is a simple but powerful way to dig past a surface issue to the real obstacle beneath it.
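To make the loop concrete, here is a toy version in Python. The helper generate_followup is our own illustrative stand-in for a model call, and the depth cap and stop condition are assumptions of this sketch; a deployed companion would phrase each follow-up from the full conversation rather than a template.

```python
# A toy "five whys" loop: keep asking why until the user reaches a root
# concern or we hit the depth cap. generate_followup is a hypothetical
# stand-in for an LLM call; input() plays the role of the chat interface.

MAX_WHYS = 5  # the classic depth: past the symptoms, short of an interrogation

def generate_followup(history: list[str]) -> str:
    """Placeholder for a model call; a real companion would phrase this from context."""
    return f"And why does that matter to you? ({len(history)} of {MAX_WHYS})"

def five_whys(opening_statement: str) -> list[str]:
    """Walk the user from a surface statement toward the concern underneath it."""
    history = [opening_statement]
    for _ in range(MAX_WHYS):
        answer = input(generate_followup(history) + "\n> ")
        if not answer.strip():  # silence is a signal to stop pushing
            break
        history.append(answer)
    return history  # the trail itself is the artifact: the user's own reframing

if __name__ == "__main__":
    for step in five_whys("I want to quit my job."):
        print("-", step)
```

Note that the output is not an answer at all: it is the user’s own chain of reflections, which is exactly the reframing effect described below.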
In our user research, we discovered that the mere act of “seeing your thoughts reframed” brings relief. Many people said things like: “I’d never looked at it that way. Now I see what’s really bothering me.”
The problem is, very few people have a trusted coach or therapist available at any moment. And even fewer can pause mid-doomscroll and redirect their mind toward self-reflection.
That’s where we believe our AI companion has the potential to do something uniquely valuable: Turn moments of mindless scrolling into moments of mindful discovery.
The Companion Robot Friend: Not Just Another App
Of course, building this is easier said than done. We’re tackling some thorny design challenges (a rough sketch of how they might translate into product policy follows this list):
Emotional Safety: The AI must know when to push deeper and when to hold space gently. We’re developing safeguards and training it to respect boundaries.
Personalization: Everyone’s mental model is different. One person might need gentle curiosity, another might thrive with direct prompts that challenge their assumptions.
Always Available, Never Addictive: Our goal is not to keep people glued to the screen, but to help them close it with a clearer mind.
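As promised above, here is one way those three constraints could become explicit, testable policy rather than good intentions. Every class name, keyword, and threshold here is hypothetical, and real emotional-safety logic would need far more care than keyword matching; this is a sketch of the shape, not the substance.

```python
# Illustrative only: turning the three design constraints into code that
# can be reviewed and tested. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class SessionPolicy:
    # Emotional safety: cap how deep we probe before holding space instead.
    max_probe_depth: int = 3
    distress_markers: tuple = ("can't cope", "hopeless", "panic")

    # Personalization: tone is a per-user setting, not one bot for everyone.
    style: str = "gentle"  # e.g. "gentle" or "direct"

    # Never addictive: sessions end with a recap, not an infinite scroll.
    max_turns: int = 12

    def should_hold_space(self, user_text: str, depth: int) -> bool:
        """Back off when the user signals distress or we've probed deep enough."""
        distressed = any(m in user_text.lower() for m in self.distress_markers)
        return distressed or depth >= self.max_probe_depth

    def should_close_session(self, turn: int) -> bool:
        """Nudge toward closing the app with a summary instead of chatting on."""
        return turn >= self.max_turns

# Example: a user who prefers direct prompts still gets the same safety rails.
policy = SessionPolicy(style="direct", max_turns=8)
print(policy.should_hold_space("I feel hopeless about work", depth=1))  # True
print(policy.should_close_session(turn=8))                              # True
```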
As technologist Tristan Harris famously said, “Technology should enhance your life, not exploit your attention.” Our mission is to live up to that ideal.
A Vision for a More Meaningful Life
At the heart of our product is a simple belief: Everyone deserves a companion who helps them become more of who they want to be.
Quote to live by:
“A mind stretched by a new question never returns to its original shape.”
About Us: We’re a small, dedicated team of designers, engineers, and psychologists who believe technology can do better. Our goal is not just to build a smart chatbot — it’s to build a true companion, one that helps you live a more meaningful, connected, and intentional life.
Stay tuned as we share more about our progress, our research, and the stories of people discovering the questions that change everything.
Sources:
Newport, C. (2019). Digital Minimalism: Choosing a Focused Life in a Noisy World.
Torous, J., et al. (2023). Chatbots and Mental Health: Potential and Pitfalls. APA Digital Psychiatry Report.
Robbins, T. (1986). Unlimited Power.
Harvard Business Review (2020). Why the Best Leaders Ask the Right Questions.
Vox (2023). Interview with Dr. Margaret Mitchell.