Socratic.ai

Tailor Your Scholarship Essays with Critical Thinkers

Socratic.ai is a platform that helps students draft scholarship essays by revealing the hidden criteria behind essay prompts.

The result is a vector-canvas platform powered by the Claude API: multiple drafts organized visually, insights drawn from scholarship-winning drafts, and AI interaction that feels more like a conversation.

Role: Product Design, Project Management
Timeline: 7 days
Team: 2 designers, 3 developers
Year: 2025
The Problem

Chat AI fails writers because writing is spatial. Chat is linear.

Generic AI writing tools produce generic outputs because they skip the thinking that makes writing personal. For scholarship essays especially, the quality of the thought behind the words determines the outcome — not the polish.

Chat interfaces compound this with a black box experience: users can't see the reasoning, can't compare alternatives side by side, and can't organize their thinking spatially the way writing actually works.

The Challenge

How do you make AI collaboration feel transparent and generative, not prescriptive? And how do you ship it in 7 days?

Socratic questioning model.

Instead of writing for users, the AI asks questions that guide them to articulate what's already there — making the output authentically theirs.

Socratic questioning model — app demo

Canvas-based spatial interface.

Writing is non-linear. The canvas lets users place, compare, group, and navigate multiple drafts at once, the way thinking actually works.

Canvas-based spatial interface
Discovery

We asked 10 people on campus about their experience with chat-AI interfaces.

We found that the root cause isn't the AI, but a fundamental mental model mismatch between how AI chat works and how writing actually happens in practice.

This produces three failure modes in chat-based AI writing tools — and all three pointed to the same solution: give users more agency over the process.

Initial Designs

Questions, not answers.

The core of Socratic.ai is a Socratic questioning model: instead of writing for users, the AI asks targeted questions that help them articulate what's already there. The output is authentically theirs because the thinking was theirs.

Socratic questioning in action
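The case study doesn't include implementation details, but one way to picture the questioning turn is as a request payload built around a "never draft for the user" system prompt. This is a minimal sketch only: the prompt wording, the `build_turn` helper, and the model id are illustrative assumptions, and the payload would be passed to the Anthropic SDK's `client.messages.create(**payload)`.

```python
# Sketch of assembling one Socratic questioning turn.
# The system prompt text and model id below are assumptions for illustration,
# not the production code.

SOCRATIC_SYSTEM = (
    "You are a Socratic writing coach for scholarship essays. "
    "Never write text for the user. Ask one targeted question that helps "
    "them articulate an experience or value they already hold."
)

def build_turn(history: list[dict], user_note: str) -> dict:
    """Assemble the next request: conversation so far plus the user's latest note."""
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "max_tokens": 300,
        "system": SOCRATIC_SYSTEM,
        "messages": history + [{"role": "user", "content": user_note}],
    }

payload = build_turn([], "I volunteered at a food bank but it feels generic.")
```

The key design constraint lives in the system prompt: the model is steered toward asking, not writing, so every word in the eventual draft originates with the student.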

From thread to canvas.

We replaced the chat thread with a vector canvas that lets users place, compare, group, and navigate their ideas the way writing actually works — non-linearly, spatially, iteratively.

Canvas interface — spatial organization of AI insights and drafts

Surfacing what the prompt doesn't say.

Scholarship prompts are designed to be open-ended — but winning scholarship essays carry hidden criteria that committees never write down. Socratic.ai surfaces those criteria during the questioning process — so users write toward them without being told what to say.

Hidden criteria surfaced from the scholarship prompt
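One way this surfacing step could work is to ask the model for the unwritten criteria as structured output, then parse it defensively. A minimal sketch, assuming a hypothetical prompt template and JSON-array reply format (none of this is confirmed by the case study):

```python
# Sketch of hidden-criteria extraction: prompt the model for what committees
# reward but never write down, as a JSON array. Wording and example criteria
# are illustrative assumptions.
import json

def criteria_prompt(scholarship_prompt: str) -> str:
    """Build a request asking for the prompt's unwritten criteria."""
    return (
        "Here is a scholarship essay prompt:\n\n"
        f"{scholarship_prompt}\n\n"
        "List the unwritten criteria a selection committee likely rewards "
        "(e.g. evidence of resilience, specificity, community impact) "
        "as a JSON array of short strings."
    )

def parse_criteria(model_reply: str) -> list[str]:
    """Parse the model's JSON array; fall back to an empty list on bad output."""
    try:
        parsed = json.loads(model_reply)
        return [c for c in parsed if isinstance(c, str)]
    except (json.JSONDecodeError, TypeError):
        return []
```

Parsed criteria could then feed the questioning loop, so users write toward them without ever being handed a checklist.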
Team coordination — Notion kanban, when2meet, Discord
Final Designs

A Socratic model on a spatial canvas

Socratic.ai makes AI collaboration transparent and generative — not prescriptive. The Socratic questioning model surfaces authentic voice. The canvas gives writers spatial control over their thinking. The reasoning panel makes AI logic visible and evaluable.

Full app demo — Socratic questioning, canvas, reasoning panel
Canvas interface — spatial organization, draft comparison, criteria mapping
Closing

Final thoughts and learnings

Constraints made the decisions easier
I expected seven days to feel limiting. Instead, it removed a lot of the noise. Every decision had to be justified right away, so there was no time to second-guess or over-explore. The design ended up sharper than it probably would have been with more runway.

The interface was the real problem, not the AI
I went in thinking the issue was output quality. It wasn't. It was that chat threads don't match how writing actually happens. Reframing it as a mental model problem changed everything about how we approached the solution.

Showing the reasoning changed how people used the AI
Users weren't skeptical of the outputs as much as they were skeptical of the process. Once they could see why the AI was asking a question, they engaged with it more seriously instead of just answering to get past it.

Being a good PM meant staying out of the way
I thought my job was coordination. It turned out to be removing blockers before the team hit them. Writing clear requirements, making scope calls early, and keeping async communication tight meant people could keep working without waiting on me.