Creating Your Own Voice: How to Introduce Yourself to Claude (and Why You Need To)
My first AI-assisted article was polished, accurate, and completely unrecognizable as mine. The problem wasn’t the model. It was that I never taught it how I think. If you want AI to sound like you, you have to externalize your reasoning first.
The first article I wrote with Claude read like every other AI-assisted piece floating around LinkedIn. Competent. Correct. Utterly hollow.
I remember sitting with the draft open. It was about something I genuinely cared about, a product strategy concept I'd wrestled with for years and taught dozens of young product managers to navigate. But reading my own words back, I couldn't find myself in them.
This was the problem I was trying to solve.
The Setup: Three Constraints That Don't Play Well Together
I'm a non-native English speaker. Twenty years in the United States, fluent at work, comfortable in meetings, but I never sat through an American high school English class or undergraduate literature seminar. I learned English through medicine and later through work in product management. My relationship to the written word is different from someone who grew up reading Steinbeck and Hemingway. I learned to write the way I learned to read chest X-rays: by pattern matching, by supervised practice, by correction from people who knew better than I did.
I also discovered, years after finishing medical school, that I have ADHD. This is relevant because my brain doesn't work the way I thought it did. I collect information like someone with a vast filing cabinet organized by interest rather than hierarchy. When I sit down to write without externalizing the mess, I get stuck in loops. I need to think out loud. I need to write things down and move them around. I need to assemble pieces, step back, see patterns, then write forward. The traditional "sit down and write" approach doesn't work for my brain.
And here's the third constraint: I wanted to use AI as a writing partner, not as a writing replacement. I wanted Claude to handle the polishing, the structural fine-tuning, the narrative flow. I wanted to preserve my energy for the part that actually exhausts me. But I didn't want the output to sound like AI wrote it.
This last constraint is the one that matters. I didn't want my articles to look and sound like AI-generated content. Social media is flooded with competent, soulless AI writing that inflates follower counts without saying anything worth saying. I was trying to bring something more.
The Broken Assumption
Here's what I thought would work: I'd write a detailed brief for Claude. Context, key points, maybe some phrasing I liked. Claude would understand who I was from that. It understood the ideas, after all. It could match the tone.
It couldn't.
The first few articles felt generic because they were generic. Claude was good at capturing what I said, but not who I was. The difference is subtle and everything. A generic article about clinical decision-making gets lost in the noise. An article that sounds like it came from someone who actually had to make those decisions at 3 AM in a patient room? That stops someone scrolling.
I realized the issue wasn't Claude's fault. It was mine. I was asking Claude to reverse-engineer my thinking style from scattered examples and general direction. That's not how language models work. They work better when you tell them explicitly: this is who I am, this is how I think, this is what my voice sounds like, these are the sentences only I would write.
This realization led me to something that changed the quality of every draft that came after: building something Claude had never seen before.
The Lightbulb: A Knowledge Graph for Yourself
I'd written an article a few months before about knowledge graphs in clinical reasoning. By "knowledge graph," I don't mean something technical or fancy. I mean an explicit map of concepts and relationships that usually only exist in your head. When you externalize your thinking model, when you make visible the connections between ideas instead of keeping them locked away, you make something communicable. You make something transmissible.
That's when I realized what was missing. I wasn't handing Claude a knowledge graph of my voice. I was handing it scattered pieces and hoping it would assemble the architecture.
So I decided to build one. Not a knowledge graph of product frameworks or decision trees. A knowledge graph of me. Of how I actually think. Of what makes a sentence sound like it came from my brain and not from a language model trained on the entire internet.
The Process: Interviewing Yourself
I started by letting Claude interview me. Not about my background or credentials. About how I actually work.
I answered questions like: What do you believe that would surprise people in your field? How do you actually write? When you're editing your own work, what are you looking for? What makes you close an article and stop reading? What are the clichés in your industry that make you angry?
The questions were structured, and they forced specificity. Not "what's your writing style" but "if you found a sentence with 'Additionally' or 'Furthermore,' what would you do?" The answer: delete it. There's no thought that needs an "Additionally." There's just the next thought.
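That rule is mechanical enough to sketch in code. The snippet below is a hypothetical illustration of how concrete a profile entry can get, not a tool I actually run; the function name and word list are my own inventions:

```python
import re

# Hypothetical sketch: flag and delete the filler transitions the
# profile says to remove. The next thought should stand on its own.
FILLER_OPENERS = re.compile(r"^(Additionally|Furthermore|Moreover),?\s+", re.MULTILINE)

def strip_filler_openers(text):
    """Remove sentence-opening filler transitions from each line of text."""
    return FILLER_OPENERS.sub("", text)

print(strip_filler_openers("Additionally, the model improved."))
# prints "the model improved."
```

The point isn't automation; it's that a profile entry vague as "keep it punchy" is useless, while one precise enough to express as a pattern is something an AI can actually apply.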
Not "what's important to you" but "what would make you stop trusting someone who's supposed to know healthcare?" The answer is specificity. Citation. Awareness of nuance. Someone who says "AI will solve healthcare" has never actually sat with patients. Someone who knows healthcare says "AI solves this specific problem in this specific context," and they can name what that context actually is.
I had Claude analyze my published articles, not for what they said but for how they were built. What sentence construction did I use when I was introducing a contrarian idea? When was I most direct, and when was I most careful? Where did I use narrative beats instead of just stating facts? What made my closing line land?
The analysis extracted patterns: short declarative contrast when I want to disrupt an assumption. Scene-before-argument when I'm asking someone to question something comfortable. A particular way of naming problems that sounds like witness testimony instead of abstract complaint. A closing that doesn't summarize but implies what comes next.
These weren't rules I knew I was following. They were visible only when I stopped to look.
In practice, building your own profile means three things: let an AI interview you with highly specific questions about what you actually believe and how you work; analyze a few of your best pieces for structure and patterns, not just content; and turn those observations into an explicit profile you can hand to your AI tool and reuse.
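As a rough sketch of that last step, the reusable profile can be as simple as a structured text block you assemble once and prepend to every request. Everything below is hypothetical: the section names, the helper function, and the example entries are illustrations, not a format Claude or any other tool prescribes:

```python
# Minimal sketch: assemble the three artifacts (interview answers,
# structural patterns, editing rules) into one reusable profile block.

def build_voice_profile(beliefs, patterns, editing_rules):
    """Combine the three kinds of observations into a single text block
    suitable for pasting into a system prompt or project instructions."""
    sections = [
        ("What I believe (from the interview)", beliefs),
        ("How my writing is built (from article analysis)", patterns),
        ("Editing rules (what I delete or rewrite)", editing_rules),
    ]
    lines = ["VOICE PROFILE"]
    for title, items in sections:
        lines.append(f"\n## {title}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

profile = build_voice_profile(
    beliefs=["AI solves specific problems in specific contexts, never 'healthcare'"],
    patterns=["Short declarative contrast to disrupt an assumption",
              "Scene before argument when questioning something comfortable"],
    editing_rules=["Delete 'Additionally' and 'Furthermore'; just state the next thought"],
)
```

The mechanics are trivial on purpose. The value is entirely in the content of the entries, which only the interview and the analysis can produce.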
The Thing That Changed Everything
Here's what shifted my understanding: my profile included a section on what I actually believe, grounded in my lived experience.
My beliefs about AI in healthcare aren't abstract. I've worked in healthcare and in AI. I've seen the failures up close. I've sat in rooms where good people with good intentions built systems that didn't solve the problem they thought they were solving.
The same goes for how I work. I write differently because I'm a non-native speaker; that's how I learned the language. My ADHD brain collects information in networks, not hierarchies. I think in product frameworks first, then translate to prose.
These aren't personality quirks. They're the architecture of how I actually think.
When Claude understood this, things changed. It wasn't that it suddenly wrote exactly like me. It couldn't. It's not human. But it understood what to amplify and what to remove. It understood which choices were authentically mine and which were template defaults. It could polish without homogenizing.
Why This Matters Beyond My Byline
I'm telling you this because the realization was about more than better articles.
The problem with most AI-assisted writing isn't the AI. It's that people treat AI like a tool to fill in blanks. They assume that feeding it information and asking for output is enough. They don't realize they're outsourcing the part that actually matters: the part that's supposed to be you.
If you want AI to help you write like yourself, you have to make yourself visible. Not your resume or your expertise or your job title. The actual architecture of your thinking. How you reason. What you notice. What you believe because you've seen it, not because you read it somewhere.
For me, that meant spending time answering hard questions. It meant sitting with my published work and reverse-engineering it. It meant being specific about what made me uncomfortable in writing, what made me sound like everyone else, what made me sound like me.
It wasn't quick. It wasn't easy. I've now spent probably ten to fifteen hours building and refining this profile, and I'll probably spend more as I write and learn what works and what doesn't.
But here's what happened: I now have a resource I can hand to Claude. Not instructions. Not a style guide. A model of how I think. A translation layer between my ideas and the final prose.
And when I read the latest draft, I recognize myself in it.
The Broader Pattern: Beyond Writing
What I'm describing applies well beyond social media and articles. Consider a clinician using an AI decision-support system. She might want to build a profile that teaches the tool how she thinks about evidence, how skeptical she is of certain claims, what trade-offs matter to her in patient care, where she's willing to take calculated risks and where she demands certainty.
Or a product leader working with AI to evaluate strategy. She'd want the system to understand not just what she values, but how she reasons through constraints. Her appetite for certain kinds of failure. The specific questions she asks about user behavior. The way she balances speed against durability.
In each case, the "human profile" isn't about personality preferences or writing style. It's about making your decision-making visible to a tool that can then amplify your judgment rather than replace it. It's about building a model of how you think, so the AI doesn't default to generic answers but gives you back your voice, your judgment, your expertise, shaped by your lived experience.
The principle is the same across domains: the tool gets better when it understands not just what you know, but who you are.
The Implication
The deeper insight, the one that keeps coming back, is this: building a voice profile isn't actually about AI optimization. It's about knowing yourself well enough to communicate what you actually think.
It's about the difference between having thoughts and being able to defend them. Between sounding smart and being able to prove you've learned something real.
Most people never do this work. They write quickly, publish quickly, move on to the next thing. The internet rewards speed. It doesn't require depth.
But if you want to write something that lasts, that sounds like an actual human being with actual convictions, you have to do the harder work first. You have to know not just what you think but why you think it and how you arrived there.
You have to externalize your model.
Once you've done that, AI becomes useful. Not as a replacement for the thinking. As a tool for the communication. As a way to take what you know, what you've lived through, what you genuinely believe, and make it clear enough that someone else can see it too.
The knowledge graph isn't for Claude.
It's for you.