You Don’t Find Yourself — You Uncover Yourself

People talk about “finding themselves” as if the self is somewhere out there, waiting to be discovered. A new city, a new career, a new relationship — and maybe that’s where the answer lives.

But I don’t think it works that way.

All of it — meditation, spirituality, sitting quietly with nothing to do and nowhere to be, genuine reflection — what it’s actually doing is removing dust. That’s it. The dust of other people’s expectations. The dust of habits you picked up without choosing them. The dust of noise that’s been accumulating since you were old enough to be told who you should be.

The self underneath isn’t lost. It’s just covered.

When you sit still long enough, when you stop performing and planning and reacting — something becomes clearer. Not a revelation from the outside. Something from the inside that was already there. Your actual perspective on things. What genuinely matters to you, not what you’ve been told should matter. The particular way you see the world that no one else quite sees the same way.

That’s your truth. And it was never missing.

The reason practices like meditation feel profound isn’t because they add anything. It’s because they subtract. Less noise, less reactivity, less borrowed identity — and gradually, you start to see yourself more accurately. Your real strengths. Your actual values. What you bring to the world that’s genuinely yours.

The truth is within. It always has been. It just needs clarity to surface — and clarity doesn’t come from doing more. It comes from pausing long enough to let the dust settle.

Why One Waymo Accident Feels Worse Than a Thousand Human Ones

When a Waymo car gets into an accident, the whole fleet goes on trial. Every self-driving car is suddenly suspect. Every algorithm gets questioned. The incident becomes a referendum on whether autonomous vehicles should exist at all.

When a human driver causes an accident — and tens of thousands do, every day — it’s that person’s fault. Not humanity’s. Not the auto industry’s. Not every other driver on the road. Just that one individual who made a mistake, had a bad day, or was distracted for a moment.

Think about what that actually means. If autonomous vehicles caused one accident for every thousand human accidents, they’d still be considered the dangerous ones. The math doesn’t matter. The optics do. And the optics are that Waymo is a single entity — a company, a system, a hive — while humans are individuals. We extend grace to individuals. We don’t extend it to corporations or machines.

This isn’t entirely irrational. It’s just the way human psychology works. We’ve always judged collective systems more harshly than individuals. We expect institutions to be perfect in a way we’d never expect of a person. And when they’re not, we feel betrayed.

But here’s what this means going forward: this same dynamic is going to follow AI everywhere. Every AI-generated mistake will be treated as evidence of a systemic flaw. Every autonomous system that fails will trigger calls to slow down, roll back, reconsider. The human equivalent — which happens far more often and far more severely — will keep being absorbed as normal, expected, just the cost of being alive.

It’s not fair. It’s also not going away. Anyone building or deploying autonomous systems needs to understand this. The bar isn’t “better than humans.” The bar is “nearly perfect, all the time, with no exceptions.” That’s a hard bar. But it’s the real one.

Meditating on a Decision Is Not the Same as Thinking About It

For most of my life, when someone said “I need to meditate on that,” I assumed it was just a fancy way of saying they needed more time to think. Same thing, different words.

It’s not the same thing at all.

Thinking is an active process. You weigh options, run scenarios, argue with yourself, look for evidence. It’s useful when there’s a right answer that logic can uncover. But a lot of the decisions we actually struggle with aren’t like that. Whether to leave a job, end a relationship, take a leap on something unproven — these aren’t math problems. They’re gut problems.

And your gut doesn’t respond well to being cross-examined.

That’s where meditation comes in — not as a productivity hack or a stress reliever, but as a way to go inward. To stop the noise long enough for something deeper to surface. The answers to those kinds of decisions are already inside you. They’re just buried under anxiety, second-guessing, and the pressure to be rational about everything.

When you meditate on a decision, you’re not analyzing it. You’re releasing it. You’re creating the quiet that lets the right answer come forward on its own.

There’s a reason people with great instincts — experienced leaders, wise elders, anyone you’d describe as having good judgment — tend not to rush their decisions. They sit with things. They let clarity come to them rather than hunting it down.

So the next time you’re stuck on something that isn’t a factual question — something that comes down to who you are and what you actually want — try not to think about it. Meditate on it instead. Get still, get quiet, and trust that what you need to know is already in there.

It usually is.

Not Coming Into the World, But Coming From It

We often hear people say, “When I came into this world,” or “When my child came into this world.” It sounds harmless, even poetic, but there’s something subtly off about it. That phrasing suggests separation, as if a person arrives from somewhere else and drops into existence like a visitor. But what if that’s not what’s really happening at all?

What if we don’t come into the world, but come from it?

Seen this way, a human being isn’t an outsider entering reality. A child isn’t a stranger arriving on Earth. They’re an expression of the Earth itself, the same way a wave is an expression of the ocean. The wave isn’t separate from the water; it’s something the water is doing for a moment. In the same sense, a person is something the universe is doing, right here, right now.

Trees don’t “come into” a forest. Leaves don’t arrive from elsewhere. They grow out of the same living system that sustains them. The soil, the rain, the sunlight, and time itself all collaborate to produce a leaf. Humans aren’t exempt from that process. We’re made of the same elements, shaped by the same forces, and sustained by the same networks of interdependence.

When you look at it this way, individuality doesn’t disappear, but it softens. You’re still you, with your own experiences, personality, and story. But underneath that, there’s a deeper continuity. Your breath is borrowed from the trees. Your body is recycled stardust. Your thoughts arise from a nervous system that evolved through millions of years of life responding to life.

This perspective changes how separation feels. The line between “me” and “the world” becomes thinner, more permeable. Harm done to the environment isn’t happening to something external; it’s happening to the larger system that expresses itself as us. Care, compassion, and responsibility stop being moral obligations and start feeling like natural responses.

It’s a quiet but profound shift. You’re not a visitor here. Your children aren’t visitors either. They are the world, unfolding in human form, just as waves unfold on the surface of the sea. And when you see it that way, existence feels less lonely, less fragmented, and a lot more meaningful.

Why AI Conversations Still Feel Clunky—and Why That Won’t Last

Right now, using AI tools like ChatGPT or Claude feels more awkward than it should. Every time you want to start a new task, you’re pushed into creating a new chat or thread, and then you’re left hoping the AI somehow remembers context from a previous conversation. Most of the time, it doesn’t. That’s not because the AI is “dumb,” but because the underlying system simply can’t carry everything forward forever.

The root of the problem is the context window. AI models can only “see” a limited amount of conversation at once. If you keep chatting endlessly, the system has to resend more and more text back to the servers every time you say something. That costs real money in compute and bandwidth. From the companies’ perspective, forcing new chats is a practical way to keep costs under control, even if it’s annoying for users.
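To make the cost problem concrete, here’s a minimal sketch (illustrative only, not any vendor’s actual billing model) of why long chats get expensive: each new message resends the entire conversation history, so the tokens transmitted grow with every turn.

```python
# Illustrative sketch: in a stateless chat API, each turn resends the full
# history, so the total tokens billed grow roughly quadratically over a
# conversation. Message sizes below are hypothetical.

def tokens_resent_per_turn(message_sizes):
    """Given the token count of each message, return the cumulative
    history size the client must resend at each new turn."""
    sent = []
    total = 0
    for n in message_sizes:
        total += n          # the history grows by this message
        sent.append(total)  # and the whole history goes back to the server
    return sent

turns = [50, 80, 60, 120, 90]              # hypothetical token counts
print(tokens_resent_per_turn(turns))       # [50, 130, 190, 310, 400]
print(sum(tokens_resent_per_turn(turns)))  # 1080 tokens billed for 400 of content
```

Five short messages totaling 400 tokens end up costing 1,080 tokens of transmission. Stretch that over hundreds of turns and the incentive to force new chats becomes obvious.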

This is also why so many current features—projects, custom GPTs, system prompts, memory tools—feel like they’re designed for power users or developers. It’s very similar to the early internet days. Back then, using the web meant dealing with clunky browsers like Mosaic, strange interfaces, and early search engines like Lycos that required patience and technical curiosity. Normal people eventually came along, but not until the tooling matured.

We’re still in that early phase with AI. Even in late 2025, it’s only been a few years since these systems landed in the hands of the general public. What we’re using today is not the final form—it’s more like a prototype of what’s coming.

Long term, this whole “new chat, new thread” model is going to disappear. Instead of juggling conversations, everyone will likely have a single AI—more like a general contractor than a chatbot. You’ll talk to it continuously, mostly through voice. It will remember you, understand long-term context, and pull up information only when it needs to. It’ll edit things, fetch data, contact other people, or delegate tasks to other systems on your behalf.

When context windows become effectively infinite—or at least feel that way—the experience will stop being fragmented. Conversations will be ongoing and natural, not boxed into threads. What feels clunky and technical today will eventually feel invisible. And when that happens, AI won’t feel like a tool you “use” anymore—it’ll feel like something that’s just there, working quietly in the background of your life.

Life Begins Where Fear Ends

Life truly starts when fear stops running the show. It all comes down to your relationship with fear—whether you let it control you or whether you choose not to feed it. Fear only grows when you give it attention and energy. The moment you stop fueling it, it begins to lose its grip.

Most fear is rooted in things that haven’t happened yet. It lives in imagined futures, worst-case scenarios, and stories we tell ourselves about what might go wrong. But when you live in the present—the only moment you actually have—you realize something powerful. The present is a gift, and when you fully show up for it, the future has a way of taking care of itself.

Living in the present doesn’t mean doing nothing or being careless. It means giving your best effort right now, with what’s in front of you. Do your work honestly. Show up fully. Act with intention. That’s all you’re ever truly responsible for. Worrying about the future doesn’t improve it, but being fully engaged in the present often does.

Fear becomes dangerous when it paralyzes you. When you stop moving, stop trying, or stop believing in yourself, you guarantee the very outcome you’re afraid of. If you’re frozen today, tomorrow doesn’t magically improve. Momentum is created by action, not by overthinking.

Choosing faith over fear doesn’t mean you’ll never feel afraid. It means fear doesn’t get the final say. Faith is trusting that even if things don’t unfold perfectly, you’ll be able to handle whatever comes. It’s choosing belief over doubt and courage over hesitation.

And don’t forget to enjoy the journey. Life isn’t meant to be a constant battle against worry. Have fun along the way. Laugh, experiment, stumble, learn, and keep moving forward. When fear steps aside, life opens up—and that’s where growth, joy, and possibility begin.

Faith over fear. Always.

The Coming “Google Moment” for Artificial Intelligence

There’s a strong chance artificial intelligence is heading toward its own “Google moment.” Before Google, the internet already existed for decades. Researchers, academics, and technical users knew how powerful it was, but for everyday people, it was messy, confusing, and hard to navigate. Google didn’t invent the internet—it made it usable. Suddenly, anyone could find what they wanted in seconds. That single shift changed everything.

AI appears to be on a similar path. Right now, we’re in the early phase where the tools are impressive but fragmented. You have different apps, models, prompts, and workflows. But eventually, there will likely be one dominant, general AI interface that feels personal and universal. One AI per person. People won’t just say “I’m using AI.” They’ll say, “I’m talking to my Emma,” or “my Mike,” or “my Sarah.” That AI will have a consistent personality, memory, and context, and it will act as the main gateway to information, tasks, and decisions—just like Google became the gateway to the web.

That moment hasn’t fully arrived yet, but it feels close. Probably a few years out.

The first major inflection point already happened, though. In November 2022, ChatGPT was released, and AI suddenly captured the world’s attention. AI had existed for years before that. It was already being used in research labs, recommendation engines, and enterprise systems. What changed wasn’t the intelligence itself—it was the interface. Being able to simply chat with a powerful language model made AI accessible to everyone.

If you want an internet parallel, that moment was similar to when the Mosaic and Netscape browsers appeared in the mid-1990s. The internet didn’t suddenly come into existence then. It just became visible, approachable, and useful to the general public. That visibility sparked mass adoption, huge interest, and eventually massive investment.

AI is now following that same trajectory. The chat interface unlocked curiosity, funding, and experimentation at a global scale. The next phase is refinement and consolidation—moving from many tools to one trusted, personal AI that feels as natural as opening a browser or typing into a search bar.

That’s the real shift ahead: not just smarter AI, but simpler, more human access to it.

The Future of Movies Belongs to Storytellers, Not Studios

The way we think about movies is about to change forever. For over a century, Hollywood has defined filmmaking with massive sets, expensive equipment, and armies of producers and directors. But the future won’t be about cameras or actors—it will be about words. Movies will be written into existence.

Imagine a novelist sitting down at their desk, crafting a narrative with the same care they’d put into a great book. That story then becomes the prompt for artificial intelligence to generate a full-length, feature-quality film. No sets to build, no crews to hire—just imagination translated into language. The better the narrative, the better the movie.

This shift makes English majors—or really, anyone who has mastered the art of language—suddenly the most valuable “producers” of tomorrow. It won’t matter if the story begins in English, Spanish, Mandarin, or Swahili. AI won’t just make the movie; it will instantly translate and localize it for audiences around the world. A viewer in Tokyo could watch the same story as someone in New York, each experiencing it naturally in their own language.

The expensive Hollywood machine will no longer be a barrier. What will matter most is imagination—the ability to turn visions into words that AI can transform into cinema. It’s not the end of movies; it’s the start of a new golden age of storytelling, where the best writers get to see their worlds brought to life on screen.

An amazing future is coming, and it belongs to storytellers.

How Personal AI Could “See” What You See

If we want AI to be a true sidekick—one that can assist, guide, and even anticipate our needs—it has to be able to see the world the way we do. Without that, it’s just a voice in your pocket, reacting to whatever you tell it rather than proactively helping. But making an AI “see” what you see isn’t as simple as sticking a phone camera in your shirt pocket.

Phones are bulky, awkward, and not designed for continuous outward-facing use. Most people don’t even have shirt pockets anymore, and even if they did, no one wants to look like they’re wearing a GoPro on their chest. Devices like the Humane AI Pin tried to solve this problem, but the market wasn’t impressed. It looked odd, was easy to forget, and never felt socially acceptable—three big strikes for everyday adoption.

The solution likely comes down to two more natural options:
• Smart Glasses with AR: Glasses already sit where your eyes are, so they’re the perfect place for outward-facing cameras. Combined with built-in microphones and speakers (possibly directional or bone-conduction speakers that only you can hear), they could feed your AI exactly what you’re seeing in real time. This also opens the door to augmented reality overlays—directions, reminders, translations—layered right onto your world.
• Camera-Enabled Earbuds: For people who don’t wear glasses or aren’t interested in AR, an alternative could be AirPods-style earbuds with tiny outward-facing cameras. They’d sit on top of your ear, providing a natural line of sight similar to your own. This option would be less conspicuous than glasses and still give your AI enough visual input to understand your environment.

Both approaches have challenges—privacy, battery life, and social acceptance among them—but they’re the most plausible paths to giving AI true “eyes.” Once solved, your AI could act less like a voice assistant and more like a real companion who understands your world as well as you do.

From Google to ChatGPT to Agents: The Next Leap in Virtual Assistance

For years, Google has been our go-to for quick answers. You type in a question, it gives you links—or, increasingly, direct snippets. It was a huge shift in how we found information. Then came ChatGPT and similar tools, changing the game again by moving from simple retrieval to full-on conversation. Instead of just spitting out a list of websites, these systems can summarize, explain, and even adapt their tone to the user. It feels less like searching and more like talking.

But this is just the beginning. The next stage is already forming: agents. Unlike today’s chatbots, agents won’t just provide information or draft responses—they’ll act on your behalf. Imagine telling an agent, “Book me a trip to New York next weekend,” and it not only finds the best flights and hotels but also compares options, checks your calendar, reserves the bookings, and updates your itinerary. That’s far beyond what a search engine or text-based assistant can do.
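The trip-booking flow above can be sketched in a few lines. Everything here is hypothetical—the helper functions are stand-ins, not a real travel or calendar API—but it captures the shape of an agent that acts rather than just answers:

```python
# Hypothetical sketch of the "book me a trip" flow. The helpers below are
# invented stand-ins for real tools an agent would call; the point is the
# shape: gather options, compare, check constraints, then act.

def find_flights(destination, dates):
    # Stand-in for a real flight-search tool.
    return [{"flight": "A1", "price": 320}, {"flight": "B2", "price": 280}]

def calendar_is_free(dates):
    # Stand-in for a calendar check.
    return True

def book_trip(destination, dates):
    options = find_flights(destination, dates)      # find things
    best = min(options, key=lambda o: o["price"])   # compare options
    if not calendar_is_free(dates):                 # respect existing plans
        return None                                 # hand back to the user
    return {"booked": best["flight"], "dates": dates}  # do the thing

print(book_trip("New York", "next weekend"))
# {'booked': 'B2', 'dates': 'next weekend'}
```

Note what’s different from a chatbot: the loop ends in an action, not a paragraph. That last step—actually reserving something on your behalf—is the leap.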

The shift follows a clear progression. Google solved “finding things.” ChatGPT solved “explaining and creating things.” Agents will solve “doing things.” And once they can do things, the scope of what they handle will expand rapidly—from simple tasks like scheduling meetings to far more complex workflows, like managing finances, coordinating projects, or even running parts of a business.

What makes this inevitable is the blend of better AI models, access to more tools, and the growing comfort people have with delegating digital work. At first, it will be small, practical tasks. But just like how we went from looking up trivia on Google to using it as a backbone for everyday decisions, agents will grow from handy helpers to indispensable partners.

In short, what feels futuristic now will soon feel ordinary. First we searched, then we chatted, and soon, we’ll delegate. And that shift is going to change not just how we use technology, but how we live and work altogether.