Stop Pretending. Start Owning It.
Why It’s Time to Be Honest About AI
I. Introduction

Let’s cut the act — everyone’s using AI. Your favorite influencer, your boss, even that writer who swears they “never would.” Whether it’s for writing, brainstorming, coding, or editing, tools like ChatGPT have become digital co-pilots for students, CEOs, creatives, and campaign managers alike. The only difference? Some of us are honest about it.
But instead of transparency, we’re stuck in performance. People use AI behind closed doors and slap their name on the results like it came to them in a divine burst of genius. They use the tools, then condemn them. They rely on automation, then preach originality. We’ve created a culture where authenticity is more about appearance than honesty — and where ego has turned the AI conversation into a minefield of denial, shame, and hypocrisy.
I launched Jake’s Take because I got tired of the pretending. I’m not here to perform purity. I use AI — every piece you read on this blog was created with its help. That doesn’t make my thoughts less real. If anything, it makes the process more honest. And that’s the point of this essay: not just to call out the liars, but to invite all of us — writers, professionals, creatives — to finally own the tools we already use.
The future of thought isn’t human or machine. It’s human with machine. And it’s time we started saying so.
II. AI Is Already Embedded in Daily Life
You don’t have to be a tech CEO or a coder in Silicon Valley to be using AI. You already are. So is your professor. So is your boss. So is that wellness influencer posting perfectly worded captions under a sunset in Tulum.
Let’s start with the obvious: writing and communication. Students use ChatGPT to brainstorm essays, clean up grammar, or summarize long readings in seconds. Office workers plug their to-do lists into AI to generate emails, reports, and even performance reviews. Microsoft has baked AI into Word and Outlook. If you’ve ever clicked “suggest reply” in Gmail, congratulations — you’ve already used machine-generated language.
Then there’s marketing. AI writes Instagram captions, punchy email subject lines, A/B-tested ad copy, SEO-driven blog posts, and even entire brand campaigns. Some of the biggest agencies in the world rely on GPT-based tools like Jasper and Copy.ai — not as a backup plan, but as the first draft.
It’s in entertainment and content creation, too. TikTok creators use AI to generate script hooks, joke setups, storytime structure, and even their posting schedules. YouTubers use it for title testing and thumbnail analysis. Podcasters use it to generate show notes and transcripts. Journalists quietly use it for headlines and ledes. The Associated Press has used AI since 2014 to write financial summaries — and that was just the beginning.
Developers? They’ve embraced AI faster than anyone. GitHub Copilot, powered by OpenAI, now generates a substantial share of new code in the files where it’s enabled; GitHub itself has reported figures of around 40% for some languages. AI helps debug faster, refactor cleaner, and complete projects in a fraction of the time. Junior coders are learning through machine-generated examples — and companies are fine with that. Productivity matters more than purity.
And let’s not forget the creatives. Need a baby name? A slogan? A wedding hashtag? A business tagline? AI’s writing it. Whether people admit it or not.
AI is not this faraway sci-fi fantasy. It’s in our browsers, our email clients, our content calendars, our codebases, our classrooms. It’s not a revolution waiting to happen — it’s one we’re already living in. And yet, we keep acting like it’s taboo.
This isn’t about the rise of the machines. It’s about the rise of dishonesty around them.
III. The Hypocrisy of Denial
Here’s where things get uncomfortable: the people using AI the most are often the loudest to condemn it. The creative director who trashes “robot writing” is the same one feeding prompts into ChatGPT to draft mood boards and brainstorm campaign names. The journalist tweeting about “human storytelling” has an AI-generated headline sitting in their inbox. The professor banning AI in class? They’re using it to write their recommendation letters.
This isn’t just ironic — it’s hypocritical.
Why? Because we’ve created a double standard. It’s fine to use Grammarly, Google, or a thesaurus. It’s fine to follow a writing template or get an editor’s help. But if you use ChatGPT? Suddenly it’s “inauthentic.” Somehow you’ve crossed a moral line. It’s treated like plagiarism, even though the thoughts are still yours — just guided, sparked, or cleaned up by a tool. Same as spellcheck. Same as Photoshop. Same as every other software we’ve normalized.
The truth is, most of the outrage isn’t about integrity. It’s about ego. People don’t want to share credit. They don’t want to admit that a machine helped make them faster, sharper, or clearer. They want to be seen as effortless geniuses, not co-authors. So they lie.
“I didn’t use AI.”
Translation: “I don’t want you to think I’m not a genius.”
This hypocrisy is especially loud in the world of influencers, thought leaders, and creators — the same people who build brands around honesty, vulnerability, and authenticity. But what’s more honest: writing with a machine and saying so, or doing it and pretending you didn’t?
It’s not AI that’s cheapening the culture — it’s the performative purity around it.
IV. Ego and Hypocrisy – The Psychology Behind the Lie
To really understand why people lie about using AI, you have to understand the ego. Not ego in the casual, cocky sense — but ego as the psychological construct of self-image. It’s the story we tell ourselves about who we are, what we’re worth, and what makes us special.
And for a lot of people, especially in creative or intellectual spaces, that story goes like this: I’m smart. I’m original. I don’t need help.
That story doesn’t leave a lot of room for AI.
So when people do use AI — for brainstorming, for writing, for ideation — the ego immediately feels threatened. The thought creeps in: Maybe I’m not as self-made as I want others to believe. That’s where the mental gymnastics begin. The denial. The rewriting of the origin story. The Instagram caption that says “I spent all day perfecting this,” when in reality, ChatGPT wrote it in 90 seconds and they spent 15 minutes editing.
This is what psychologists call cognitive dissonance — the mental tension that happens when reality contradicts our self-perception. To resolve it, we either adjust our identity... or lie.
And most people? They lie. That’s where hypocrisy comes in.
Hypocrisy is what happens when ego goes public. It’s not just that people quietly use AI while denying it — it’s that they shame others for doing the same. They project purity, bash automation, and praise authenticity while using the very tools they criticize. Not because they’re evil. But because their ego can’t afford to admit it relies on help.
Hypocrisy isn’t always malicious. Sometimes, it’s just insecurity with better lighting.
But here’s the kicker: the more we deny AI’s role in our work, the more we reinforce the idea that it should be hidden. That using it is something to be ashamed of. And that only “lesser” creators rely on it.
That lie is killing innovation. And it’s keeping people — especially those just starting out — from embracing the tools that could amplify their voices.
V. Why Owning AI Use Matters
So let’s say it plainly: using AI doesn’t make your work less valuable. Lying about it does.
When you pretend you didn’t use AI, you’re not protecting your originality — you’re undermining your integrity. Because trust, especially in a world drowning in content, is everything. And trust is built on transparency.
Readers, clients, students, audiences — they’re not asking for superhumans. They’re asking for honesty. They want to know how things are made. They want to know what’s real. And in a culture where everyone is quietly using tools like ChatGPT, the only people building real trust are the ones willing to say so.
Being honest about your process doesn’t make you lazy. It makes you credible.
Owning your AI use doesn’t make your voice smaller. It makes your platform stronger. And the longer we pretend otherwise, the harder it gets to separate the real thinkers from the performers.
VI. Jake’s Take – Walking the Walk
This isn’t just theory for me. This is practice.
I use AI. Regularly. Proudly. Transparently.
Every blog post on Jake’s Take has been created with the help of ChatGPT — from early-stage outlining to SEO optimization, from rewording awkward transitions to brainstorming sharper titles. But here’s the distinction: I don’t pretend it writes for me. It writes with me.
I still decide the direction. I still write the first thought. I still rewrite, revise, cut, question, shape. But I also let AI sharpen the edges. I use it like a writing partner who never gets tired, doesn’t care about credit, and always shows up.
Owning the role of AI in my work hasn’t weakened my voice. It’s made it clearer, faster, bolder. And, more importantly, honest.
VII. The Future Is Transparent
AI isn’t going away. It’s evolving — fast. GPT-5 is coming. Multimodal models are already reshaping how we work. Journalism, law, medicine, education — every industry is being rewritten in real time.
We don’t need fear. We need standards. AI literacy. Attribution frameworks. Cultural expectations around authorship. We need to normalize honesty about AI use the same way we normalized hyperlinks, emojis, and autofill.
The next generation of thinkers won’t be defined by who hides their tools best. They’ll be defined by how well they use them — and how openly they share their process.
The future of creativity is not human or machine. It’s human plus machine — with a layer of truth.
VIII. Conclusion – A Cultural Challenge
We’re long past the point of asking if people are using AI. They are. You are. I am. The real question is: Why are so many still pretending they aren’t?
This performative purity is exhausting — and worse, it’s dishonest. The lie isn’t just the claim that people aren’t using AI. It’s the implied claim that they’re better than the ones who admit they are.
That’s ego. That’s hypocrisy. And it’s stalling progress.
I started Jake’s Take to lead by example. To show that using AI doesn’t make your voice weaker — it makes your process stronger. It doesn’t replace your thinking — it expands it. And the only thing more dangerous than AI right now... is the lie that you’re not using it.
So here’s my challenge to you:
Be honest. Be proud. Own your tools. Share your process. Stop hiding behind the illusion of effortlessness. Stop letting your ego pretend it wrote the whole thing alone.
You don’t lose your humanity when you use AI — unless you lie about it.
And if we can build a culture where honesty is the new flex?
We might just save the soul of creativity in the process.