Something Big Is Happening… But It’s Complicated

A woman looking out a window contemplating the future

Matt Shumer’s recent article, “Something Big is Happening,” is getting significant attention.

I’m shocked people are shocked by what he wrote.

And a bit concerned.

If you haven’t read the article, which is fairly lengthy, Shumer argues AI has hit a quiet, mostly undetected, tipping point where it can now completely disrupt white-collar jobs, which, in turn, would wreak havoc on the economy and life as we know it. Shumer compares this experience to early 2020, when most of society couldn’t fathom the impact COVID-19 would have, while some small circles could clearly see the devastation headed our way.

While I agree it’s crucial to understand what’s happening and that the technology needed to accomplish what Shumer describes already exists, Shumer’s position lacks real-world context and doesn’t account for the struggle of organizational adoption and change management, societal resistance, political and economic pressure, and challenges we simply can’t foresee yet.

I’ve followed the advancement of generative AI closely for the last several years while working in corporate America and having the blessing of participating in various global marketing communities, with daily conversations on the topic.

Here’s a balanced, measured take.

I Couldn’t Sleep Last Night

My first “Oh sh*t, this is really happening” moment came in October at MAICON 2025, a marketing AI conference in Cleveland.

It was my second year attending. The structure, speakers, and brilliance were very similar to 2024, but the vibe was completely different.

In 2024, the energy was electric. People were buzzing with curiosity and optimism. The conversations centered on possibility. What could we build? How could we expand our roles? How could AI help us do more creative, more strategic, and more meaningful work?

2025 felt different.

People were still excited to reconnect and learn, but underneath it, something had shifted. The conversations weren’t just about opportunity anymore. They were about survival. About relevance, efficiency, and what we needed to build, learn, or prove in order to justify our place inside our organizations.

The vibe had shifted from offense to defense.

Renee McIntyre LinkedIn Post with AI Conference Reflection

By this point, CEOs were no longer hiding behind polite PR statements. They were openly admitting their plans to slow hiring, reduce headcount, and become AI-first organizations, and in some cases wearing those plans like a badge of honor.

It was one thing to hear the headlines individually.

But as I sat in Paul Roetzer’s keynote, and he flashed CEO quotes, one… after another… after another… 

IBM.

Amazon.

Walmart.

Ford.

Statements about hiring slowdowns, cutting the workforce, and becoming AI-first organizations working toward phasing out human roles where possible.

The list continued… The quotes and headlines went on for about 10 slides straight.

Seeing them stacked together made the pattern impossible to ignore.

Another moment stayed with me. Geoff Woods said something simple, but it left me feeling incredibly unsettled.

I’ve always told myself that my identity isn’t tied to my career and that my family, relationships, and values come first. But lying in bed that night, I had to confront an uncomfortable truth. So much of my sense of worth has been tied to what I build, accomplish, and contribute through my work.

If AI could do those things better than me, what would that mean?

Paul Roetzer’s keynote centered around AlphaGo’s famous Move 37 moment, when world champion Lee Sedol watched the AI make a move so creative, so unexpected, that it shattered his understanding of what machines were capable of. In that moment, he realized AI could surpass him at what he was THE BEST at.

I couldn’t help but wonder what my version of that moment would be. And whether it had already arrived.

I thought about who I was before my career defined so much of my daily life. The version of me who cooked just to make someone else’s day better. The version of me whose value wasn’t measured in output, productivity, or professional progress.

There was beauty in that person. Is she still here?

For the first time, I could clearly see a future where the role work plays in our identity begins to shift.

I thought about my children and the next generation entering a workforce that may look nothing like the one we inherited. About a society that isn’t openly discussing what happens if entry-level pathways disappear, and about leaders who are accelerating adoption without fully understanding the downstream consequences.

I felt fear. Pressure. Responsibility to understand what was happening, and to help others understand it too.

Shumer is right about one thing.

Something big is happening.

Not someday. Now.

But we need to be honest about where we are and realistic about when it might really happen.

This Didn’t Happen Overnight

What made the reaction to Shumer’s article so striking wasn’t what he said. It was how many people seemed to be hearing it for the first time.

This didn’t happen overnight.

For years, the people building these systems have been writing openly about what they were seeing. Not in viral headlines, but in essays, research papers, and interviews that most people never read. Dario Amodei, CEO of Anthropic, has described in multiple publications over the years watching AI systems steadily become more capable with each generation. In May 2024, his chief of staff wrote candidly about the possibility that many forms of knowledge work could eventually be automated, sharing her perspective in an essay titled “My Last Five Years of Work.” Others inside these labs have echoed similar observations.

At the same time, this conversation wasn’t confined to research circles. It’s been covered in mainstream media, discussed on podcasts, debated on conference stages, and referenced in boardrooms. CEOs have openly talked about efficiency gains and workforce reductions. Entire companies, entities, and educational programs have formed around helping businesses understand and adopt it.

None of this was hidden.

But for many people, it still felt abstract: easy to ignore, and easy to dismiss as further away than it really was.

That’s beginning to change.

Not because the technology suddenly appeared, but because its implications are becoming harder to dismiss.

It’s Not Full Steam Ahead

Shumer is probably right about one thing: the technology needed to disrupt large portions of white-collar work already exists, or is very close.

What he doesn’t fully account for is the difference between something being technically possible and it actually being implemented at scale inside real organizations.

Just because AI can do something doesn’t mean companies can instantly reorganize around it.

He’s writing from inside a world where progress moves at lightning speed. Inside AI labs and Silicon Valley, people are immersed in this every day. They have the time, talent, and infrastructure to push these systems to their limits. From that vantage point, 12 to 18 months probably feels realistic.

But that’s not the world most of us operate in.

Here’s what I see.

Inside corporate America, even when leaders understand what’s possible, turning that possibility into reality takes time. It takes retraining teams, redesigning workflows, testing, breaking, and rebuilding. And most of us are doing that while still carrying full-time responsibilities that haven’t gone anywhere.

And even if organizations could move faster, most employees wouldn’t know where to start. The majority of people still don’t understand the difference between the tools available, when to use which model, or how to push AI beyond basic tasks. Many people still treat AI as a glorified search engine and email assistant.

That’s not a criticism. It’s just reality.

Inside labs, progress is measured in months.

Inside organizations, meaningful change is typically measured in years.

There are legacy systems, budget cycles, compliance requirements, internal politics, risk tolerance, and human resistance, which is real, underestimated, and the hardest of all to overcome.

The capability is probably here, but the idea that white-collar work disappears in a year or two assumes a level of organizational agility that simply doesn’t exist at scale. A more realistic window for widespread restructuring feels closer to five years than one.

The disruption is real; the timeline is just more complicated than headlines suggest.

What’s certain is that this won’t unfold evenly. Some companies will move aggressively while others hesitate, and some individuals will adapt early, while others won’t see it coming until it directly affects them and it’s too late. 

Which camp do you want to be in?

You Ain’t Seen Nothing Yet

Make no mistake. I’m not suggesting you walk away from this article thinking we can relax. It’s quite the opposite.

Here’s what I see happening over and over again.

Someone tries AI, and it doesn’t give them exactly what they wanted the first time. They hit a wall and walk away thinking, “It’s not that good,” totally dismiss AI, and go back to their day, business as usual.

More often than not, the issue isn’t the tool… It’s the user.

AI isn’t plug-and-play magic. It takes time to understand how to prompt well, refine outputs, layer context, and iterate. It takes practice to move beyond surface-level use and into real leverage.

If you treat it like a search engine, it will behave like one. 

BUT… if you use it like a thought partner, it becomes something entirely different.

Most people just give up and assume it’s all hype, which is why there’s such a disconnect between what insiders say and what everyday people believe.

The More Imminent Disruption

Now here’s the part I think organizations are underestimating.

This isn’t just about one person using AI to outperform a department.

It’s about one person, or a very small team, being able to build an entire company from the ground up with far fewer people than ever before.

If you deeply understand your industry and you deeply understand how to leverage AI, you can: 

  • Develop strategy
  • Write code
  • Handle accounting
  • Create marketing assets
  • Analyze customer data
  • Build operational workflows
  • Launch products
  • Create an entire synthetic team of advisors 

…without hiring a massive team.

While large organizations are navigating the complexities we’ve already addressed above, a lean competitor can move fast, price lower, and iterate constantly. 

And it doesn’t require eliminating every white-collar job in 12 months to be disruptive.

It just requires enough capable builders to start undercutting existing companies.

If I were a business owner, this is what I’d be worried about today.

The Irony We’re Not Talking About

There’s another layer to all of this that’s hard to ignore.

Many of the same executives now expressing concern about potential economic disruption and workforce instability are also pushing aggressively toward automation and AI-first strategies.

To be clear, I understand why. 

Leaders are responsible for keeping companies competitive, and investors expect efficiency and the best possible financial performance. If AI can streamline operations and reduce costs, it would be irresponsible not to explore that.

This isn’t villainous behavior; it’s modern business and the reality of capitalism. BUT… when every company pulls that lever at the same time, the impact compounds beyond any one organization. What makes sense at the individual company level doesn’t translate cleanly at the societal level.

We can’t accelerate automation across industries and then act surprised when workforce instability follows.

This isn’t the time to point fingers or play the blame game, but we do need to be realistic about the importance of this moment. The question isn’t whether companies should adopt AI. They have to. But we need to come together and do it responsibly.

The goal can’t simply be short-term efficiency and cost reduction. Rather, it needs to be about how we redesign work in a way that amplifies human talent, creates new roles, and preserves long-term economic stability.

This requires more than individual companies optimizing for the next quarter. It takes society working together on coordination, foresight, policy, education reform, and leadership that looks beyond the immediate bottom line.

So What Should You Actually Do?

First, don’t panic. 

Second, DON’T IGNORE THIS.

Learn, understand, and plan.

One main point I absolutely agree with Shumer on is that you need to take AI seriously. Your future and (not to sound dramatic) the future of humanity depend on it.

1. Get a General Understanding of AI and Where We Are

There’s a lot you can read and watch right now. Here’s what has stood out to me.

If you take the time to go through these, you’ll start to see what’s possible, where we actually are, and how to think about using AI intentionally instead of reactively.

You’ll also start to understand who some of the key players are, how they think, and what they’re optimizing for.

Separately, I’d encourage you to learn about the leaders behind each lab and their long-term goals. It matters who wins this race. 

Here’s where I’d start:

AlphaGo – Available on YouTube

If you want to understand what it feels like when AI surpasses human mastery, watch this.

This documentary follows Demis Hassabis and the DeepMind team as they build the system that plays against world champion Lee Sedol in the game of Go. It’s wild to feel the psychological shift of witnessing the moment a human expert realizes a machine can generate something he could never imagine.

Co-Intelligence by Ethan Mollick

Ethan Mollick is one of the most practical voices in this space. He’s a Wharton professor who is an expert in business, education, and AI.

This book shows you how to actually work with AI as a partner in a way that is grounded, pragmatic, and empowering.

My Last Five Years of Work by Avital Balwit, Chief of Staff to Dario Amodei, CEO of Anthropic

Avital Balwit shares what it feels like to be 25 and wondering whether she may be entering the last traditional chapter of her career (her last five years), not because she plans to step away, but because the work she does could eventually be done by AI.

She wrestles openly with what that means, not just for her, but for all of us. If work changes dramatically or disappears in the form we’ve always known, how do we redefine purpose, fulfillment, and contribution as a society?

Note: this was written in May of 2024 😬

The Thinking Game – Available on Amazon Prime and YouTube TV

This one is so good, I had chills watching it.

I’m convinced Demis Hassabis is one of the most important, influential people of our lifetime. He is absolutely brilliant. He’s the founder of DeepMind, now part of Google, and one of the most important architects of modern AI. Watching how he thinks, leads, and approaches impossible problems gave me a completely different perspective on what’s unfolding and a new mindset on leadership.

You’ll come away understanding the tech and why it’s so important that we have the right people driving it for the right reasons. 

Machines of Loving Grace by Dario Amodei

This is the first in what I think of as a two-part, complementary set of essays, even though they were written at different times. Together, they feel like Dario Amodei laying out two very different potential futures so we can clearly see what’s at stake.

In “Machines of Loving Grace”, he writes from his perspective as CEO of Anthropic about what could be possible if powerful AI is developed and guided responsibly. It’s thoughtful and optimistic without being naive. He paints a picture of a world where AI accelerates scientific discovery, improves health, expands access to knowledge, and meaningfully raises the baseline of human well-being, if we get this right.

The Adolescence of Technology by Dario Amodei

This is the counterbalance.

In this essay, Amodei shifts the tone and acknowledges how unstable this phase really is. He talks about the seriousness of the technology we’re building, the speed at which it’s advancing, and the real risks that come with that power.

If “Machines of Loving Grace” shows the best-case scenario, “The Adolescence of Technology” reminds us what’s possible if we mismanage it.

Reading them together forces you to sit with both realities at once and, I hope, pushes you to pay closer attention to the role we each play in how this unfolds.

Beyond These Works…

I’d encourage you to spend time learning about the leaders behind each lab. It’s important to understand their incentives, values, and long-term goals. They vary greatly.

Whoever shapes this next chapter directly influences the future of society.

  • Demis Hassabis at DeepMind
  • Dario Amodei at Anthropic
  • Mustafa Suleyman now at Microsoft
  • Sam Altman at OpenAI
  • Elon Musk at xAI

2. Educate Yourself

Learn the basics of AI, then go deeper and learn how to use it as a true thought partner, not just a tool.

There’s endless content out there, but don’t stay at the surface. If you want a structured place to start, I highly recommend The Marketing Artificial Intelligence Institute. Their mission is to advance AI literacy for all, and they offer resources at every level, including free webinars to get you started, and they touch many industries – not just marketing.

3. Make a Plan

Anyone who tells you they know exactly how this all shakes out is lying to you. Nobody has a crystal ball to see the future.

There are too many variables, interdependencies, and incentives at play to predict a single outcome.

Once you’ve educated yourself, you’ll start to see possible paths more clearly. Think through what those paths could mean for your role, your industry, and your income. Have a primary plan. Have a backup plan. And yes, I agree with Shumer on this one: get your financial house in order. Having options matters in uncertain times.

4. Share This With Someone You Know

And have the courage to share it with someone you don’t – a line from a fitness instructor I like. 🙂

I’m not shocked by what Shumer wrote. I’m shocked that so many people are just now paying attention.

This didn’t happen overnight, and the next phase won’t either.

There’s more to say than could ever fit in one article. The implications for education, policy, economics, and identity deserve deeper conversations than we’re currently having.

This is a moment that requires participation, not passive observation.

Pay attention. Learn. Build thoughtfully. Push for responsible leadership.

The future isn’t written yet.

And it matters who helps shape it.

reneemcintyre.com


