· Brittany Ellich · reflection  · 12 min read

Living in the inflection point

I'm scared, I'm excited, and I'm exhausted by the pace of change. All of those things can be true at the same time. This blog post is a (hopefully) grounded take on living through AI's inflection point, why the backlash is valid, and why human connection matters more now than ever.

I’ve been trying to write this post for a while now, and I keep putting it off because I don’t think I have the right words for what I’m feeling. But I think that’s kind of the point. None of us do. We’re all just trying to make sense of a moment that doesn’t have a playbook.

Tom Dale put it better than I could:

I don’t know why this week became the tipping point, but nearly every software engineer I’ve talked to is experiencing some degree of mental health crisis… Many people [are] assuming I meant job loss anxiety but that’s just one presentation. I’m seeing near-manic episodes triggered by watching software shift from scarce to abundant. Compulsive behaviors around agent usage. Dissociative awe at the temporal compression of change. It’s not fear necessarily — just the cognitive overload from living in an inflection point.

That resonated with me more than almost anything else I’ve read about AI. Not because I’m in crisis (or at least I don’t think I am), but because I recognize myself in the “cognitive overload” part. I think a lot of us do.

The prophecy that came true way too fast

Almost exactly a year ago, then-GitHub CEO Thomas Dohmke published a post called “Scenes from an Agentic Life”. It reads like a fever dream where an unnamed agent, always referred to as “It”, runs your entire day. It shifts your meetings around. It handles the majority of your software development. It mitigates a DDoS attack while you’re walking your dog. It generates a custom Lord of the Rings game for you in two minutes flat. The whole thing is written in the second person, future tense: “You’ll live this day soon, in just a few years’ time.”

I remember reading it and talking to colleagues about how far-fetched it seemed. There was a bit of backlash, at least in the circles I was in. Part of it was the classic reaction of everyday software engineers seeing yet another C-suite exec pushing AI hype… “look at this guy trying to shove AI down our throats again.” And part of it was that the future described in that post genuinely felt like science fiction. A distant dream at best.

That was a year ago. And now? Nearly everything in that post is possible today. Not theoretical, not “in a few years.” Right now. That shift from “if” to “when” happened so fast that I think a lot of us are still catching up emotionally. Humans aren’t really built to have the entire technology landscape change so drastically in such a short amount of time.

Why it’s scary

I think it’s scary for a lot of reasons, and I want to be honest about them instead of wrapping uncertainty in toxic optimism.

It’s scary because I don’t know what the future of my own job looks like. A skillset that was previously highly valued is now much easier to replicate with AI tools. I try to be pragmatic: I don’t think engineers are going to be directly replaced tomorrow, but I do think there will be fewer of those jobs available over time. I’m probably at a seniority and capability level where I’ll be okay, at least I hope so, but it’s sad to think about what this does to folks early in their careers, and how it may turn people away from the field entirely. A field that still desperately needs more folks from diverse backgrounds, opinions, and experiences is rapidly becoming harder to break into, and whatever playbook previously existed for getting into tech is changing.

It’s also scary because of what it means for my kids. I have three little humans whose futures are a big black box right now. I have no idea what the job market will look like in fifteen years, which skills will matter, and which won’t. That uncertainty is hard to sit with as a parent, even though it will be many years before I truly need to worry about it.

And then there’s the software reliability question. It feels like there have been more software outages recently, more things breaking in unexpected ways. The software dependency world is incredibly interconnected, and it’s easy for seemingly small issues to cascade and impact a lot of people and systems. It could be that I’m attributing outages to AI unfairly, and it’s possible they aren’t actually more common at all. But it’s hard for me to imagine they aren’t at least somewhat related to AI-generated changes that software engineers don’t fully understand, and when I put on my systems-thinking hat and consider everything that depends on software running well, the potential implications of cascading failures are scary.

The backlash is valid

There’s a significant proportion of folks who are strongly against AI, and I want to be really clear: that is a completely valid response to your entire world changing. I don’t blame anyone for it.

I think a lot of the backlash is rooted in how AI was introduced to the world, which I wrote about previously. AI arrived on the tail end of the “Web 3.0” movement with blockchain, cryptocurrency, NFTs, the metaverse, etc., where the tech industry was loudly touting a future that didn’t pan out and nobody actually wanted. People were exhausted by the hype cycle. So when AI showed up with equally grandiose claims, a lot of people understandably said “here we go again.” The boy who cried wolf had already cried wolf many times, and this time the wolf might actually be real, but people were tired of reacting.

On top of that, AI was rolled out with apocalyptic messaging on one side (“this will exterminate humanity”) and mandatory adoption mandates on the other (“use AI or else”), all while people were dealing with layoffs and economic turbulence. Combine that with the environmental and socio-economic impact of recent AI investments and I think it created the perfect conditions for resentment.

All of that said, I want to gently caution against letting the backlash calcify into outright dismissal. Not because the feelings behind it aren’t valid (they absolutely are), but because I’ve watched this play out in real time, and I think that if you’ve been standing back out of skepticism, it’s now time to jump in. My own way of coping with uncertainty is to become almost obsessive about learning as much as I possibly can about the thing that’s uncertain. That’s what drew me into AI in the first place. It wasn’t just enthusiasm for the new tech and capabilities… it was also anxiety. I didn’t want to get left behind or be out of a job if AI became the predominant way software gets made. And because I’ve been learning deeply for the past year, I feel like I’m in a better position to handle what’s coming than I would be if I’d spent that year dismissing it. I’m not saying my approach is the only valid one, but I do think the folks who are actively engaging with these tools, even skeptically, are going to have a smoother transition than those who are burying their heads in the sand and refusing to look.

Learning as a coping mechanism

A lot of my interest in AI was driven by exactly that anxiety: not wanting to be left out, or out of a job, if AI becomes the predominant means of making software. This past month or so, I’ve come to the realization that it is in fact a “when” and not an “if.” The models have improved to the point where, to be honest, they’re just as good as I am at coding, and can do it in a fraction of the time it takes me. That is pretty hard to grapple with. It is both incredibly cool from the technology side and incredibly terrifying from the human side.

My way of dealing with uncertainty is to learn. That’s always been my pattern. When something is uncertain and scary, I dive into it headfirst. I read everything. I try every tool. I build things. I break things. I talk to people who know more than I do. It doesn’t make the uncertainty go away, but it makes me feel like I have some agency in the face of it.

I know that’s not everyone’s approach, and that’s okay. Different people deal with change differently. Some people need to step back before they can engage. Some people need to grieve what’s being lost before they can see what might be gained. There’s no wrong way to process a paradigm shift. But if you’re someone who’s been avoiding AI out of frustration or resentment or fear, I’d gently encourage you to look at this inflection point in model improvement as a good time to get into it. The change is happening now. It’s real. Understanding it gives you more options, not fewer. To put this in a heavily overused business metaphor, no one wants to be Blockbuster when the world is switching to Netflix.

The rising importance of glue work

Something that I didn’t expect was that the most important skill in this new world might not be technical at all.

There’s a post from zeu.dev on Bluesky that really stuck with me:

when code and commits are now easier than ever to do with AI, they are not great indicators of participation anymore and human community interactions that move a project forward like issues and comments are more important than ever.

That hit hard. Because what zeu is describing is glue work: the kind of work that Tanya Reilly described in her incredibly influential talk and post, “Being Glue”. Glue work is the stuff that makes teams actually function: noticing when people are blocked and unblocking them, onboarding new team members, setting up processes, asking the right questions in design reviews, making sure everyone’s going in roughly the same direction. Reilly’s central argument is that this work is critical to team success but historically hasn’t been rewarded or promoted the same way that writing code has. People who were great at glue work were often told they weren’t “technical enough” and were pushed toward project management roles they didn’t necessarily want.

But now, when AI can write a significant amount of the code, the people who are really good at the human stuff like coordination, communication, asking the right questions, knowing what to build and why… those people are thriving. And there’s an important nuance here: AI is good at coding, but the process of knowing what code to write is still not completely figured out. When I ask Copilot to consider how to handle things like security and scalability, it does a good job at suggesting and implementing solutions, but I usually have to ask about it first. The judgment about what to build, what to prioritize, what questions to even ask is still fundamentally human work.

Glue work, which was previously viewed as critical but not rewarded, is becoming the most critical skill. The ability to delegate effectively to AI agents, to review their output with good judgment, to connect the dots between teams and systems and people… that’s the whole job now. The folks who spent their careers doing the unglamorous work of making teams successful are suddenly the most valuable people in the room.

I find that deeply ironic and also kind of beautiful.

The unexpected upside: community

One thing that overindexing on AI experimentation has enabled for me is feeling more connected to other people than I have at any point since smartphones arrived.

Yes, the majority of that connection is online. But for the first time in a long time (really, in my entire adult life, given when smartphones came into the picture), I’m doing a much better job of keeping up with relationships, and a lot of that can be attributed to AI usage. I’m less cognitively taxed at the end of the day. I have more mental space for actual human interaction. I’m spending more time in the ATProto developer community and making real friends on Bluesky, where I hadn’t used similar microblogging platforms previously (once social media adds ads, it becomes a slow descent into enshittification, but that’s a whole other conversation). I’ve been going to conferences and learning from others in a way I wasn’t able to before AI was handling some of my cognitive load at work. And it is incredibly fulfilling.

I really think that human connection is more important than it has ever been before.

When the tools can write the code, the differentiator is the human stuff. The relationships, the community, the ability to see what needs to be done and rally people around it. The conversations that happen in the margins. The trust that gets built over shared work and shared vulnerability.

This is what I keep coming back to. In the middle of all the uncertainty and fear and cognitive overload of living through an inflection point, the thing that has grounded me the most is other people.

I don’t know what happens next. I don’t think anyone does, and I’m skeptical of anyone who claims to. I wrote this because I needed to say it out loud: that I’m scared, that I’m excited, that I’m exhausted by the pace of change, and that all of those feelings are true at the same time. Despite the fact that the robots are now writing most of the code, building relationships with other humans during this inflection point now feels like the most important work there is.
