For the last eight years, I’ve worked as a software engineer in consulting. I’ve worn every hat available. In consulting, success is rarely just about the code; it’s about how you communicate that code to clients, stakeholders, and your team.

While I continue to drive results as a full-stack consultant, my passion for innovation has led me to pursue AI Engineering in my personal time. I have been rigorously upskilling in this domain to bridge the gap between traditional full-stack architecture and emerging intelligent systems.

As I spent hundreds of hours wrestling with large language models (LLMs), refining prompts, and debugging “hallucinations,” I realized something strange.

The skills I was using to control the AI weren’t “coding” skills. They were management skills.

In fact, the way I talk to AI has started to fundamentally change how I think about talking to people. Here are three lessons from the console that I wish I could apply to the conference room.

 

1. Context is Currency (The Delivery Lead Mindset)

In the world of AI, we talk about “Context Windows.” If you don’t give the model the right background information, constraints, and goals, it will confidently give you the wrong answer.

As a Delivery Lead and mentor, I’ve realized this is exactly why human projects fail.

When I interact with an LLM, I have to be ruthlessly precise. I can’t just say “fix this code.” I have to say: “Act as a Senior Python Engineer. Review this function for memory leaks. Prioritize readability over brevity. Here is the surrounding architecture…”
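In code, that discipline looks something like the sketch below. It uses the Anthropic Python SDK; the model name and the placeholder inputs are my assumptions, not prescriptions, so swap in whatever you actually run.

```python
import anthropic

# Placeholder inputs -- in practice these come from your codebase.
architecture_notes = "Flask API; this function runs inside a long-lived worker."
source_code = "def process(batch): ..."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: use whichever model you have
    max_tokens=1024,
    # The "delegation" lives here: role, priorities, and constraints up front.
    system="Act as a Senior Python Engineer. Prioritize readability over brevity.",
    messages=[
        {
            "role": "user",
            "content": (
                "Review this function for memory leaks.\n\n"
                f"Surrounding architecture:\n{architecture_notes}\n\n"
                f"Function under review:\n{source_code}"
            ),
        }
    ],
)

print(response.content[0].text)
```

Contrast that with the one-liner “fix this code”: same model, radically different output, purely because of the context supplied.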

This is delegation.

My experience managing teams helped me ramp up on AI quickly because I already knew how to prioritize work and emphasize critical actions. But the reverse is also true: AI has taught me that if I don’t get the result I want, the problem is usually my “prompt.”

If a Junior Developer (or an AI) goes down a rabbit hole, it’s usually because I didn’t set the boundaries of the sandbox clearly enough.

 

2. The “Reset Button” and the Sunk Cost Fallacy

We have all been there: a technical discussion that has spiraled. The team is arguing about semantics, context has been lost, and we are three layers deep in a solution that won’t work.

When this happens with an AI, I don’t argue. I don’t try to salvage the thread. I simply open a new chat window.

I provide a fresh explanation of the problem, list the symptoms I’ve seen so far, and strip away the noise of the previous failed attempts. Nine times out of ten, this “clean slate” solves the problem immediately.
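In code terms the reset itself is trivial; the discipline is deciding what survives the purge. Here is a minimal sketch (the helper name and its inputs are illustrative, not from any library):

```python
# A minimal sketch of the "hard reset": rather than appending another turn
# to a long, drifting thread, distill what you actually know and start a
# brand-new conversation. Helper and field names here are illustrative.

def hard_reset(problem: str, symptoms: list[str]) -> list[dict]:
    """Build a fresh message list from a clean problem statement,
    deliberately dropping the noise of the failed attempts."""
    summary = (
        problem
        + "\n\nSymptoms observed so far:\n"
        + "\n".join(f"- {s}" for s in symptoms)
    )
    return [{"role": "user", "content": summary}]

# The old thread, and all its accumulated confusion, is simply abandoned.
messages = hard_reset(
    problem="The nightly ETL job intermittently drops rows.",
    symptoms=[
        "Only occurs on runs longer than two hours",
        "Row counts diverge after the dedup step",
    ],
)
```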

Imagine if we could do this with humans.

In human interactions, we often fall victim to the Sunk Cost Fallacy. We keep arguing a point because we’ve already spent 30 minutes arguing it. We carry the emotional baggage of the last five minutes into the next five minutes.

While we can’t delete a colleague’s memory, there is a lesson here about the power of the “Hard Reset.” In meetings, we need the courage to say: “Let’s pause. Let’s pretend the last 20 minutes didn’t happen. If we started this problem fresh right now, what would we do?”

 

3. Radical Candor and the “Gut Check”

Perhaps the most liberating part of AI Engineering is the lack of ego.

I have zero anxiety about telling Claude, “That looks wrong,” or “Why did you do it that way? That seems like a bad implementation.”

Because I have eight years of engineering experience, I often operate on instinct. I can look at a block of code and feel that it’s “off” without immediately knowing why. With an AI, I can voice that gut check immediately. I can be vulnerable enough to say, “I don’t know why this is wrong, but it feels wrong. What would a Senior Engineer do?”

With people, we filter. We worry about office politics, impostor syndrome, or hurting someone’s feelings. We hesitate to question a bad architecture because we can’t articulate the exact reason it’s bad yet.

Working with AI has highlighted the value of “Psychological Safety.” If we could strip away the ego in code reviews the way we do with LLMs, if we could openly question “gut feelings” without fear of judgment, we would ship better software, faster.

 

The Takeaway

We treat AI prompts with immense care, iterating on them until they are perfect. We treat the AI’s output with skepticism, ready to reset the context if it drifts.

Ironically, as we build more artificial intelligence, the key to success seems to be mastering the basics of human intelligence: clarity, the willingness to start over, and the courage to speak the truth.

 

Originally posted on LinkedIn.