
AI Is Blurring the Line Between PMs and Engineers


Raza Habib

CEO and Co-founder of Humanloop

February 18, 2025 - 9 min read


Last year, I was speaking to an engineering leader at a publicly traded technology company when she said something that really surprised me. I asked how important prompts were to AI applications. 

“Very,” she said, “they’re the core of the application.” 

“How do you handle the process of prompt engineering?” I asked. “Are you using notebooks? Versioning with git? Do prompts live in code? How do you do evaluation?”

Her response wasn’t what I expected. 

“No, no,” she said. “Engineers aren’t allowed to edit the prompts. It’s only the PMs and domain experts who do prompt engineering. They do it in a custom UI, and then the prompts are committed to the codebase.” 

AI applications are driven in large part by prompt engineering, and what this engineer was telling me was that at her company, prompts were not written by software engineers at all. Core parts of the logic and data that determined the character of applications serving millions of users were being written by PMs, not engineers.

Since then I’ve realized this is the start of an interesting trend. AI is blurring the line between Product Managers and engineers.

Prompts, Tools, and Knowledge Bases Are the Most Important Parts of AI Programs

Large Language Model (LLM) applications across a wide variety of use cases follow a small number of design patterns with very similar components. The simplest LLM applications are just the choice of a base model, like GPT-4o or Claude, with a prompt template. Prompt templates are strings with placeholders for variables, like f-strings in Python. There have been successful applications built with no more complexity than this; for example, early copywriting applications like CopyAI scaled to millions of dollars of revenue with this setup.

[Figure: a base model plus a prompt template. The variable “blog” in the prompt template will be replaced with real content when the AI model is called. A RAG system is similar but first does information retrieval before populating the template.]
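To make this concrete, here is a minimal sketch of a prompt template in Python. The template text and the render_prompt helper are hypothetical, for illustration only:

```python
# A hypothetical prompt template: a string with a placeholder that is
# filled in at call time, just like a Python f-string.
PROMPT_TEMPLATE = (
    "You are a marketing copywriter. Summarize the following blog post "
    "as a tweet:\n\n{blog}"
)

def render_prompt(blog: str) -> str:
    # Substitute the real content into the template before calling the model.
    return PROMPT_TEMPLATE.format(blog=blog)

# `prompt` is the string that actually gets sent to the base model
# (e.g. GPT-4o or Claude).
prompt = render_prompt("Large language models are changing how software gets built...")
```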

More complex LLM applications use structures such as retrieval-augmented generation (RAG) or agents. These applications start with the same base structure (a prompt template plus a choice of model) and augment it slightly. In the case of RAG, the main difference is that the variables in the prompt template are populated by performing information retrieval first. Agents are prompts called repeatedly in a while-loop, where the LLM has been augmented with “tools.” “Tools” for LLMs are really just APIs: when an LLM wants to use a tool, it outputs a string requesting that you call the API, and once the API has been called, the result is usually fed back into the model for its next decision.
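A minimal RAG sketch might look like the following, where search_index and call_llm are hypothetical stand-ins for a real vector store and a real model API:

```python
# A toy RAG sketch: identical to the basic pattern, except the template
# variable is populated by a retrieval step first.
RAG_TEMPLATE = (
    "Answer the question using only this context:\n{context}\n\n"
    "Question: {question}"
)

def search_index(query: str, top_k: int = 3) -> list[str]:
    # Stand-in for a real vector-store lookup.
    return ["<retrieved document 1>", "<retrieved document 2>"][:top_k]

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return "<model response>"

def answer(question: str) -> str:
    context = "\n".join(search_index(question))           # retrieval step
    prompt = RAG_TEMPLATE.format(context=context, question=question)
    return call_llm(prompt)                               # same call as before
```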

[Figure: More complex LLM applications augment a prompt template with tools, retrieval, and memory.]
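And the agent pattern reduces to a short loop. This is a toy sketch with a scripted, hypothetical call_llm that returns either a JSON tool request or a plain-text answer; real frameworks use structured tool-calling APIs, but the shape is the same:

```python
# A toy agent loop: call the model repeatedly, and whenever it requests a
# "tool" (really just an API), run the call and feed the result back in.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API

TOOLS = {"get_weather": get_weather}

# Hypothetical model: first it asks for a tool, then it gives a final answer.
_scripted_replies = iter([
    '{"tool": "get_weather", "args": {"city": "Paris"}}',
    "It's sunny in Paris, so pack light!",
])

def call_llm(prompt: str) -> str:
    return next(_scripted_replies)

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = [task]
    for _ in range(max_steps):
        reply = call_llm("\n".join(transcript))
        if not reply.startswith("{"):
            return reply                        # plain text: the final answer
        request = json.loads(reply)             # e.g. {"tool": ..., "args": ...}
        result = TOOLS[request["tool"]](**request["args"])
        transcript.append(f"Tool result: {result}")
    return "Stopped: too many steps."

print(run_agent("What's the weather in Paris?"))
```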

AI applications span a wide range of use cases, and almost all of these programs follow the same structure. It’s not the code that determines the behavior of these applications but the prompts, the choice of tools, and the base model. The same code with different prompts and models can create a chatbot for an AI tutor (Duolingo or Khan Academy), a legal assistant that negotiates contracts (Ironclad), a customer support assistant (Decagon), or an AI agent that creates accounting reports (Gusto). The code is not the most important part of an AI application: it’s the prompts and tools.

If prompts matter more than code for AI applications, then it follows that the best prompt engineers will build the best AI applications. PMs and domain experts are usually better at prompting than software engineers, so PMs will increasingly be driving AI success.

Prompting Is Here To Stay and PMs—Not Engineers—Are Going to Do It

Prompting will remain a vital part of AI applications for the foreseeable future. People have been predicting the end of prompt engineering almost since its start, but those predictions mistake hacky tricks for the discipline itself.

In 2020, when GPT-3 first made people aware of the importance of LLMs, the models were bad at following instructions. They hadn’t been finetuned or refined with Reinforcement Learning from Human Feedback (RLHF), so to get them to work you had to discover tricks or hacks that elicited good behavior. Examples of these tricks were asking the model to “think step by step” or telling the model it was a famous person. For example, you could make GPT-3 better at solving coding problems by telling it “You are John Carmack.” These types of “prompt engineering” hacks will clearly go away as the models get smarter.

True prompt engineering is clearly defining what you want the model to do. The need to explain our goals will not diminish as the models get smarter. To see that this must be true, imagine you have an API to Einstein, a way to programmatically query a very smart person, and you want to use it to accomplish some task. You would still need to clearly specify what you want him to do, and the results would ultimately be limited by the quality of your instructions and the context you share through the API. It’s true that in the future the way we communicate with models will become richer as we share images, audio, and video, but the need to clearly communicate our goals will not disappear just because AI gets smarter.

The people best placed to define the specification of what an AI product should do are product managers and domain experts. The role of the PM is already to understand the needs of the customer and distill them into a clear specification so that designers and engineers can implement it. It used to be very hard for non-technical people to be involved in the actual implementation of software because they couldn’t code, but prompt engineering changes this. By allowing non-technical people and domain experts to use English as the programming language, AI blurs the line between specification and implementation.

This isn’t just speculation; it’s already happening at major technology companies. Through my work at Humanloop, I’ve helped dozens of companies build AI products, and it’s now the norm, not the exception, that prompt and tool iteration is done by PMs or SMEs. At Duolingo it’s language learning specialists, at Gusto it’s the CS team and product managers, at Vanta it’s security experts, and at Filevine it’s lawyers.

As AI becomes more central to software products, the most important skills become clear communication, understanding user needs, and writing well. Great PMs (and engineers) epitomize these skills.

AI Is Eating Software Engineering

Prompt engineering makes product managers more like engineers, and AI assistants make engineers more like product managers. The success of GitHub Copilot and Cursor is the beginning of a trend in which engineers specify goals and the AI actually writes the code. Today only small amounts of boilerplate in popular languages are written by AI, but the trend is clearly towards AIs taking on more complex tasks. Devin is a first attempt in this direction, and though the models aren’t good enough yet, they’re rapidly improving. Some of the fastest growing startups in the world are tools like Lovable, Bolt, and v0, which let you describe a piece of software and have the AI attempt to write the whole application. Today they feel like toys, but many important tools felt like toys when they were first created.

As AI models become able to write complex applications end-to-end, the work of an engineer will increasingly resemble that of a product manager. Software engineering today already involves many tasks beyond literally writing code. Engineers constantly use their judgment on the trade-offs between different approaches. They listen to users to understand the real needs of their programs and manage the expectations of their teammates. AI will eat the technical tasks first. The literal implementation of a feature is likely to be automated fast, but the social tasks and the judgment will take much longer to automate, if they ever are. The proportion of time an engineer spends on coding will likely go down, and the proportion she spends on understanding user needs and shaping a product will go up.

AI is blurring the line between software engineers and product managers in both directions. Prompt engineering lets PMs get more deeply involved in actually building AI software rather than just defining requirements. AI coding tools let PMs prototype and are increasingly automating the parts of the engineer’s job that differ most from product design and management. Over time both roles will evolve into something new but the strong boundary we draw today between product and engineering will likely disappear, and that’s a good thing.

Prompt Engineering Requires New Types of Tooling

AI engineers today focus on wiring up applications in code, but much of the real iteration happens in defining prompts, selecting tools, and refining how data flows through the system. To do this well, teams need new kinds of tooling:

Evaluation frameworks to measure the impact of prompt and tool changes, ensuring improvements are data-driven (see the sketch after this list).

User-friendly UI for non-technical domain experts, enabling PMs, linguists, customer support teams, and security specialists to contribute directly to AI product development.

Observability tools to track how data moves through the system, debug issues, and continuously refine model outputs.
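As a rough illustration of the first point, an evaluation framework in miniature just runs each prompt variant over a test set and scores the outputs, so prompt changes are judged on data rather than vibes. This is a toy sketch, not Humanloop’s actual API; call_llm is a hypothetical stand-in for a real model call:

```python
# A toy prompt-evaluation harness: run two prompt variants over a small
# test set and compare a simple pass rate.
TEST_CASES = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

VARIANTS = {
    "v1": "Answer briefly: {input}",
    "v2": "You are a precise assistant. Answer with one word or number: {input}",
}

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return "4" if "2 + 2" in prompt else "Paris"

def pass_rate(template: str) -> float:
    # Score = fraction of test cases where the expected answer appears
    # in the model's output.
    hits = sum(
        case["expected"] in call_llm(template.format(input=case["input"]))
        for case in TEST_CASES
    )
    return hits / len(TEST_CASES)

for name, template in VARIANTS.items():
    print(name, pass_rate(template))
```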


Traditional IDEs and developer tools weren’t built for this kind of workflow, which is why platforms like Humanloop exist—to provide enterprises with the right tools to build and refine AI applications effectively. The best AI teams of the future won’t just have great engineers or great PMs—they’ll have people who can bridge the gap between the two. 

We’d love to show you how Humanloop can help you adopt this workflow. To find out more, book a demo here.

