Your approach to generative AI is wrong

Over the last few weeks at Hospitable I've been running a workshop with the engineering teams, an effort designed to help them scale their productivity as Product Engineers, with a focus on using AI.

It's led me down a few rabbit holes, and my working theory today is that for a lot of people there's a disconnect in how we approach generative AI in software engineering.

We will happily spend months mastering a new programming language, encountering bugs and embracing the learning curve, yet with generative AI, we expect immediate results. We're treating a revolutionary technology with less patience than we'd give to learning something like Go or Ruby.

The impatience seems logical. We're communicating with AI in natural language - the same way we've expressed ourselves since we were children. But this surface-level familiarity masks something deeper: prompting AI effectively is its own form of syntax, with rules, patterns, and conventions that take practice to learn.

Consider how we approach learning a new programming language. We start with the basic syntax, progress to understanding its paradigms, and discover its strengths and limitations. We expect to write mediocre code at first, break things, and learn from our mistakes. Yet with AI, many programmers skip this learning phase entirely, expecting flawless output from their first prompt - because they wrote it in their natural language.

Let's take JavaScript's quirks as an example - like type coercion or asynchronous programming. When we encounter these potentially new paradigms, we don't abandon the language. Instead, we learn its nuances, understand its boundaries, and develop strategies to work effectively within its constraints. Similarly, AI requires us to understand its "type system" equivalent - what kinds of prompts work best, how to structure our requests, and when to use different approaches.
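To make the analogy concrete, here's a toy TypeScript sketch (my own illustration, not something from the workshop) of the kind of quirks we happily learn to live with rather than quit over:

```typescript
// 1. Type coercion: `+` with a string operand concatenates rather than adds.
console.log(1 + "1"); // "11", not 2

// 2. Asynchronous code: an async function returns a Promise, not the value itself.
async function fetchGreeting(): Promise<string> {
  return "hello";
}

const greeting = fetchGreeting();
console.log(greeting); // Promise { 'hello' }, not the string we wanted

(async () => {
  console.log(await fetchGreeting()); // "hello", correct once we learn the model
})();
```

Nobody's first contact with these behaviours is graceful; we build an accurate mental model by getting them wrong first.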

This gap between expectation and reality stems from a fundamental misconception. We've been sold on AI as a magical solution, when it's really more like a powerful but complex programming language. Just as we wouldn't expect to master Postgres without understanding its query language, we shouldn't expect to master AI without learning its prompt engineering principles.

The most successful engineers I've seen in the wild treat AI like any other tool in their arsenal. They maintain prompts like they would code snippets. They keep track of successful patterns and failed attempts. They approach edge cases methodically, testing different prompt structures to understand what works and why. This is no different from how we debug code or optimise our database queries.
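What does "maintaining prompts like code snippets" look like in practice? Here's a minimal, hypothetical sketch - the helper names and structure are my own invention, not a prescribed library or the teams' actual setup - of a prompt template kept in the repo and reused like any other snippet:

```typescript
// A hypothetical prompt "snippet" kept under version control and reviewed
// like any other code. Names and structure are illustrative, not a standard.

interface ReviewPromptInput {
  language: string;     // e.g. "TypeScript"
  diff: string;         // the code change under review
  focusAreas: string[]; // what we've learned the model handles well
}

// Build the prompt from a template so successful patterns are reused,
// not retyped from memory into a chat window each time.
function buildReviewPrompt({ language, diff, focusAreas }: ReviewPromptInput): string {
  return [
    `You are reviewing a ${language} change.`,
    `Focus only on: ${focusAreas.join(", ")}.`,
    `Respond with a numbered list; say "no issues" if nothing stands out.`,
    ``,
    `Diff:`,
    diff,
  ].join("\n");
}

// Usage: the prompt text is now testable, diffable, and improvable over time.
const prompt = buildReviewPrompt({
  language: "TypeScript",
  diff: "- const total = a + b\n+ const total = Number(a) + Number(b)",
  focusAreas: ["type coercion bugs", "missing null checks"],
});
console.log(prompt);
```

The specifics matter less than the habit: once a prompt lives next to the code, it can be reviewed, iterated on, and shared the same way everything else is.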

For me, engineering leaders have a crucial role to play here. They need to foster environments where AI is treated as a skill to be developed, not just a button to be pressed. That means giving teams room to experiment, share findings, and build institutional knowledge about effective AI interactions.

The path forward isn't about lowering our expectations for AI; it's about raising our commitment to understanding it properly. Just as we wouldn't judge a programming language only on our first attempts to use it, we shouldn't judge AI by our initial, garbage prompts.

Let's shift our mindset from expecting magic to embracing mastery. By approaching AI with the same learning processes we apply to any new technology, we'll unlock its true potential. After all, the best engineers aren't those who expect perfection - they're those who perfect their ability to leverage the tools around them.