What Generative AI Means for Product Strategy and How to Evaluate It
Looking back over the past 10 years, we can see a junkyard of technologies that initially received huge hype but haven’t become — and might never become — “a thing.” Whether they failed to cross the chasm and were ultimately abandoned or they’re limping along while adoption lags, you can probably visualize exactly what we are talking about.
And yet, it seems every time a new hype cycle starts, there are people in large companies everywhere who start screaming that their own products or services will be left behind and their organizations will be doomed if they don’t embrace the wave.
Years on, for example, can you imagine what a complete flop your product would have been if you had gone all in on virtual reality (VR) experiences, issued NFTs (non-fungible tokens) for the most inconsequential things or fully integrated Bitcoin into your checkout experience?
While it would be easy to mentally toss generative AI into this pile of overhyped technologies, whether because you view it as annoying, distracting or even anxiety-inducing, doing so would be foolish.
Pandora’s box is open. Anyone with an hour of time, Google and some curiosity can see that this new generative AI technology is:
- Being rapidly adopted
- Applicable to most digital interactions
Changing Technology, Constant Principles
In this way, generative AI is much like the tablet, smartphone, telephone and PC before it. Our underlying technologies are constantly changing, while the underlying principles of users and their problems remain the same.
Innovation comes from the discovery of user problems that users themselves find important, and then solving those problems in faster, cheaper and more effortless ways. The core problem of getting from A to B was solved better by the car than the horse, but the problem remained the same.
What this means for generative AI is that the paradigm of problems that can and/or should be solved needs to shift. What was previously so difficult or unaffordable that it didn’t make sense to even consider can now be done through an API in a couple of minutes.
All of a sudden, the painful problems that have historically been too hard to solve require your attention.
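To make the “through an API in a couple of minutes” point concrete, here is a minimal sketch. It only builds the request body for a chat-completion-style endpoint; the model name, temperature and prompt are placeholder assumptions, not any specific vendor’s recommended settings. The example frames a problem that used to require a dedicated NLP team, summarizing recurring themes in free-text support tickets, as a single API request:

```python
import json

def build_summarize_request(tickets, model="gpt-4o-mini"):
    """Build a chat-completion-style request body asking a hosted LLM
    to summarize free-text support tickets.

    The payload shape follows the common {"model": ..., "messages": [...]}
    convention; the model name is a placeholder -- substitute whatever
    your provider actually offers.
    """
    prompt = (
        "Summarize the recurring problems in these support tickets:\n\n"
        + "\n".join(f"- {t}" for t in tickets)
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more consistent summaries
    }

tickets = [
    "Checkout page hangs when applying a coupon code.",
    "Coupon field rejects valid codes on mobile Safari.",
]
body = json.dumps(build_summarize_request(tickets))
```

POSTing `body` to a provider’s chat endpoint is essentially the entire integration. The hard part, as the rest of this article argues, is deciding whether the problem behind the prompt is one your users actually need solved.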
Even though, as established, generative AI is real and already here, there are probably loud voices from the top to the bottom of your own organization claiming that either you need AI in everything everywhere all at once or that you should be ignoring it because “it’s a distraction.”
As with all new product initiatives, be careful. Reacting impulsively to these voices can lead you to miss the core strategic process that we at VMware Tanzu Labs believe all product teams should be going through.
To make sure you don’t ship dud projects by overvaluing AI for its own sake, focus on the principles and set the expectation that you are experimenting with AI. After all, your product might still reach the market too early, or some unforeseeable circumstance could lead to failure.
Here’s a theoretical example:
We are sure there are loud voices inside blogging CMS platforms screaming that they need to add a one-click language model trained on an author’s blog archive so that any reader can “chat” with the blog.
But does that make any sense? Building such a feature would surely require a decent time and attention investment, but what will likely happen when shipped? Our bet: crickets.
Do users primarily read personal blogs for specific technical answers? Likely not. Users will read the blogs of people they admire, yes, to learn, but also to understand the author’s perspective, to be entertained by their curation of ideas and opinions and to stay up to date with recent developments. The user likely doesn’t know what to prompt an AI about in a chat interface, nor do they necessarily care to.
Now, would a technical blog on the mechanical intricacies of cars create value in providing a chat interface? It is more than likely, as we would bet most visitors come to this type of blog looking for specific answers to specific questions, questions that could be asked to a chat interface.
If you ship the former rather than the latter because you feel pressured to do something with AI, you’ve ignored the underlying principles and shipped a dud. Delivering what was asked will quiet those voices only temporarily before you’re on the spot for being so reactive with your road map.
Model for AI Strategy
The first step of the model for reviewing or building your AI product strategy is completely non-negotiable: You need to know your users and understand their problems. Going with your gut on what you think you know, without having done a store visit or a user interview in a couple of quarters, will lead to failure. Be honest with yourself, act and continue.
Once you have this knowledge, you need to explore and document the jobs to be done (JTBD) for your customer. We’re not necessarily advocating that you go all in on JTBD if it is not native to your existing organization, but questioning and documenting why users have chosen you, and what they are trying to get done, is fundamental here.
With an outline of what you know your customer wants you to do for them, you need to evaluate how well your product or organization performs in completing this job or task, and by what measure.
And the final step is to understand whether AI can increase this performance or render the problem or job completely irrelevant.
An example of the final step would be: Can you use AI to better deliver a single sign-on experience? Or can you use AI through a user’s webcam to continually authorize access to applications (think always-on Face ID) and remove the need for SSO at all?
For many organizations, if AI is to render the problem or job completely irrelevant, they will not seek to be the replacement, as it can be a hot-button political issue — a Kodak moment, if you will. Now is the time to avoid disaster and raise this issue if you want to continue innovating.
If it seems that generative AI can enhance how your product or organization completes the job for the user, a product leader needs to be asking, “Can AI impact this measure in a material way?” The critical element of materiality comes from understanding by what measure a user is judging success with your product.
Granularity of Evaluation
Up until this point, we have been intentionally vague as to what the product or size of product is that you are evaluating the strategy for.
That’s because product managers all the way up to the C-suite need to be working on the AI strategy for what is under their purview, whether that be a customer app or a whole swath of departments. Many products that PMs work on will be made completely irrelevant by a higher-up strategy to replace a department.
Again, although this may be an uncomfortable topic to think about, the market is moving and, as practitioners, we have a duty to consider the impact of AI.
The evaluation levels likely look like this:
- Organization
  - Why do you exist?
  - What is the core outcome you help your customers achieve?
- Department
  - Why does this department exist?
  - How does this department interact with the user/brand journey?
- Team
  - What is this team working on?
  - For which user?
  - For which outcome?
- Product
  - What does this product enable the user to achieve?
  - Is the problem this product solves caused by some other part of the user/brand journey we control?
What we believe will come out of this exercise is that the parts of an organization that touch the user journey are ripe for internally built AI features, products and experiences, while the parts that do not touch the user will be most disrupted by third-party AI tools in the future.
Learn more about cutting through the noise surrounding generative AI and make the most out of the possibilities available to you right now in our upcoming webinar, “Generative AI 101: The Realities of Generative AI and What Business Leaders Need to Know.”