Influence = Value

Also: Humans as Wizards of Oz for AI products?

The most valuable products in the world are ones that influence decisions.

Google influences where you find answers to your questions.

Marketplaces (Amazon, Airbnb) influence how you spend your money.

Social media feeds (TikTok, Meta, LinkedIn) influence how you spend your attention.

It’s not just consumer products. The best enterprise products will provide data analytics[1] to influence the business decisions you make and algorithms that prioritize who your sales team calls next.

This is also why so much money is being poured into AI tech. Some of it is AI for automation and productivity, yes, but a lot of it is fighting over the next frontier of products that can influence decisions.[2]

When I’m thinking about product strategy, I think about how we can position ourselves to influence the decisions of our users.

Here are some questions to ask yourself:

  1. What are the decisions your users are trying to make?

  2. What is the common set of data that everyone will need to inform the decision?

  3. What criteria or preference set would separate what’s best for me from what’s best for you?

  4. How might we come to learn what your criteria or preference set are?

  5. How might we translate that into a ranked consideration set for decision making?

  6. How might we measure if we’re succeeding at influencing decisions?

  7. Do we have a sufficient supply of options to effectively influence a decision? How might we get or generate more?
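Questions 2 through 5 above can be sketched in code: a shared pool of options, hard filters everyone applies, and a per-user preference weighting that produces a ranked consideration set. This is a hypothetical illustration only - the field names, weights, and listings below are all invented.

```python
# Hypothetical sketch: hard filters + soft preference weights -> ranked consideration set.
# All field names, weights, and data are invented for illustration.

def rank_options(options, hard_filters, preference_weights):
    """Keep options that pass every hard filter, then rank by weighted soft criteria."""
    # Hard filters are binary and non-negotiable (e.g. minimum # of bedrooms).
    candidates = [o for o in options if all(f(o) for f in hard_filters)]

    # Soft preferences: weighted sum of subjective scores in [0, 1],
    # learned per user (question 4).
    def score(option):
        return sum(weight * option.get(criterion, 0.0)
                   for criterion, weight in preference_weights.items())

    return sorted(candidates, key=score, reverse=True)

listings = [
    {"id": "A", "bedrooms": 3, "charm": 0.9, "value": 0.4},
    {"id": "B", "bedrooms": 2, "charm": 0.5, "value": 0.9},
    {"id": "C", "bedrooms": 3, "charm": 0.3, "value": 0.8},
]

ranked = rank_options(
    listings,
    hard_filters=[lambda o: o["bedrooms"] >= 3],      # question 2: shared, objective data
    preference_weights={"charm": 0.7, "value": 0.3},  # question 3: what separates you from me
)
print([o["id"] for o in ranked])  # -> ['A', 'C']
```

The hard part, of course, is the preference weights: as question 4 implies, those have to be learned, not asked for.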

This is hard stuff. People are really bad at knowing and describing what they care about.[3] Some of it is obvious - only show me results that match hard filters, like # of bedrooms or distance from me. A lot is subjective or intangible - how does it make me feel?

Within the hard stuff is where we find big opportunity. When you are thinking about your product strategy, look for opportunities where your product can influence decisions, and start digging.

The Workshop

This is a newsletter-only section where I share a half-baked idea in hopes that y’all who are smarter than me can work it out with me.

Last time, I wrote about the implications of Reddit cutting off Bing and other non-Google search engine crawlers while they negotiate over how much money they will get paid for their content to be crawled and used as AI training data. Ben Thompson offered his take on this yesterday.

In other news, something I’ve written about a few times over the last 7 months is how LLMs, to me, feel like a UX technology, not a backend technology, because of the risk of hallucinations. That is, they’re a really powerful way for humans to interact with computers, but they’re not well suited to be the source of truth, since they can literally make stuff up and not know it. (OpenAI obviously argues otherwise, and in cases where the cost of hallucination is low, I agree it’s fine.)

I’m more confident in conversational AI sitting on top of a more traditional backend service, powered by a database or even an ML engine.

But maybe there are other solutions to the hallucination problems.

I was chatting yesterday with two co-founders who are working on some cool AI to help people make decisions (apologies for the vagueness, I’m not sure how public they are), and they brought up the idea of a human expert on the backend, Wizard of Oz style, improving the quality of the AI output.

It reminded me of how transcription services have been offering this for a few years now - AI does a first pass, and a human checks the work of the AI to correct errors (and, by doing so, generate labeled data for future AI training).
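That transcription-style loop - AI drafts, a human corrects, and each correction doubles as a labeled training example - can be sketched in a few lines. Everything here is hypothetical: the model is faked with a deliberate systematic error, and the human reviewer is simulated with known ground truth.

```python
# Hypothetical Wizard-of-Oz pipeline: AI does a first pass, a human expert
# corrects it, and every correction is banked as labeled training data.

training_examples = []

def ai_first_pass(text):
    # Stand-in for a real transcription model (invented for illustration);
    # simulates a systematic there/their error.
    return text.replace("there", "their")

def human_review(draft, ground_truth):
    # Stand-in for the human expert; here we simulate with known truth.
    return ground_truth

def process(text, ground_truth):
    draft = ai_first_pass(text)
    final = human_review(draft, ground_truth)
    if final != draft:
        # The correction becomes a labeled example for future AI training.
        training_examples.append({"input": text, "draft": draft, "label": final})
    return final

result = process("put it over there", "put it over there")
```

The interesting byproduct is that the human isn’t just fixing outputs - every disagreement between `draft` and `final` is exactly the labeled data you’d want for improving the model.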

It also reminded me of how TurboTax will, for a fee, happily connect you to a human tax expert. (Not because TurboTax is hallucinating, but because the stakes of doing something wrong feel high to the user.)

Anyway, this is all very half-baked, but it got me thinking about the potential opportunity to have human experts, either behind the scenes or as a revenue-generating upsell, increase the quality and value of the solutions our products provide. It feels unscalable on its face, but I wonder if there’s something there worth a longer look.

[1] Although, really, what we’re looking for is the analysis, not the analytics. Showing me a graph of data points over time is better than a raw data dump, but if you can also tell me what to take away from the data, you’re actually creating value.

[2] Some of this is Google defending against ChatGPT (and a bunch of vertical-specific similar solutions) as the place to get answers to questions. But I also think about Microsoft Copilot, all manner of “decision support” AI engines applied to different industries and disciplines, and obviously the OG of all of these, which is the machine learning-based algorithms that have been powering “Best Match” search results for the last 20 years.

[3] I’ve talked about this a few times but it warrants mentioning again: there are often meaningful differences between people’s expressed preferences (what they say) and revealed preferences (what they actually do). Data based on what people do is a much higher-quality signal than what they say.
