- Lessons Learned the Hard Way
Why player / coach is not a good idea
Also: Multi-modal AI
Yesterday’s post revealed that the community is divided on whether Player / Coach is a bad or good idea. Let’s talk about it.
Here’s my case for why Player / Coach is a bad idea. I would love to hear your counterarguments in the comments.
Let’s start from first principles.
Managers exist to increase the total value of their direct reports by at least as much as the opportunity cost of hiring another IC.1
They do that by improving process, communicating and clarifying strategy, up-leveling the craft of the team, managing the performance and career progression of individuals, handling any personnel issues, and inspiring and motivating the team. Their vantage point also creates opportunities for bottom-up innovation.
The skills required to do that well are not the skills one acquires as an IC. Nor is it a part-time job. Neither is being an IC.
So a Player / Coach ends up being worse at both. They are probably brand new to management, so at a time when they are learning on the job2 and therefore everything takes longer to figure out and do well, they are being asked to fit it in alongside their IC work.
Their strength is likely in IC work, so the assumption is that they can handle that part with above-average efficiency and effectiveness. I’m not so sure. Yes, the trains run on time and features get shipped. But, like any full-time job forced into a part-time box, corners are getting cut.
It’s a recipe for underperforming and underdelivering as both a manager and an IC, or for burnout. The collateral damage is the people reporting to the Player / Coach, and the value that a full-time manager or IC would have delivered instead.
Some of you may point to situations where it does work. I would challenge you to ask if, in those situations, Product has strategic responsibilities or if they are simply overseeing delivery.3
The other argument I hear is that there’s benefit to managers being “hands on” and “close to the ground”. Yes, I agree. Good managers can engage at all altitudes, from high level strategy with leadership to delivery and data issues with engineers to direct exposure to customers. If your managers can’t do that, you don’t have good managers. The antidote isn’t player / coaches, it’s hiring better managers.
(Credit to Greg for helping shape my opinions on this.)
The Workshop
This is a newsletter-only section where I share a half-baked idea in hopes that y’all who are smarter than me can work it out with me.
Dan Hou had a really smart take that I’m still processing. I’ll quote it in full:
Most of the commentary about GPT-4o is missing the main point. Yes, it’s faster, cheaper, and feels like the movie Her. But there are much greater implications to consider.
Earlier this year, Yann LeCun, the head of AI at Meta, pointed out that the biggest LLMs are trained on text. But text is an extremely low-bandwidth way to learn how the world works compared to video. In fact, a 4-year-old child will have seen 50x more data than our largest LLMs.
Up until a couple of weeks ago, even our most sophisticated models were built around text. For example, GPT-4 handled audio by first transcribing it into text, then applying reasoning to that text. GPT-4o, however, was designed to understand video and audio natively. That has implications for how much more data future versions can be trained on. Consider that a 4-year-old will have experienced the equivalent of 16K hours of video. Compare that to YouTube, which alone contains over 150 million hours of video.
How much smarter can AI get? With a natively multi-modal architecture, I suspect the answer is: much, much smarter.
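Dan’s 16K-hours figure is easy to sanity-check with back-of-envelope arithmetic. A quick sketch (the 12-waking-hours-per-day assumption is mine, not Dan’s):

```python
# Back-of-envelope check on the "16K hours" claim, assuming a
# 4-year-old is awake (and taking in visual data) ~12 hours a day.
waking_hours_per_day = 12  # assumption, not from Dan's quote
hours_experienced = 4 * 365 * waking_hours_per_day

# YouTube's catalog, per the quote: over 150 million hours of video.
youtube_hours = 150_000_000
multiple = youtube_hours / hours_experienced

print(f"4-year-old: ~{hours_experienced:,} hours")  # ~17,520, i.e. roughly 16K
print(f"YouTube is ~{multiple:,.0f}x that")         # roughly 8,500x
```

So even a crude estimate puts YouTube’s video corpus three to four orders of magnitude beyond one childhood of visual experience, which is the scale argument Dan is making.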
I’m curious what y’all make of this.
Three things come to mind for me.
First, there is a large and growing dark side to humanity that LLMs are being trained on; the bias that already exists is only going to get worse. It’s not the LLMs’ fault; they are simply a reflection of ourselves. But it’s scary to think about, especially when we consider using LLMs in educational settings.
Second, video isn’t just communication content, it’s also a recording of physics, psychology, emotion, sociology. So it’s really interesting to think about how AI can start to — I won’t use the word understand — predict that balls thrown up will then fall down. Or that when a gunshot sound is made, people start running and look scared. Maybe AI already does this (please let me know!)
Third, Ben Thompson’s article yesterday (paywall, sorry) talked about how Microsoft is thinking about LLMs the way it thinks about computer processors: people building software on top of LLMs should be able to do so against a stable API, even as the underlying models get more and more powerful (the way app developers don’t need to care much whether you have an iPhone 14 or an iPhone 16). In his keynote, Satya Nadella made the point that Moore’s Law is being replaced by Scaling Laws for AI. Dan’s point helps me understand that even better.
1 If you don’t think the addition of a manager would deliver that, don’t hire a manager, hire another IC. I think some leaders hire middle management too early, either to solve a personal time-management issue or as a career-progression reward for an up-and-coming IC.
2 Because there is no middle manager training school, for some reason. The military figured out the value of officer training schools a long time ago. MBA programs have followed the money and become feeder programs to consulting and finance (and, more recently, IC product management).
3 If the job of Product is “build the best version of someone else’s idea”, you’re really just doing project management with requirements gathering, taking on load that is better shouldered by engineering and design. In those cases, sure, you probably have some extra time to do both. But if I were you, I’d worry about the day leadership wakes up, realizes it is vastly overpaying for the value it’s getting, and decides layoffs are the answer.