I had an interesting conversation over a few beers with a fellow Product Manager recently at a ProductCampATL/ProductCoffee event. To set the stage, I had already informed those present that I was currently looking for employment (and several others at the table were in the same boat). We got around to what each of us was looking for in a new job, and I asked the person sitting across from me what he did (he was still employed). His reply was "Oh, I've been teaching AI how to replace me as a Product Manager." It seems he'd been setting up routine parameters for the writing of user stories at his current job, and it was his opinion that he was becoming obsolete. We discussed this for a bit - while I had played around with AI like ChatGPT, I was under the misconception that it was still close to earlier ML constructs. He informed me that Large Language Models (LLMs) had come so far that they were pretty amazing and far superior to what I had sampled in the past.
This got me thinking, so I did a bit of testing and research. The result is a natural progression (I'm using ChatGPT as an example, but it could be any similar model, like Google Bard):
- You ask ChatGPT to define a list of User Stories - asking progressive questions and defining routines until the data sources, parameters, etc. fit what you're looking for. ChatGPT produces those stories. Note that you can build up your routines through interaction and get a refined result set.
- You then ask ChatGPT to take those stories and write the code to produce the desired results - tell it the code base, and it draws on what it has learned from public Git repos to program.
- You then ask ChatGPT to look for exploits in the code, and the fixes are applied to a revised version.
- The code is put into a test environment for you to view and use.
- You can then produce any revised graphics for your program, with parameters for style, voice, ease-of-use, etc., using ChatGPT or another AI image generator.
- Once you like the results and you've made tweaks, you've launched something new with relatively little human interaction (a rough sketch of the first two steps follows this list).
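To make that progression concrete, here's a minimal sketch of the first two steps - generating user stories, then feeding them back in for code - using the openai Python package. The model name, prompts, and helper function are my own assumptions for illustration, not a prescribed workflow:

```python
# Minimal sketch: chain two ChatGPT calls - user stories first, then
# code generated from those stories. Assumes the `openai` package
# (v1+ client) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; use whatever you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: define the user stories (a hypothetical product).
stories = ask(
    "Write user stories for a web app that lets small teams track "
    "feature requests. Format: As a <role>, I want <goal>, so that <benefit>."
)

# Step 2: hand the stories back and ask for an implementation.
code = ask(f"Given these user stories, write a minimal Python/Flask app:\n\n{stories}")
print(code)
```

In practice you'd iterate on each step rather than accept the first output - which is exactly the refinement-through-interaction described above.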
So basically, you've not only replaced the Product Owner, but also UI, Engineering, Quality Control, potentially pen testing and, at some point, probably the entire DevOps process (I left that out as I'm not sure whether it's currently covered). This scenario may seem far-fetched, but consider that we're only in nascent times when it comes to Large Language Model AIs - this is only the beginning.
So does that mean we're all out of a job, and what can we do? That's the $1M question.
Putting aside all the other tech and ops roles, the first thing that came to mind is that we as Product Managers have a rather complex role of in-between-ness - we help to corral the needs of various stakeholders and to prioritize how those needs are injected into the development process. This starts at the portfolio initiative level with strategic planning and cascades down towards tactical execution, which has its own segmentation: negotiating with stakeholders what gets built, when, and in what order. During this process there's also fulfilling the needs of sales and marketing to understand what's being delivered and to help define stakeholder benefit. Now, one can argue that an AI can replace the prioritization, negotiation and launch tasks, and I believe that's true for something really small scale (say, a five-person startup), but in any truly complex environment there are lots of humans who would need to be convinced of AI's infallibility. Also, while it would seem that AI would be better at complexity, I think that, at least initially, the more complex the construct, the harder it will be for AI to determine a best course of action (this will change over time, especially with the use of multiple, more specialized AIs).
The next thing I thought about is Evolution.
As scenarios are built in AI, doesn't the AI learn what's being asked? Doesn't that mean that any IP you own is shared as examples with everyone else that's also using the same AI? First, does it matter and second, what are the consequences?
To answer this I asked ChatGPT a few pretty basic questions:
I. "How can the information generated by AI like ChatGPT negatively influence future queries?"
While AI-generated information like ChatGPT can have many benefits, there are also potential negative impacts on future queries. Here are a few examples:
- Biases: AI systems are only as good as the data they are trained on, and if the training data is biased or incomplete, the resulting answers may also be biased or incomplete. This can lead to inaccurate or incomplete information being presented to users, which can have negative consequences.
- Misinformation: Like any source of information, AI-generated information can be subject to misinformation and fake news. If an AI system is trained on incorrect or misleading data, it may generate answers that perpetuate that misinformation.
- Lack of context: AI-generated information may lack the context that humans are able to provide. This can lead to misunderstandings or misinterpretations of the information being presented.
- Over-reliance on AI: If users become too reliant on AI-generated information, they may be less likely to critically evaluate the information they are presented with. This can lead to a lack of critical thinking skills and an inability to make informed decisions based on the information at hand.
- Dependence on technology: If AI systems become the primary source of information for users, there is a risk that users may become overly dependent on technology and lose the ability to think for themselves or access information from other sources.
II. "Can the information generated by AI like ChatGPT negatively influence any IP holders of information due to the sharing and learning aspects?"
The information generated by AI like ChatGPT can potentially negatively influence IP holders of information if the AI system is trained on copyrighted or proprietary information without permission from the IP holders. This can lead to the unauthorized use and dissemination of protected information, which can result in legal action against the AI system and its creators.
In addition, the learning and sharing aspects of AI systems can also pose a risk to IP holders. If an AI system is trained on a large dataset that includes copyrighted or proprietary information, there is a risk that the system will learn and replicate that information without permission. This can lead to the unauthorized use and dissemination of protected information, which can have negative consequences for the IP holder.
To address these issues, it is important for AI developers and users to be aware of copyright and IP laws, and to obtain proper permissions before using or training an AI system on protected information. This can help to ensure that the rights of IP holders are respected, while also allowing for the development and use of innovative AI technologies.
So at this point it sounds like there's some risk in completely using AI for the creation of working software - but why would you think otherwise? After all, we're only at the very beginning of this technology. So what does this all mean, and how will it impact us as Product Managers when it comes to our careers?
So now, my opinions (in somewhat logical order):
- So I've played around a bit with ChatGPT, and while much of the response is good, why do I sometimes get crazy answers? This is one of the downsides of using an LLM AI - for it to work with any degree of real success, the data pool has to be extremely large, and there are a lot of costs involved in maintaining the computational infrastructure needed to support it. That means small companies wanting to develop their own AI have a huge cliff to climb as a barrier to entry. Fortunately, most of the current projects are funded by large corporate entities and/or public funding with deep pockets - what we're left with is a somewhat "black box" interface where you can type in questions and parameters, mostly for free. Sounds cool, right? But most of these LLM AIs are pulling from the same publicly available web data stores - that means that while much of the data is good, there's also a lot of crap in there that can influence what is produced as an answer.
- Can AI replace all the people involved in software delivery? The answer is yes. It may not be perfect right now, but give it a few iterations and it can completely replace all of us. Of course the 80/20 rule applies - in this case it has more to do with complexity, and especially business complexity (few businesses can simply define all the nuances involved in running system applications, and when they do, there's often human decision-making involved that may be difficult to replace with an AI). For very simple problem products, anyone with some technical experience can produce working software using AI. Heck, even someone without the experience, provided they use AI effectively, can simply ask AI to come up with the best method and apply it as an execution roadmap (I think success would depend on the individual's critical-thinking skills). Where the 80/20 rule comes in is, again, complexity. Few problems are truly simple, and the order of complexity can cause clashes within what is created by AI - those conflicts are much harder to resolve, as they often need a lot of human input (of course this could change - in the future we may use AI as the arbiter of any conflicts and this may become part of the process - but for now the scenario stands).
- I don't understand AI at all - it seems like a toy? Sounds like you haven't been paying attention - that's okay, we're all busy folk trying to do our jobs. You may not see the outcome of the AI revolution until you need to look for your next job. If you haven't yet, go to chat.openai.com and create a free account for ChatGPT. Throw in some prompts and see what happens. While it may seem a bit silly now, the next time you're tasked to come up with some basic software design, input the parameters and see what comes out. Continue to refine your questioning until the responses are reasonable (see the sketch after this list). I think you'll be amazed. There's an immediate productivity benefit that will translate into faster, cheaper and better releases. You need to be on the wagon, if not driving it.
- Should you fear AI, especially when it comes to your job? Well, we all handle things differently, and "fear" is a rather subjective term. At minimum you should be prepared and using AI to increase your productivity - really, how competitive will you be when stacked up against someone who has been using AI competently since the beginning? You need to be incorporating AI into your regular processes so you are familiar with its uses - it should become second nature. This is part of your growth requirements as a Product Manager or any other person involved in software production and delivery. Those tech people without these skills will be passed over more and more as time goes by. Those who are really good at using AI will excel.
- So who will be impacted the most? Well, as the use of technology has expanded, there's been a trend in business for middle management to shrink due to the need for technical people. Basically, the more technical you are, the better you fit into today's technology-driven business world. The longer this plays out, the more you'll see the elimination of middle-management roles as the end-game. Of course there are always exceptions - there will come a point where middle managers who are good at using AI will no longer need to be techie - but we aren't there yet. Oh, and to be clear, I consider Product Management to be squarely in middle management.
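On the "throw in some prompts and refine" advice above: the refinement loop works best when each follow-up builds on the last answer, which means keeping the conversation history. A minimal sketch, assuming the openai Python package and an API key; the prompts are invented for illustration:

```python
# Sketch of iterative prompt refinement: keep the running conversation
# so each follow-up question builds on the previous answer.
# Assumes the `openai` package (v1+ client) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
history = []  # full conversation, sent with every request

def refine(prompt: str) -> str:
    """Append a prompt to the conversation and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Start broad, then tighten the question based on what comes back.
print(refine("Sketch a basic design for a customer feedback portal."))
print(refine("Too generic - assume a B2B SaaS with SSO and an existing REST API."))
print(refine("Good. Now list the user stories for just the admin role."))
```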
Next Steps
Okay, so we know that Skynet is coming and that my current Product Manager role is at risk. What can I do to remain relevant?
I've talked about steps to stay abreast of AI, but now let's get into why and how you become valuable to a company. Putting aside your current role as a Product Manager as it's currently defined, you'll need to make changes in your thinking and skill-set to stay in a reformatted Product Management job - what I'm getting at is that the role will change and you need to change with it. Those changes are still a bit up in the air, but I can think of a few things you should do to prepare:
- Figure out a list of the most useful AI tools that can be used RIGHT NOW to make your life as a Product Manager better - you'll need to understand what's out there and how it's currently being used - assume that those competing for your role are already using them.
- Stay abreast of what others are using. This may sound easy, but we tend to live in an echo chamber when it comes to product management and tech in general. If you're working with React/Python teams, are you keeping up with other tech stacks, or are you focused just on those that apply to your job? You need to understand not only what's being used but how it's being used.
- Adjust your thinking to use your critical-thinking capabilities more than usual. I think most successful Product Managers excel at critical thinking, but at the same time we tend to coast on the systems and processes we've built to take care of our engineering team and handle stakeholder demands. It's extremely important to break out of this complacency and start questioning the steps in your process with an eye toward AI utilities. For example, have you looked at Google Sheets in conjunction with a ChatGPT-powered extension? (The first sketch after this list shows a scripted version of that pairing.)
- Explore bias and how you can better form your routines to eliminate as much of it as possible by specifying your source datasets. Right now everything is wide open, as the data pool is very large - find ways of refining your datasets so they carry less bias, rather than letting a bunch of bad data influence your results.
- Be as specific as you can about context. Really, context is everything, and your ability to understand and define the context will be critical to producing good results. If you haven't been thinking about this, go through the exercise of writing a book or novel and develop a "setting" for what you're trying to do. I usually start by using a stream-of-consciousness model to produce a list, reorganize the list into some type of logical order, then write it as a narrative (the second sketch after this list shows context front-loaded this way).
- Ensure that you are allowing for copyright and patent laws in your queries and routines. I typically do this in iterative passes as a check, then make adjustments. You don't want to run into an established business patent if you can help it. If it's critical to what you're doing, then being forewarned is better than finding out through a lawsuit.
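On the Google Sheets point above: a ChatGPT-powered extension does this inside the spreadsheet UI, but the same pairing can be scripted. Here's a rough sketch assuming the gspread package with a service-account credential, the openai package, and a hypothetical sheet named "Customer Feedback":

```python
# Sketch: pull feedback rows from a Google Sheet with gspread, then
# ask ChatGPT to summarize them. The sheet name and prompt are
# hypothetical; assumes a gspread service-account credential and an
# OPENAI_API_KEY in the environment.
import gspread
from openai import OpenAI

gc = gspread.service_account()  # reads service_account.json by default
sheet = gc.open("Customer Feedback").sheet1
rows = sheet.get_all_values()  # list of rows; first row is headers

feedback = "\n".join(", ".join(row) for row in rows[1:50])  # cap the sample

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",  # assumed model
    messages=[{
        "role": "user",
        "content": "Summarize the top three themes in this customer "
                   f"feedback for a product planning meeting:\n\n{feedback}",
    }],
)
print(response.choices[0].message.content)
```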
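And on context: one way to front-load your "setting" is a system message carrying the narrative you built from your list. A minimal sketch - the persona, stack, and constraints here are invented for illustration:

```python
# Sketch: front-load context via a system message, per the "setting"
# exercise above. Assumes the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# The "setting": who you are, the domain, the constraints, the output shape.
context = (
    "You are a product analyst at a mid-size logistics SaaS company. "
    "Our stack is React and Python, our customers are warehouse operators, "
    "and compliance requires on-premise data storage. Answer in concise "
    "bullet points a product manager can paste into a planning doc."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": "What should we prioritize to reduce churn?"},
    ],
)
print(response.choices[0].message.content)
```

The same question without the system message tends to produce generic advice; the narrative setting is what steers the answer toward your actual situation.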