
Thought Partner

Scott Galloway@profgalloway

Published on August 30, 2024

I’ve known Greg Shove, the CEO of Section and author of Personal Math, for over 30 years. We met when I was chairman of Red Envelope (e-commerce firm). Since then Greg has started and sold three companies. Section (Disclosure: founder/investor) helps upskill enterprises for AI. This week, we asked Greg to create a cheat sheet on how to realize personal ROI from AI.


Thought Partner

by Greg Shove

This is the summer of AI discontent.

In the last few weeks, VCs, pundits, and the media have gone from overpromising on AI to raising the alarm: Too much money has been poured in, and enterprise adoption is faltering. It’s a bubble that was overhyped all along. If the computer is what Steve Jobs called the “bicycle for our minds,” AI was supposed to be our strongest pedaling partner. It was going to fix education, accelerate drug discovery, and find climate solutions. Instead, we got sex chatbots and suggestions to put glue on pizza.

Investors should be concerned. Sequoia says AI will need to bring in $600 billion in revenue to outpace the cost of the tech. As of their most recent earnings announcements, Apple, Google, Meta, and Microsoft are estimated to generate a combined $40 billion in revenue from AI. That leaves a big gap. At the same time, the leading AI models still don’t work reliably — and they’re prone to ketamine-like hallucinations, Silicon Valley-speak for “they make shit up.” Sam Altman admits ChatGPT-4 is “mildly embarrassing at best,” but he’s also pumping up expectations for GPT-5.

Unless you’re an investor or an AI entrepreneur, though, none of this really matters. Let’s refocus: For the first time, we can talk to computers in our language and get answers that usually make sense. We have a personal assistant and adviser in our pocket, and it costs $20 a month. This is Star Trek (58 years ago) — and it’s just getting started.

If you’re using “AI is a bubble” as an excuse to ignore these capabilities, you’re making a big mistake. Don’t laugh. I know Silicon Valley tech bros need a win after their NFT/metaverse consensual hallucination. And as you’re likely not reading this on a Meta Quest 3 purchased with Dogecoin, your skepticism is warranted.

Super Soldier Serum

For all the hype of AI, few are getting tangible ROI from it. OpenAI’s ChatGPT has an estimated 100 million monthly active users worldwide. That sounds like a lot, but it’s only about 10% of the global knowledge workforce.

More people may have tried ChatGPT, but few are power users, and the number of active users is flatlining. Most people “bounce off,” asking a few questions, getting some nonsense answers, and returning to Google. They mistake GPT for “Better Google.” “Better Google” is Google. Using AI as a search engine is like using a screwdriver to bang down a nail. It could work, but not well. That’s what Bing is for.

The reason people make this mistake? Few have discovered AI’s premier use case: as a thought partner.

I’ve personally taught more than 2,000 early adopters about AI (and Section has taught over 15,000). Most of them use AI as an assistant — summarizing documents or contracts, writing first drafts, transcribing or translating documents, etc.

But very few people are using AI to “think.” When I talk to those who do, they share that use case almost like a secret. They’re amazed AI can act as a trusted adviser — and reliably gut-check decisions, pre-empt the boss’s feedback, or outline options.

Last year, Boston Consulting Group, Harvard Business School, and Wharton released a study that compared two groups of BCG consultants — those with access to AI and those without. The consultants with AI completed 12% more tasks and did so 25% faster. They also produced results their bosses thought were 40% better. Consultants are thought partners, and AI is Super Soldier Serum.

Smart people are quick to dismiss AI as a cognitive teammate. They think it can automate call center operators — but not them, because they’re further up the knowledge-work food chain. But if AI can make a BCG consultant 40% stronger, why not most of the knowledge workforce? Why not a CEO? Why not you?

AI vs. the Board

Last fall I started asking AI to act like a board member and critique my presentations before I sent them to the Section directors.

Even for a long-time CEO, presenting to the board is a test you always want to ace. We’re blessed with a world-class board of investors and operators, including former CEOs of Time Warner and Akamai. Also: Scott. I try to anticipate their questions to prepare for the meeting and to inform my operating decisions.

I prompt the AI: “I’m the CEO of Section. This is the board meeting pre-read deck. Pretend to be a hard-charging venture capitalist board member expecting strong growth. Give me three insights and three recommendations about our progress and plan.”

Claude and ChatGPT-4o’s performance was breathtaking. AI returned 90% of the same comments or insights our human board made (we compared notes). They were able to suggest the same priorities the board did (with the associated tradeoffs) — including driving enterprise value, balancing growth and cash runway, and taking on more technology risk.

Since then, I’ve used AI to prepare for every board meeting. Every time, AI has close to a 90% match with the board’s feedback. At a minimum, it helps me know most of what Scott is going to say before he says it. A free gift with purchase? The AI is nicer, doesn’t check its phone, and usually approves management comp increases. Let’s call it a draw.

Think of what this could mean for any of your high-brainpower work. Less stress, knowing you didn’t overlook obvious angles or issues. A quick gut check to anticipate questions and develop decent answers (which you will improve). And a thought partner to point out your blind spots — risks you forgot to consider or unintended consequences you didn’t think of. Whether you’re interviewing for a job, applying to business school, or trying to obtain asylum … I can’t imagine not having the AI role-play to better prepare.

Other scenarios where AI has helped as my thought partner:

  • Discussing the pros and cons of going into a real estate project with several friends as co-investors.
  • Getting a summary of all my surgical options, after uploading my MRI, to fix my busted ankle — so I can hold my own with my overconfident, time-starved Stanford docs.
  • Doing industry and company research to evaluate a startup investment opportunity.

Right now, the “smartest people in the room” think they’re “above” AI. Soon, I think they’ll be bragging about using it. And they should. Why would anyone hire a doctor, lawyer, or consultant who’s slower and dumber than their peers? Would you hire someone with a fax number on their business card? As Scott says: “AI won’t take your job, but someone who understands AI will.”

How to Use AI as a Thought Partner

  1. Ask for ideas, not answers. If you ask for an answer, it will give you one (and probably not a very good one). As a thought partner, it’s better equipped to give you ideas, feedback, and other things to consider. Try to maintain an open-ended conversation that keeps evolving, rather than rushing to an answer.
  2. More context is better. The trick is to give AI enough context to start making associations. Having a “generic” conversation will give you generic output. Give it enough specific information to help it create specific responses (your company valuation, your marketing budget, your boss’s negative feedback about your last idea, an MRI of your ankle). And then take the conversation in different directions.
  3. Ask AI to run your problems through decision frameworks. Massive amounts of knowledge are stored in LLMs, so don’t hesitate to have the model explain concepts to you. Ask, “How would a CFO tackle this problem?” or “What are two frameworks CEOs have used to think about this?” Then have a conversation with the AI unpacking these answers.
  4. Ask it to adopt a persona. “If Brian Chesky and Elon Musk were co-CEOs, what remote work policies would they put in place for the management team?” That’s a question Google could never answer, but an LLM will respond to without hesitation.
  5. Make the AI explain and defend its ideas. Say, “Why did you give that answer?” “Are there any other options you can offer?” “What might be a weakness in the approach you’re suggesting?”
  6. Give it your data. Upload your PDFs — business plans, strategy memos, household budgets — and talk to the AI about your unique data and situation. If you’re concerned about privacy, then go to Data Controls in your ChatGPT settings and turn off its ability to train on your data.
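For readers who script their AI workflows, the pattern above — set a persona, supply specific context, and ask open-ended questions — can be sketched as a small prompt builder. This is a minimal illustration, not from the post; the function name, context fields, and sample values are all hypothetical, and the resulting string would be sent as the user message to whatever chat model you use.

```python
# A minimal sketch of the "persona + context + open-ended ask" pattern.
# All names and example values here are illustrative assumptions.

def build_thought_partner_prompt(persona: str, context: dict, asks: list[str]) -> str:
    """Assemble one prompt that adopts a persona (tip 4), supplies specific
    context (tip 2), and ends with open-ended asks instead of demanding
    a single answer (tip 1)."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    ask_lines = "\n".join(f"{i}. {ask}" for i, ask in enumerate(asks, start=1))
    return (
        f"Pretend to be {persona}.\n\n"
        f"Here is my situation:\n{context_lines}\n\n"
        f"Rather than a single answer, give me:\n{ask_lines}"
    )

prompt = build_thought_partner_prompt(
    persona="a hard-charging venture capitalist board member expecting strong growth",
    context={
        "role": "CEO of a B2B education company",
        "cash runway": "18 months",
        "growth rate": "40% YoY",
    },
    asks=[
        "Three insights about our progress",
        "Three recommendations, with the tradeoffs of each",
        "One risk or blind spot I may have missed",
    ],
)
print(prompt)
```

From there, the conversation matters more than the opening prompt: follow up with tips 3 and 5 — ask the model which frameworks apply and make it defend its recommendations.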

When you work this way, the possibilities are endless. Take financial planning. Now I can upload my entire financial profile (assets, liabilities, income, spending habits, W-2, tax return) and begin a conversation around risk, where I’m missing opportunities for asymmetric upside, how to reach my financial goals, the easiest ways to save money, how to be more tax efficient, etc.

The financial adviser across the table doesn’t (in my view) stand a chance — she’s incentivized to put you into high-fee products and doesn’t have a billionth of the knowledge and case studies of an LLM. In addition, she’s at a huge disadvantage, as the person in front of her (you) is self-conscious and unlikely to be totally direct or honest — “I’m planning to leave my husband this fall.”

Coach in Your Pocket

We all crave access to experts. It’s why people show up to hear Scott speak. It’s why someone once paid $19 million for a private lunch with Warren Buffett. It’s why, despite all the bad press re: their ethics and ineffectiveness, consulting firms continue to raise their fees and grow.

But most of us can’t afford that level of human expertise. And the crazy thing is, we’re overvaluing it anyway. McKinsey consultants are smart, credentialed people. But they can only present you with one worldview, and it comes with built-in biases: framing problems only they can solve with additional engagements, and telling the person who holds the budget for follow-on work what they want to hear. AI is a nearly free expert with 24/7 availability, a staggering range of expertise, and — most importantly — inhumanity. It doesn’t care whether you like it, hire it, or find it attractive; it just wants to address the task/query at hand. And it’s getting better.

The hardest part of working with AI isn’t learning to prompt. It’s managing your own ego and admitting you could use some help and that the world will pass you by if you don’t learn how to use a computer, PowerPoint … AI. So get over your immediate defense mechanism — “AI can never do what I do” — and use it to do what you do, just better. There is an invading army in business: technology. Its weapons are modern-day tanks, drones, and supersonic aircraft. Do you really want to ride into battle on horseback?

Greg Shove

P.S. Last week Scott spoke with Stanford Professor and podcast host Andrew Huberman about the most important things we need to know about our physiological health. Listen here on Apple or here on Spotify.

 

Comments


  1. Tal Karczag says:

    How do you upload a PowerPoint or PDF or an image to ChatGPT? Or do you use a different tool?

  2. Richard Custer says:

    I agree with the article, Greg, and using AI is a game changer unlike anything any of us has experienced before. At the end of the article, though, the following thought popped up: when will the McBaiCGs of the world have their own AI in projects that are programmed to give “half-ass” answers so you hire them? A freemium, if you will… or, even more perverse, the programmers behind the guiding algorithms in AI are paid to program tweaks in the answers so the AIs lose “inhumanity” and give directed answers? Much like the fake news and deep-fake images we get to see nowadays. This is a new world where we will depend more on AI, but we will have to learn more critical thinking skills.

  3. John Brewton says:

    One more thing:

    Lastly, one of my favorite applications happens when I’m working on something and suddenly remember a book I read 14 years ago that is super relevant but that I can’t remember the intimate details of. No problem, start a new conversation with your favorite GPT about the book and its ideas. What’s important on this front is to keep drilling down. Don’t just accept the first order response and feedback, ask for more details, on specific chapters or within specific parts of a story. The output should blow your mind and deeply support your production of better work.

    • Greg Shove says:

      Great use case. And then you can also ask for another book with which to continue the research or provide a different POV.

  4. John Brewton says:

    Love this, thank you.

    I’m solely using Chat, Perplexity and Claude as my search engines and have had limited issues. Sometimes the links provided are glitchy, but it works. The key command integration of Chat into my new MacBook Pro has also made this easier.

    I remember when the BCG story hit, it was an inflection point that caused me to invest considerable time and dollars in learning all I could.

    I use the various platforms in the precise way outlined in this essay. People have to stop thinking of it as an extension or similar use case to search. The power and wonder is how much feedback and work it provides when you provide more context and ask sequential and thoughtful groups of questions. It becomes an incredible strategic partner when trying to problem-solve and put presentations and written work together. The increasing capacity of the systems to remember prior conversations and relate new questions, sometimes days or weeks later, to what you’ve previously discussed is amazing.

    • Greg Shove says:

      Those studies are not easy to run, but it feels like it’s time for a new one. We still lack enough credible, sufficiently large case studies of productivity improvements from gen AI.

  6. David Goldberg says:

    18 months ago I was the guy whose ego thought GPT couldn’t possibly benefit me (hindsight: fear). I learned to use it as you prescribe. It didn’t take long. It hasn’t changed the game for me, but I’ve been able to put a better team on the field. With better plays. And coaching staff. Nice article, thanks!

  8. Polachek says:

    Adoption of AI will be from the bottom up. Small businesses are adopting and using AI faster than enterprises. AI is the first new tech that small businesses will lead the way on.

    • Greg Shove says:

      Agreed – small biz, entrepreneurs, side hustlers – they all need the edge and will take the risk. Most large enterprises are getting stuck – in part because they treat it as a software deployment. It’s not software: it doesn’t just optimize an existing workflow but creates new ways of working.

  9. Marc Hershon says:

    You get it. The ways you outline using AI (or Timmy the Robot, as I affectionately refer to the combined models that I tend to use interchangeably) are exactly how I immediately began to engage with it. I’ve found that the more conversational I am with my input, the more that model engages with me in a like manner and even emulates excitement for something that I appear to be excited about. Having done a lot of writing collaboration over the years, I know that excited engagement is pretty much the kind of feedback and interaction you want from a thought partner. Soon more people will begin to understand and pick up on this kind of relationship with AI but, for now, I can’t help but feel like I’m just slightly ahead of the pack.

  10. John Toriello says:

    All good and I see the point of feeding more info before starting the conversation, BUT how do you assure yourself that your info (sensitive in all likelihood) remains confidential?

    • Greg Shove says:

      Set your privacy settings as “do not train the model” – eg in ChatGPT

  11. Mike Reed says:

    Best post you have written so far. Did AI write it? 😂 I think you have finally convinced me to sign up for one of your classes!

  12. Henil says:

    Hey Scott,
    Really admire you, and your podcast with Rich Roll carries a really great message for folks like me. I hope this message gets to you.
    Best,
    HP

  13. mdv99 says:

    As a software developer, I have to admit I initially dismissed it as over-hyped. Now, I can’t imagine not having it. Coding challenges that would have taken me hours to figure out can be solved by ChatGPT in seconds. More recently, I’ve used it to improve my code simply by posting it and asking, “Is there a better way to do this?” Sometimes I’ve asked it to solve really esoteric issues, and the answers are occasionally astonishingly good.

    My observations as a heavy user:

    You need to be as descriptive and specific as possible to get the best answers.
    It’s a tool, and you still need mastery of the subject you’re querying about to know if the answer is good. Frequently, it will get me 95% of the way to what I want, and that’s valuable because I can fix the remaining 5%. However, if you don’t know what you’re asking about, that 5% might as well be 100%.
    The answers are only as good as the ‘ocean’ of information out there. For example, if you’re working with a new technology like Blazor, the usefulness of the answers drops considerably because it just doesn’t have enough source material from which to formulate an answer.
    It will be interesting to see where we are in 5 years. It’s already proven to be a valuable tool in many arenas, but whether or not it ushers in the ‘singularity’ remains to be seen.

  14. michael schrage says:

    thanks for this…i very much agree with this them and thrust….i’ve found both claude and chatgpt superb tools/platforms for ‘thinking out loud’…..that said, be careful of this ‘false dichotomy:’
    “The hardest part of working with AI isn’t learning to prompt. It’s managing your own ego and admitting you could use some help and that the world will pass you by if you don’t learn how to use a computer, PowerPoint … AI. So get over your immediate defense mechanism — “AI can never do what I do” — and use it to do what you do, just better.”
    my own experience – personal, professional, consultative and executive education – i.e., have run scores of ‘promptathons’ worldwide – is that ego and defensiveness are less factors than the foolish pursuit of ‘the right’ or ‘the best’ ‘answers’…..i see the challenge as making a genuine commitment to using ‘iterative prompts’ as media and mechanisms to surface novel, innovative and provocative ‘patterns’ to reflect on, engage with and mull….absolutely explore/exploit ‘personae’ and make the LLM ‘explain itself’ and justify its responses….but – trust me – you’ll learn an awful lot about your affective/effective cognitive – and metacognitive – style from reviewing your ‘dialogues’…..

  15. Andy says:

    Best thing I have read on AI all week.
    Also, the type size on these comments is too small to be read by adults. Please make it bigger. Thank you!

  16. Nick says:

    Completely agree with the use cases in this post. The best use of AI isn’t when you are copy-pasting its output but when you are using it to flesh out your ideas and make rough drafts that you revise into a polished product.
    Great article.

  17. Jim Cockrell says:

    If we replace Scott with AI, the online content would halve as he seems to be on every show.
    Love his irreverence, insights and ability to contradict himself on a regular basis. Ever considered politics?

    Loved the article and the different perspective on how to use AI. It also has confirmed my thoughts on the hype vs reality of the current market

  18. Yariv says:

    Good read, but I think that the figures you put against the monetization of AI by those companies are false and misleading. Practically ALL of Google’s products are driven by AI: the algorithms that show you search results and match your query to advertisers are AI, the best route selected by Google Maps is AI, the algorithm that picks the next video on YouTube is AI… You could argue that all of the revenue generated by those products can be attributed to AI.
    Perhaps the right context for this article shouldn’t be AI, but chat bots, which is in itself but one specific AI application.

    • Greg Shove says:

      Fair point. I was referring to gen AI, and chat-based LLMs in particular.

  19. Don Surphlis says:

    Great post. Great examples and recommendations. I look forward to the opportunities AI can enable.

  20. Alan "Butch" Andreini says:

    Thank you—this is great! My favorite suggestion, that is confirmed by my still limited experience, is to keep questions open-ended and aimed at starting a discussion rather than driving toward a specific answer.

  21. Lucy Galbraith says:

    Diversity of thought comes from diversity of experience. Beyond checking demographic boxes, how do you ensure your AI friend has incorporated diverse perspectives?
    As long as you are using an MBA-focused, tech-savvy, “normal” AI, you will miss the insights of everyone else. Maybe that works for you, but my experience has been that it’s a big hazard in many contexts. (I’m retired. I am NOT seeking involvement in making AI better. I am noticing something that would have been very risky in my non-corporate work.)

  22. Paul says:

    At last an article on AI that makes sense. I’m in!

  23. Jim Carlson says:

    Hi guys. Very good article. FYI, I am a professor at NYU Law School and have been trying for two years now to get the law school to provide this training for the law students. Now I take 10 minutes at the end of most classes to show AI live, solving legal problems. If you are around NYU, grab a coffee. jbc

Join the 500,000 who subscribe

To resist is futile … new content every Friday.