4 Tips To Sell Your Generative AI Solution To Large Enterprises

Find out why GenAI entrepreneurs should prioritize strong guardrails and responsible AI and keep up with regulations.

April 29, 2024


Matt Carbonara (head of enterprise tech investing), Vibhor Rastogi (head of AI investing), and Cagla Kaymaz (principal), all of Citi Ventures, highlight the importance of understanding customer needs, navigating competition, ensuring safety and compliance, and finding the right balance between automation and human involvement for GenAI entrepreneurs.

From both the top down and the bottom up, large enterprises are eager to implement generative AI (GenAI) solutions that can help them increase productivity and improve business outcomes. Many are still early in their adoption process and are working through such considerations as whether to build or buy the solutions they need and which vendors to partner with.

This is creating a rare greenfield opportunity for entrepreneurs looking to build enterprise-grade GenAI tools. But while the time to move on GenAI is now, selling a GenAI solution to a large enterprise is an especially complex undertaking given the field’s additional tech requirements, data privacy concerns, and fast-evolving regulatory landscape.

Having spent the last decade-plus investing in AI startups and the last couple of years following the GenAI revolution closely, we know well what it takes to sell a GenAI product to a large and heavily regulated enterprise. Here are four best practices for doing just that.

Decide if GenAI Is the Best AI for the Job

AI tools built on large language models (LLMs) offer tremendous power and potential, but they’re not the right fit for every use case. Traditional machine learning (ML) models or even decision trees can still perform some tasks better than LLMs. For example, fraud detection, marketing, and risk models that primarily work on structured data are still best served by traditional ML, which excels at building large statistical models, making predictions, and recommending the next best action with high confidence.

So before you embark on building a GenAI product, make sure you’re doing so not to capture the hype cycle but because LLMs actually help you build a best-in-class solution to a key customer problem. Using LLMs over the models enterprises are more familiar with means both taking on significant compute or API costs and undergoing more intensive security scrutiny and onboarding processes, so if you don’t need them, it will be hard to justify the extra effort.

Furthermore, you may lose credibility with stakeholders for trying to use a sports car when a sedan will do. No matter how cool and cutting-edge the technology is, in order for a large enterprise to spend resources on it, there must be a clear, high-priority business need.


Find a Way To Take on or Dodge the Incumbents

Unlike prior technological shifts that were largely led by startups, Big Tech incumbents like Microsoft and Amazon have been fast to adopt GenAI and are even driving key advancements in the space. To beat them at their own game, you’ll need to understand which use cases they’re going after and how they’re embedding GenAI into their product suites. That will help you find a unique value proposition that incumbents aren’t well positioned to offer or execute in the near term, whether because of gaps in their talent pool or limits to their core competencies.

Make Safety and Security Your Top Priority

Large enterprises place immense value on data security and regulatory compliance, particularly when it comes to AI, so to get in the door with them, you’ll need strong guardrails for your product. In general, you should embrace the principles of responsible AI, implementing the right policies and standards or partnering with vendors that do so.

For example, many enterprises are worried about data leakage and will not let you use their data for model training; make sure they can opt out of that and still use your product effectively. It is also important to provide role-based access controls so that only people authorized to perform certain tasks or see certain documents can do so (this is particularly pertinent to retrieval-augmented generation systems).
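
To make the access-control point concrete, here is a minimal sketch of role-based filtering at the retrieval step of a RAG pipeline. The `Document` and `User` data model, the `retrieve` function, and the keyword-overlap scoring are all illustrative assumptions standing in for a real vector store and identity system, not the API of any specific product.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str] = field(default_factory=set)  # roles permitted to read this document


@dataclass
class User:
    user_id: str
    roles: set[str]


def retrieve(query: str, index: list[Document], user: User, top_k: int = 5) -> list[Document]:
    """Return only documents the user is authorized to see, ranked by a stand-in relevance score."""
    # 1. Filter by role BEFORE ranking, so unauthorized text never reaches the LLM prompt.
    visible = [d for d in index if d.allowed_roles & user.roles]

    # 2. Naive keyword overlap stands in for an embedding similarity search.
    def score(d: Document) -> int:
        return sum(word in d.text.lower() for word in query.lower().split())

    return sorted(visible, key=score, reverse=True)[:top_k]


if __name__ == "__main__":
    index = [
        Document("risk-minutes", "Q3 risk committee minutes", {"risk_officer"}),
        Document("brochure", "Public marketing brochure", {"risk_officer", "analyst", "marketing"}),
    ]
    analyst = User("u42", {"analyst"})
    # Only the brochure is returned; the risk minutes never enter the prompt context.
    print([d.doc_id for d in retrieve("risk committee minutes", index, analyst)])
```

The design choice that matters is filtering before ranking: content the user is not entitled to see should never enter the prompt context, rather than being redacted after the model has already read it.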

Enterprises tend to have a low risk tolerance for hallucinations and bias as well, and depending on the use case, they might have zero tolerance. For example, while it might be okay for a GenAI product drafting marketing copy to hallucinate on occasion, it’s unacceptable for one giving financial advice to customers.
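
One way to operationalize use-case-dependent tolerance is to gate outputs behind a groundedness check whose strictness varies with the stakes. The sketch below is a deliberately crude word-overlap heuristic with invented threshold values, not a production hallucination detector.

```python
def grounded(answer: str, sources: list[str], min_overlap: float) -> bool:
    """Crude check: every sentence of the answer should share enough words with the sources."""
    source_words = set(" ".join(sources).lower().split())
    for sentence in (s.strip() for s in answer.split(".")):
        words = set(sentence.lower().split())
        if words and len(words & source_words) / len(words) < min_overlap:
            return False  # likely ungrounded content; block it or escalate to a human
    return True


# Hypothetical per-use-case strictness: marketing copy tolerates loose grounding,
# client-facing financial advice effectively does not.
THRESHOLDS = {"marketing_copy": 0.3, "financial_advice": 0.9}


def safe_to_release(answer: str, sources: list[str], use_case: str) -> bool:
    return grounded(answer, sources, THRESHOLDS[use_case])
```

Real deployments typically layer stronger controls on top of anything this simple, such as citation attribution, bias testing, and red-teaming.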

Lastly, make sure you’re staying on top of the fast-evolving landscape of global AI regulation and articulate clearly the policies and standards you have in place to remain compliant over the long term.

Figure Out How Much You Want Humans To Be in the Loop

As excited as we are about GenAI’s potential to automate a wide range of human tasks, it still has a long way to go: the non-deterministic nature of LLMs makes it difficult to produce repeatable outputs, even when given the same input. Given many large enterprises’ low risk appetite, we recommend creating a GenAI product that falls into one of the three categories below, depending on the use case and the end user’s error tolerance:

  • Copilot: So-called AI copilots are tools meant to foster collaboration between a human user and an AI system; the AI makes suggestions so the user can better perform their task (e.g., GitHub Copilot suggests code to developers as they type). Tolerance for error is high for AI copilots, as the human is still in full control.
  • Human-in-the-loop (HIL): In HIL tools, the AI does most of the work while the human oversees and makes final decisions. For example, an AI-based wealth management tool could draft a portfolio construction recommendation that the human wealth advisor would review and revise as needed before sending it to a client (see the sketch after this list). Though there is some tolerance for error in HIL tools, the AI should be fairly reliable.
  • Autopilot: Here, the AI is fully autonomous and makes all the decisions, such as a customer service bot that converses with a user and can purchase products for them.
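
For the HIL category, here is a minimal sketch of a review-and-approve gate between a model’s draft and anything client-facing. The `generate_draft` function is a placeholder for a real LLM call, and all names are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    APPROVE = auto()
    REVISE = auto()
    REJECT = auto()


@dataclass
class Draft:
    content: str
    model: str  # which model produced the draft, kept for audit trails


def generate_draft(prompt: str) -> Draft:
    # Placeholder for a real LLM call; any provider SDK would slot in here.
    return Draft(content=f"[draft recommendation for: {prompt}]", model="example-llm")


def human_review(draft: Draft) -> tuple[Verdict, str]:
    # In production this would be a review UI for the advisor; the sketch auto-approves.
    print(f"Reviewing draft from {draft.model}:\n{draft.content}")
    return Verdict.APPROVE, draft.content


def hil_pipeline(prompt: str) -> str | None:
    """Nothing reaches the client unless a human explicitly approves it."""
    draft = generate_draft(prompt)
    verdict, final_text = human_review(draft)
    if verdict is Verdict.APPROVE:
        return final_text  # safe to send to the client
    return None  # blocked, or routed back for another draft


if __name__ == "__main__":
    print(hil_pipeline("rebalance the portfolio toward fixed income"))
```

Keeping the review step explicit also makes the path to more automation incremental: moving toward autopilot becomes a matter of relaxing or automating that gate rather than rewriting the pipeline.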

At this stage of the GenAI adoption curve, we expect AI copilots and HIL tools to be the most common types available and your best bet for ensuring enterprises will be comfortable using your technology. However, we suspect this will change in the next few years as better testing and monitoring systems make autopilots more viable.

What strategies can organizations apply to sell generative AI solutions and expand their customer base? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!



Matt Carbonara

Head of Enterprise Tech Investing, Citi Ventures

Matt Carbonara leads enterprise tech investing at Citi Ventures and works out of the Palo Alto office. He looks to meet entrepreneurs whose Enterprise IT innovations could be applicable to Citi. Matt strives to create win-win scenarios that provide customer engagement and product feedback for high-potential startups while driving innovation and business transformation for both Citi and its clients. Prior to Citi Ventures, Matt held investing, corporate development, and operating roles at both startups and large corporations.
Vibhor Rastogi

Head of AI Investing, Citi Ventures

Vibhor Rastogi is the Head of AI investing for Citi Ventures globally. Vibhor has experience originating, structuring, and executing deals in North America, Europe, and Asia in a variety of sectors. Vibhor’s portfolio companies have gone public or been acquired by strategic and financial investors for over $6B. Vibhor has also served as interim CEO/COO of portfolio companies and greatly enjoys helping companies with their growth strategy, sales strategy and operational execution of business plans.
Cagla Kaymaz
Cagla Kaymaz is a Principal at Citi Ventures where she invests in enterprise software with a focus on Data/AI infrastructure and applications. Before Citi Ventures, she spent time at Microsoft and early-stage startups where she held product, engineering, and chief of staff roles.