Software is Eating the World… but will Generative AI Eat it?

Market Perspectives
April 26, 2023
Annie Liao
Associate

*Thank you to the AI and industry experts, and the superstar startups, for helping shape this thesis*

Contents

Overview: An Inflection Point for Gen AI

🌍 Mapping Out the Gen AI Landscape

🎯 Deep-dive: Defensibility Thesis for AI-native Apps

🚀 Six Future Predictions for Gen AI

Overview: An Inflection Point for Gen AI

The global Gen AI market is anticipated to reach USD 109 billion by 2030, growing at an astronomical CAGR of 35% over the next six years (Bloomberg). As Apple CEO Tim Cook attests, it will "affect virtually everything we do… every product and every service that we have". This growing market is a testament to underlying demand- and supply-side drivers.
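
For a sense of what those headline figures imply, here is a quick back-of-the-envelope compound-growth check. This is only a sketch that takes the quoted 2030 target and CAGR at face value; the implied base-year figure is derived from them, not separately sourced.

```python
# Back-of-the-envelope check of the quoted market figures (illustrative only).
target_2030 = 109e9  # USD 109 billion projected by 2030
cagr = 0.35          # 35% compound annual growth rate
years = 6            # "the next six years"

# Compound growth: target = base * (1 + cagr) ** years, so solve for the base.
implied_base = target_2030 / (1 + cagr) ** years
print(f"Implied market size today: ~${implied_base / 1e9:.0f}B")  # roughly $18B
```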

This rapid increase in activity has been driven by four key factors:

  1. Lowered barriers to entry for builders: Advancements in transformer models, the availability of GPU compute, natural language interfaces, and big data have greatly reduced the cost and time to market for builders in the space.
  2. Unlocking performance uplift and new innovation capability: New LLMs can outperform traditional models, unlocking higher-order, human-intelligence-like capabilities and use cases; from co-pilots for sales staff to the creation of new virtual worlds.
  3. Commercial demand and industry disruption: Post Covid-19, demand for digitised AI/ML solutions has increased, underpinned by trends towards hyper-personalisation, modernised workflows, and the push to lower costs and raise productivity.
  4. Increased consumer acceptance, but also headwinds with regulation and ethics: Despite a drastic inflection in consumer adoption of AI (a faster path to PMF and PSF), copyright, hallucinations and model biases remain unresolved headwinds. You can get the summarised download of the latter with Sharly AI’s tool here!

To understand the implications of this, we start by mapping the Gen AI landscape.

Mapping out the Gen AI Landscape

We segment the landscape into five layers, with three types of players across each:

  1. Tech hyperscalers e.g., AWS, Google, Microsoft
  2. Enterprise SW players e.g., SAP, Salesforce
  3. “Startups”, either AI-native or AI enabled

📌 P.S. we are building an Australian Top 50 Gen AI Startups Market Map; if you know anyone who should be added to the list, please submit here

A take on the emerging Gen AI landscape

Apps

Overview: End-user-facing interfaces that incorporate LLMs into the product, either by implementing their own model pipelines (E2E apps) or through third-party APIs.
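
As an illustration of the third-party-API route (the lighter-weight of the two approaches), here is a minimal sketch using the OpenAI Python SDK as it looked in early 2023; the model name, prompt and helper function are placeholders, and an E2E app would swap this call out for its own model pipeline:

```python
# Minimal sketch of an app-layer feature wrapping a third-party LLM API (illustrative).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_sales_email(customer_name: str, product: str) -> str:
    """Turn structured app inputs into a prompt and delegate generation to the API."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder; any hosted chat model could sit here
        messages=[
            {"role": "system", "content": "You are a helpful sales co-pilot."},
            {"role": "user",
             "content": f"Draft a short outreach email to {customer_name} about {product}."},
        ],
    )
    return response["choices"][0]["message"]["content"]
```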

The verticals segment is nascent, with an influx of players pursuing exciting new use cases. Hidden Door, which is building an E2E platform for game developers using Gen AI, is a great example of this. Non-AI-native apps are also adapting to become AI enabled, e.g., layering a chatbot onto an existing SaaS business model. For the functional BU and productivity segments, we have already seen the competitive landscape begin to mature and the market fragment, as with CodeWhisperer vs GitHub Copilot.

Players here will have the most flexibility over margins and business models, and this will vary by use case.

Right to win: Core value drivers include unique distribution channels (access to end users), access to data, tuning for contextual rules, AI talent, agility, and the ability to ensemble multiple models.

👉 See the full deep dive on app defensibility here.

Players: Large tech players will be sparse here due to the agility needed to build across the stack and their lack of incentive to do so. Expect SW players to begin vertically integrating into the app layer for functional use cases. AI-native apps have the opportunity to disrupt the existing strongholds of AI-enabled incumbents.

Enablers

Overview: As the app layer grows, so will the need for MLOps, model sharing and hosting platforms; essentially democratising ML. This layer is nascent and growing, spanning players like Hugging Face, which provides model distribution, ML tooling and training environments, and newer entrants such as Gloo, which enables companies to tap into domain-specific data. Players typically monetise through white-glove B2B solutions or off-the-shelf enterprise licensing.

Right to win: For distribution and tooling: faster and cheaper model deployment, breadth of modalities provided, network effects and community moats. For MLOps: variety and quality of features (e.g., data visualisation and data vectorisation), UX, cost to use and upstream partnerships. Secondarily, smart sales strategies and transparency on data lineage will be additional differentiators.

Players: All three types of players have a right to win in this sector. For hubs, hyperscalers are expected to begin vertically integrating into this space in the medium term, building stickiness by limiting access to their LLMs (e.g., Amazon with Bedrock and its Titan FMs). For tooling, consulting firms such as BCG and Accenture will also compete in the contract race to white-label products. There is room for new entrants building products specific to Gen AI, but expect relatively high barriers to entry as existing startups horizontally integrate, e.g., Relevance AI.

LLMs

Overview: This layer includes large language models (LLMs), which are either open source (free to access) or closed source (fees required). In many cases, LLM players take a picks-and-shovels approach to monetisation. Open-source models can be accessed through distribution platforms and are often static but allow more customisation. Domain-specific models are attuned to verticals and include the likes of BioBERT by Nvidia through to the FoodUDT-1B model, which is pre-trained on 500K recipes!

Right to win: Access to capital, AI talent, access to data, speed to deployment, breadth of modalities, partnerships with data providers/end users who have existing user data, partnerships with downstream software and hardware providers for reduced costs.

Players: Although the cost of training models is declining, tech hyperscalers are still expected to win given resources and scale.

Cloud platforms

Overview: Cloud platforms facilitate the computing, networking, storage and middleware needed for cloud deployment. New startups are deciding which platform to build on, with many trying to build in a platform-agnostic way to minimise switching costs, as no dominant player has emerged in this market yet. Advancements are happening daily, e.g., Nvidia’s new platform and Amazon’s new EC2 Trn1n instances.

Right to win: Pricing, partnerships (commitments with workload providers, e.g., Azure with OpenAI, Google with Anthropic and AWS with Stability AI) and functionality. Secondary drivers include optimising the energy efficiency of computationally intensive workloads.

Players: Hyperscalers have a strong right to win in this market given their access to existing offerings.

Specialised hardware

Overview: Accelerator chips optimised for model training and inference workloads. The development of specialised processors such as GPUs and TPUs has made it possible to perform the large-scale computations required for AI training.

Right to win: Performance and speed of inference, cost efficiency, access to capital, and distribution; e.g., Intel has taken an E2E approach with a “build once, deploy all” concept.

Players: An oligopoly of large players such as Nvidia, Intel and AMD; Nvidia’s GPUs account for 95% of data center GPUs. Due to the capital-intensive nature of the space, we don’t expect new entrants to play unless they have an R&D advantage.

Deep-dive: Defensibility Thesis for AI-Native Apps

For apps, where we are seeing the most rapid influx, we look at defensibility through the lens of supercharging the network flywheel below.

Gen AI flywheel for AI-native apps

Five key questions for anyone building or investing in the field to ask:

1. Will people pay for this and is it solving a real problem?

Like any good business, we need to look at the fundamentals. I’ll refer to my colleague and mentor Eric Tran’s approach to this, which focuses on evaluating the pain point and value proposition using the ‘4U Need’ framework (is it unworkable, unavoidable, urgent and underserved?).

When looking at vertical use cases, assessing the SAM and customers’ willingness to pay for the solution (the incremental benefit over existing solutions) is very important.

2. Do you have unique access to data and/or contextual business rulesets?

Data can come from: I) purchasing or agreements (e.g., data partners), II) an ongoing user-engagement flywheel (synthetic data), or III) existing legacy data.

Naveen Rao, CEO of MosaicML, summarises this well: “When companies build their own datasets, they actually want to bring out unique insights that their competitors potentially don’t have”.

Startups embed this by running prompt-tuning engines over domain-specific datasets (using vectorised databases). Eileen breaks this concept down here. Our hypothesis is that smaller sets of supervised, labelled data will provide more benefit than “wider” datasets, allowing for increased accuracy and access moats that make the product more defensible.

Business rulesets relate to use-case-specific contextual and sensitivity tunings, e.g., for game co-pilots, what makes a great event, quest or mission; or for MarTech, ensuring brand rules are stringently adhered to. Building a “store” of these and fine-tuning the model and user experience around them is a great defensive moat to pursue in addition to data.
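
To make the pattern concrete, here is a minimal sketch that combines a domain-specific vector index with a “store” of business rules to assemble a grounded, rule-constrained prompt. Everything here (the embed placeholder, RULE_STORE, the example documents) is hypothetical; a real product would use a proper embedding model and a managed vector database rather than this toy in-memory index.

```python
# Sketch: domain data in a vector index + a business-rule "store" -> grounded prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; in practice, call an embedding model or API here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(384)
    return vec / np.linalg.norm(vec)

# I) Proprietary domain documents, embedded into a small in-memory index
DOMAIN_DOCS = [
    "Quest design guide: great quests have a clear goal, a twist, and a reward.",
    "Event design guide: seasonal events should reuse existing maps and assets.",
]
INDEX = [(doc, embed(doc)) for doc in DOMAIN_DOCS]

# II) Contextual business rules (the "store" of rulesets described above)
RULE_STORE = [
    "Stay within the game's PG rating.",
    "Adhere to brand tone: playful and concise.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k domain documents most similar to the query."""
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: -float(q @ item[1]))
    return [doc for doc, _ in ranked[:k]]

def build_prompt(user_request: str) -> str:
    """Combine retrieved domain context and business rules into one prompt."""
    context = "\n".join(retrieve(user_request))
    rules = "\n".join(RULE_STORE)
    return f"Context:\n{context}\n\nRules:\n{rules}\n\nTask: {user_request}"

print(build_prompt("Design a side quest for the forest level."))
```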

Key considerations are the hurdle-rate risk for startups in getting this data flywheel running and the time it takes to achieve it.

3. Is there a unique distribution advantage (unique access to the end customer)?

This can be achieved in two ways: I) access to distribution partners (breadth of reach) or II) integration into existing workflows (depth of reach). Notably, many startups are choosing to pursue both B2C and B2B, a two-pronged approach to accelerate the distribution node of the flywheel.

Startups that are first movers and able to lock in industry partners through long-term contracts, and/or those that embed themselves into workflows to ensure stickiness, will win in this field. For B2C in particular, we look for products with virality. Sharly AI and Lensa are great examples of first movers who have unlocked this advantage.

4. Is the startup set up agilely, so that learnings and outputs can be iterated on quickly?

With lowered barriers to market entry and the ability to test product features more quickly, AI-native startups that set up their infrastructure agilely will find PMF faster than non-native apps. AutoGPT will supercharge this. To enable it, startups need to build solid data pipelines, infrastructure and storage solutions.

This will be a key differentiator for AI-native apps looking to outcompete existing startups in verticals.

5. Does the team have the technical expertise to execute?

AI tech talent is a scarce resource. In this dynamic environment, looking for technical co-founding pairs is mandatory.

Now that we understand the lay of the land and key drivers, we can hypothesise on what’s to come.

Six Future Predictions for Gen AI


1. The most defensible Gen AI startups will be AI-native apps that compete in data rich verticals.

We expect a rapid influx of AI-native startups looking to go for a ride on the defensibility flywheel (as above). We believe industries which are both data rich and highly fragmented will be the best to build in. In particular: HealthTech, logistics, FinTech and energy. Accenture’s report has interesting insights on this topic.

Incumbent businesses, although they may hold domain data, will find it hard to adapt and to deal with the tech debt of their existing infrastructure; this creates opportunities for nimble AI-native players to enter. Other use cases remain more competitive, as hyperscalers and existing SW players look to vertically integrate.

2. In verticals, value will accrue in replacing highly creative tasks and workflow automation.

New innovations and functions that weren’t previously solvable with “traditional AI” are a hot commodity. Startups tackling higher-level creativity and hyper-personalisation use cases will benefit greatly. Expect many co-pilot products in the short term, but as the tech advances we will begin to see a proliferation of SynthAI too (information convergence). We are particularly excited about those tackling personalised marketing and sales, game and design development, and customer support. Stori is a great example of this.

3. Rise in B2C and dual B2B and B2C business models in the short term.

Gen AI has resulted in a step change in consumer adoption of AI in day-to-day life, unlocking a business model that many VC firms had previously disregarded. B2C startups in this field must focus on ways to reduce customer churn and unlock virality.

Dual models are pursued for two reasons: I) to accelerate adoption by building demand-side customer pull, which should eventually lead to B2B sales, and II) to accelerate the Gen AI network-effects flywheel by increasing user engagement.

With this new wave of startups, our hypothesis is that many will test B2C, iterate quickly, and then pick the stickiest use case to scale more efficiently as a B2B. The rise of AutoGPT will supercharge this testing cycle and disrupt traditional SaaS.

4. Data commoditization and models as a service (MaaS) will rise.

This space is still relatively underpenetrated, and we will see a fierce battle between AI-native startups, hyperscalers and tech consulting firms for large enterprise white-labelling contracts. Over time, we will see MaaS commoditized (cf. Andrew Ng’s concept of “10,000 custom AI models”) and data commoditized too (e.g., Reddit recently announced fees for its APIs). This opens up a lucrative market for bespoke solutions designed for specific industries and use cases. We are excited about this area, and our hypothesis is that this market is large and will not be winner-takes-all.

5. Expect consolidation in the infra layer and vertical integration of players.

The LLM market will consolidate as it becomes crowded, and only a handful of winners will emerge. Players will move up the stack into tooling and the functional or general-productivity app layers to build distribution defensibility. It will be hard for incumbents to compete in the lower levels of the app layer.

6. Startups which fail to adopt “responsible AI by design” from day one will face big regulatory risk.

AI ethics issues associated with copyright, liability and privacy will remain front of mind. Countries are adopting regulation in different ways and at different paces, e.g., the EU has been drafting its AI Act for more than three years, Australia is still establishing its ethics framework, and China recently announced stringent regulations (see post on China).

Startups should design their services in line with prevailing AI ethics frameworks in their home jurisdictions. Ray notes: “regulations tend to build upon ethics foundations, so if your business is following them from the start, it sets you up for future regulatory movements”.

So… will Generative AI eat the world/software?

Only time will tell… but the key drivers mentioned in this article will definitely shape and disrupt many industries and functions. I’m optimistic and bullish, but there is still a lot of work that needs to be done for responsible AI and ethics.

Looks like ChatGPT still takes this question a bit too literally…👇

Thank you for reading 🙏

If you are building in the space or want to nerd out about Gen AI, I would love to chat. Please don’t hesitate to reach out or say hi on LinkedIn, Twitter or Email (annie.liao@aura.co).

More content to come in future weeks as we deep-dive further into vertical specific use cases of Gen AI.

Sources