The AI Cloud Spotlight

Emerging startups empowering the fine-tuned LLM revolution

Return the Fund 🚀 

The frontier of tech-focused VC research

In today’s edition:

  • Three emerging startup prospects empowering the fine-tuned LLM revolution

  • A narrative of escaping OpenAI’s grasp, told through these three companies

FINE-TUNING

OpenPipe

OpenPipe makes fine-tuning LLMs a breeze. It’s built for developers but fully accessible to non-technical users thanks to a comprehensive and intuitive web interface. That’s right—anyone can fine-tune an LLM.

Beyond training, OpenPipe offers managed endpoints, allowing developers to run their new models on OpenPipe’s cloud and pay only for active usage. This feature is more of a convenience than a selling point, as you’ll see when we explore Baseten below.

Why this matters

While small open-source models (think Llama and Mistral) are not drop-in replacements for large foundational models (think GPT-4o and Claude 3), with careful domain-specific fine-tuning they can actually outperform those larger models in niche use cases.

Given that most products use LLMs to handle narrow, domain-specific requests, fine-tuning small models is an important alternative to consider for its speed, cost, and control benefits.

Fine-tuning models is both a science and an art. We like OpenPipe because they abstract low-level complexities away from the tuning process. Users can train models with nothing more than a dataset and a couple of clicks.

Of course, this isn’t a no-brainer—the magic of a powerful fine-tune still lies in the dataset. Nonetheless, such abstraction will empower a wave of companies to experiment with small models as an alternative to pegging their businesses to the OpenAI API.
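
For a sense of what that abstraction replaces, here is a minimal sketch of a LoRA fine-tune on a small open-source model using the Hugging Face transformers, peft, and datasets libraries. The model id, hyperparameters, and the my_dataset.jsonl file are placeholders, and this illustrates the underlying process in general, not OpenPipe’s actual pipeline.

```python
# Minimal sketch of a LoRA fine-tune with Hugging Face libraries.
# Illustrates the low-level work OpenPipe abstracts away; this is NOT
# OpenPipe's actual pipeline. The model id, hyperparameters, and the
# dataset file ("my_dataset.jsonl") are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mistral-7B-v0.1"  # any small open-source model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach lightweight LoRA adapters instead of updating all base weights.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"]),
)

# One prompt/completion pair per JSON line, e.g. {"prompt": ..., "completion": ...}
dataset = load_dataset("json", data_files="my_dataset.jsonl")["train"]
dataset = dataset.map(
    lambda row: tokenizer(row["prompt"] + row["completion"], truncation=True),
    remove_columns=dataset.column_names,
)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```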

Quick facts

  • Led by Kyle Corbitt, an ex-Googler and former YC director based in the Seattle area

  • YC-backed, with only $6.7 million in seed funding beyond YC’s $500k SAFE

  • 12 seed-round investors, including Y Combinator as a follow-on

INFERENCE

Baseten

Baseten is an AI inference cloud. Simply put, they let users run and autoscale models without a second thought. From LLMs to small custom neural nets, Baseten provides a seamless deployment and inference experience.

Their focus is on performance, security, and developer experience. Thanks to autoscaling, once a model is deployed, Baseten keeps inference fast and reliable whether their customer’s product has 5 users or 50,000.

Why this matters

Weaning off OpenAI’s and Anthropic’s APIs is hard. Startups overfunded in 2023 haven’t had to pay much attention to their unit economics and were happy to pay OpenAI exorbitantly for state-of-the-art performance and headache minimization. Open-source models, however, are evolving rapidly and now threaten large foundational models on benchmarks.

Now, we’re seeing services emerge to help former OpenAI customers take control over their models and their expenses. While OpenPipe simplifies the model fine-tuning process, Baseten hosts models easily and reliably at scale.

Their optimized hardware delivers high token throughput (how fast a model talks) and low time-to-first-token (how long you wait for the model to start talking). These two metrics are core to AI applications, and getting them right is critical to pulling customers away from OpenAI and other large foundational model providers.
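
To make those two metrics concrete, here is a rough sketch of measuring them against any OpenAI-compatible streaming endpoint using the openai Python client. The base_url, api_key, and model name are placeholders, not Baseten’s actual API.

```python
# Rough sketch: measuring time-to-first-token and token throughput against
# any OpenAI-compatible streaming endpoint. The base_url, api_key, and model
# name below are placeholders, not Baseten's actual API.
import time
from openai import OpenAI

client = OpenAI(base_url="https://your-inference-host/v1", api_key="...")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="my-fine-tuned-llama",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time-to-first-token
        chunks += 1  # roughly one token per streamed chunk

elapsed = time.perf_counter() - start
print(f"Time-to-first-token: {first_token_at - start:.2f}s")
print(f"Approx. throughput: {chunks / elapsed:.1f} tokens/s")
```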

Quick facts

  • Led by Tuhin Srivastava, a machine-learning expert and former founder

  • $60 million raised to date, including a $40 million Series B in March 2024

  • Post-money Series B valuation of $220 million

  • Estimated 40 active employees

  • Notable investors (of 25 total) include Dylan Field (Figma CEO), Greg Brockman (OpenAI Chairman and Co-Founder), Greylock, and AI Fund

OBSERVABILITY

Context

Context is an LLM analytics service. Simply put, users connect their existing systems to Context’s API to monitor and evaluate model performance at both a high and a low level.

At a high level, they can see general performance, segmentation, failure rates, and more. At a low level, they can inspect individual generation flows to draw conclusions about prompts, agent steps, and model choices.
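
To make that concrete, here is a hypothetical sketch of the kind of instrumentation such a service implies: wrapping each generation and shipping it, along with metadata, to an analytics endpoint. The endpoint URL, field names, and helper functions are our own illustration, not Context’s actual SDK.

```python
# Hypothetical observability instrumentation -- NOT Context's actual SDK.
# The idea: record every generation with enough metadata (model, latency,
# prompt version, user segment, agent step) to slice performance later.
import json
import time
import urllib.request
import uuid

ANALYTICS_ENDPOINT = "https://analytics.example.com/v1/generations"  # placeholder


def log_generation(model: str, prompt: str, completion: str,
                   latency_s: float, metadata: dict) -> None:
    """Ship one generation record to the (placeholder) analytics endpoint."""
    record = {
        "id": str(uuid.uuid4()),
        "model": model,
        "prompt": prompt,
        "completion": completion,
        "latency_s": latency_s,
        "metadata": metadata,
    }
    request = urllib.request.Request(
        ANALYTICS_ENDPOINT,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)


def generate_with_logging(client, model, messages, **metadata):
    """Wrap an OpenAI-compatible chat call and log the result."""
    start = time.perf_counter()
    response = client.chat.completions.create(model=model, messages=messages)
    completion = response.choices[0].message.content
    log_generation(model, messages[-1]["content"], completion,
                   time.perf_counter() - start, metadata)
    return completion
```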

Why this matters

LLMs are black boxes. It’s impossible to know exactly what’s going on under the hood, especially when models are run at high temperatures to maximize randomness and creativity.

After fine-tuning a model, one cannot simply release it into the wild and hope for the best. Thorough testing and evaluation across versions are imperative. Once a model has been vetted for release, developers must monitor its performance not only to ensure proper behavior but also to gather data for future fine-tuning.

Observability, analytics, and evaluation are often left out of the small open-source model discussion. Together, fine-tuning, inference, and observability make a small model trifecta that empowers users to break free from OpenAI.

Quick facts

WHO WE ARE

Return the Fund 🚀 

  • One startup a week poised for 10x growth; market deep dives with actionable insights for builders and investors.

  • Technical breakdowns of advanced new tech to empower informed decision-making

  • Find your next prospect, product, job, customer, or partner. 🤝 Written by undercover pioneers of the field; trusted by builders and investors from Silicon Valley to NYC. 💸

Last week, we discussed whether or not AI agents are investable yet.

As you know, we’re hell-bent on uncovering future unicorns cruising under the radar. Preeminent companies are lean, quiet, and driven before reaching their watershed moments. By the time people start talking about them, it’s too late.

In a nutshell—we pitch you startups like you’re an esteemed VC. If you’re interested in them as a partner, product, or prospect, we’ll make a warm intro. Humbly, our network knows no bounds!

We’ll also intuitively break down advanced tech so you can stay ahead of trends and critically analyze hype in the news and in your circles (regardless of your technical prowess).

Periodically, we’ll propose niche market opportunities. These are tangible ways to extract alpha from the private markets.

You won’t get editions from us very often. Weekly at best. Two reasons:

  1. We’re balancing full-time VC/startup work with Return the Fund.

  2. We prioritize depth, insight, and value. This is not a daily news publication… We hope that when you do get an email from us, it’s dense, valuable, actionable, and worth saving.

Thanks for reading today’s RTF. Let us know what you thought of this edition and feel free to reach out to us at [email protected]. 🤝 

Psst: None of our company picks are ever sponsored. All research and opinions are solely those of the Return the Fund team.
