
This Is Not Your Mother's Alpha
June 26, 2025
How much time do you spend aimlessly copy-pasting to and from Claude, ChatGPT, and friends?
Large language models (LLMs) have been limited in how they can connect to the real world and, as a result, can't directly access the information users most want them to synthesize. The Model Context Protocol (MCP) removes this barrier by exposing tools, resources, and prompts to LLMs. For example, MCP lets a chatbot read your email without your ever leaving the chat window. But the applications go further than that: from connecting smart homes and cars to chatbots that reserve tables and pay for Amazon orders.
At Level, we've rolled out our own MCP server geared toward venture. To date, we've enabled a variety of tools for LLMs to use in their reasoning. For example, one tool, organization_knowledge, pulls internal knowledge for a given company: founder information, fundraising history, extracted "nuggets" from unstructured documents, and more.
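The MCP specification defines how a server advertises a tool like this: each tool is published with a name, a description, and a JSON Schema for its inputs, which the client returns from a tools/list request. As an illustration only (the tool name comes from this post, but the parameter names and descriptions are our assumptions, not Level's actual schema), the descriptor might look like:

```python
import json

# Hypothetical descriptor for the organization_knowledge tool, in the shape
# MCP servers use for tools/list results: name, description, and a JSON
# Schema describing the tool's inputs. Parameter names are illustrative.
organization_knowledge = {
    "name": "organization_knowledge",
    "description": (
        "Pull internal knowledge for a given company: founder information, "
        "fundraising history, and nuggets extracted from unstructured documents."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "company": {"type": "string", "description": "Company to look up"},
        },
        "required": ["company"],
    },
}

# A tools/list result bundles every tool the server advertises:
tools_list_result = {"tools": [organization_knowledge]}
print(json.dumps(tools_list_result, indent=2))
```

The LLM reads these descriptors during a session and decides on its own when a tool call would help answer the user's question.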
We currently provide 11 tools on our investor MCP server, covering everything from company information to semantic search, transcript analysis (for earnings calls, podcasts, etc.), and market intelligence. Our internal framework makes rolling out new tools simple: thanks to our existing infrastructure, shipping a new one takes only about 15 minutes.
We've also been experimenting with the concepts of resources and prompts. Resources are pieces of data or information the server exposes for the LLM to read as its reasoning requires; prompts are templated instructions the LLM can use to refine its reasoning process.
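Concretely, the MCP spec gives both concepts a simple shape: a resource is addressed by a URI and carries a MIME type, while a prompt is a named template with declared arguments. A minimal sketch, with names and URIs that are our own examples rather than anything from Level's catalog:

```python
# Illustrative MCP resource descriptor: a URI-addressed piece of data the
# client can read on the model's behalf. URI and name are hypothetical.
resource = {
    "uri": "file:///deals/acme/term-sheet.md",
    "name": "Acme term sheet",
    "mimeType": "text/markdown",
}

# Illustrative MCP prompt descriptor: a named template plus the arguments
# a caller must supply. Name and argument are hypothetical.
prompt = {
    "name": "summarize_company",
    "description": "Summarize what we know about a company.",
    "arguments": [
        {"name": "company", "description": "Company name", "required": True},
    ],
}

def render_prompt(template: str, args: dict) -> str:
    """Fill a prompt template with caller-supplied argument values."""
    return template.format(**args)

text = render_prompt("Summarize everything we know about {company}.",
                     {"company": "Acme"})
print(text)  # Summarize everything we know about Acme.
```

The division of labor matters: the server decides what data and templates exist; the model (or the user driving it) decides when to pull them in.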
Our MCP server is not just useful for our own workflows; it's also a value-add for partners. We've started rolling out the technology to our GPs and LPs, and they've been delighted with the results so far. We're looking forward to collaborating more closely with them on refining our MCP system.
As soon as the MCP specification was released, we knew we wanted to work with it. We already build tools for our LPs, GPs, and portfolio companies, so we saw the value in building an MCP server almost immediately. However, the challenge of working with any draft technology is that there's very little support. There are currently ~5,000 published MCP servers, but only about 1% of them are remote MCP servers, and remote MCP is what unlocks the full functionality and value of the protocol.
To build a proper remote MCP server, you have to support OAuth 2.1 dynamic client registration, a somewhat obscure draft specification from the Internet Engineering Task Force (i.e., the people who design the internet). We also needed a way to build MCP-compliant tools, test them, release them as part of our normal API, and stand up a development and testing workflow for all of it. To build our server, we started directly from the specification and, over the course of a few days, pulled together every piece of the protocol. It was both great fun, because we got to work with a brand-new technology, and extraordinarily frustrating, because we had to debug interaction points with external tools like ChatGPT. But we got there in the end, although with a bit less sleep than we would have hoped!
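Dynamic client registration (RFC 7591, which the MCP authorization flow builds on) is simpler than its obscurity suggests: a client the server has never seen POSTs its own metadata to a registration endpoint and receives a client_id in response, with no human pre-registering anything. A sketch of the two payloads, with all field values as placeholders:

```python
import json

# Sketch of the RFC 7591 dynamic client registration exchange. A brand-new
# MCP client sends this metadata to the server's registration endpoint:
registration_request = {
    "client_name": "Example MCP Client",               # placeholder
    "redirect_uris": ["http://localhost:3000/callback"],
    "grant_types": ["authorization_code"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",  # public client, secured with PKCE
}

# ...and the server mints credentials on the spot. This hypothetical
# response gives the client the identity it uses for the rest of the
# OAuth 2.1 authorization-code flow.
registration_response = {
    "client_id": "generated-client-id",                # placeholder
    "redirect_uris": registration_request["redirect_uris"],
}

print(json.dumps(registration_request))
```

The payoff is that any MCP client, whether ChatGPT, Claude, or something homegrown, can onboard itself to your server without a manual API-key exchange.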
Fortunately, we also had some good friends whose tools we leaned on to develop our MCP server.
Consider the hypertext transfer protocol (the http:// in nearly every URL): it's the foundation of the internet. MCP is to LLMs what HTTP has been to the web. The MCP standard lets LLMs interact with the world around them, and while most users never think about it, protocols like these underpin the entire networked world. In fact, HTTP, and now MCP, may be among the most important conceptual innovations of the past 50 years.
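The analogy holds at the wire level. HTTP frames a request as a method plus a path; MCP frames one as a JSON-RPC 2.0 message with a method name (tools/list here is a real method from the MCP spec; the rest of this side-by-side is illustrative):

```python
import json

# An HTTP request line and an MCP request, side by side. MCP messages are
# JSON-RPC 2.0: a method name, an id for matching the response to the
# request, and optional params.
http_request_line = "GET /index.html HTTP/1.1"

mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

print(http_request_line)
print(json.dumps(mcp_request))
```

Both are small, boring, text-based framings of "ask for something by name," and that boring uniformity is exactly what lets an ecosystem grow on top.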
So, clearly, we love protocols: we'll gladly read an entire document on what the word "must" means (and another on all-caps "MUST"). MCP is the future of LLMs, organizing how they operate and interact with the world. We're watching in real time as this new technology takes form, and we're excited to be here for it.