Investing in MatX, the future compute platform for AGI


We're thrilled to announce our seed investment in MatX, the future compute platform for AI. MatX is building chips that train LLMs 10x faster than what’s currently offered by NVIDIA and other players, and was just profiled by Ashlee Vance in Bloomberg.

MatX is led by founders Reiner Pope (CEO) and Mike Gunter (CTO), who have prior experience building chips as well as the software that runs on them. Both are ex-Googlers with a combined 35 years of experience in chip design, ML, and LLMs. Reiner helped build Google's PaLM and the world's highest-performing LLM inference software. Mike helped build Google's TPUs and has designed or architected 11 different chips across six industries. The team has deep experience in ASIC design, compilers, and high-performance software.

The $25M seed round was led by Nat Friedman and Daniel Gross, with participation from funds including Homebrew and SV Angel, as well as leading LLM and AI researchers from companies including OpenAI, Anthropic, and Google DeepMind.

The MatX team

Why compute — and MatX — matters:

Compute is currently the bottleneck to large-scale LLM training and inference. The market standard—NVIDIA’s H100 chip—is expensive and can carry lead times of six months or more. These GPUs are also general-purpose, targeting all ML workloads rather than LLMs specifically.

MatX is working to fill this crucial gap with a chip designed specifically for LLMs. By optimizing for the large dense matrix multiplications that make up the overwhelming majority of the computation in the transformer architecture behind products like ChatGPT, MatX is building a better, faster chip at the same cost as NVIDIA’s GPUs. While cloud providers like Google (GCP), Amazon (AWS), and Microsoft (Azure) are also building their own chips, MatX's chips aren't tied to a single platform, so they can potentially reach greater economies of scale. With LLM companies set to spend billions of dollars on training and inference over the next decade, MatX is positioned to challenge NVIDIA’s market dominance and help build the future of AI.
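To give a sense of why that specialization pays off, here is a rough back-of-the-envelope sketch of where the compute in a transformer layer goes. The layer dimensions are illustrative, loosely GPT-3-scale assumptions of ours, not MatX's published figures:

```python
# Back-of-the-envelope FLOP estimate for one transformer layer (forward pass).
# The dimensions below are illustrative GPT-3-scale assumptions, not MatX's
# published figures.

d_model = 12288        # hidden size
d_ff = 4 * d_model     # feed-forward width
seq_len = 2048         # tokens in the sequence

# Dense matmuls: Q/K/V/output projections plus the two feed-forward layers.
# Each multiply-add counts as 2 FLOPs.
proj_flops = seq_len * 4 * 2 * d_model * d_model
mlp_flops = seq_len * 2 * 2 * d_model * d_ff

# Attention scores (Q·K^T) and attention-weighted values are also dense
# matmuls, scaling with seq_len^2.
attn_flops = 2 * 2 * seq_len * seq_len * d_model

# Everything else (softmax, layer norm, residual adds) is elementwise and
# roughly linear in the activation sizes -- a generous upper bound here.
other_flops = seq_len * (20 * d_model + 10 * seq_len)

matmul = proj_flops + mlp_flops + attn_flops
total = matmul + other_flops
print(f"dense matmul share of layer FLOPs: {matmul / total:.4%}")
```

With dimensions in that range, well over 99% of the floating-point work is dense matrix multiplication, which is exactly the operation a purpose-built LLM chip can accelerate.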

MatX is currently hiring for software, compiler, machine learning, silicon, and hardware systems engineering roles, all in person in Mountain View. To learn more, read the Bloomberg profile, visit the website, or view the team's open roles.