
Fractile, a London-based AI inference hardware startup, has secured $220 million in Series B funding to advance the development of its next-generation AI chips and compute systems.
SUMMARY
- The financing round was led by Accel, Factorial Funds, and Founders Fund, with additional backing from Conviction, Gigascale Capital, O1A Ventures, Felicis, Buckley Ventures, and 8VC, alongside existing investors.
- The company plans to use the new capital to speed up production and deployment of its first silicon chips and AI compute systems for enterprise customers.
- Founded in 2022 by CEO Walter Goodwin, Fractile develops AI inference hardware designed to address memory bandwidth bottlenecks and significantly improve the performance of frontier AI models.
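The memory bandwidth bottleneck mentioned above can be made concrete with a rough roofline estimate: during autoregressive decoding, generating each token requires reading roughly every model weight from memory once, so single-stream decode speed is capped near memory bandwidth divided by model size. The sketch below uses illustrative figures (a 70B-parameter model in 16-bit weights, 3.35 TB/s of HBM bandwidth); these are generic assumptions, not Fractile specifications.

```python
def decode_tokens_per_second(model_bytes: float, memory_bw_bytes_per_s: float) -> float:
    """Rough upper bound on single-stream decode speed for a memory-bound
    model: each generated token reads (approximately) every weight once."""
    return memory_bw_bytes_per_s / model_bytes

# Illustrative numbers only: 70B parameters at 2 bytes each (FP16/BF16)
# on an accelerator with 3.35 TB/s of memory bandwidth.
model_bytes = 70e9 * 2       # 140 GB of weights
bandwidth = 3.35e12          # 3.35 TB/s

print(f"{decode_tokens_per_second(model_bytes, bandwidth):.0f} tokens/s")
```

Under these assumptions the ceiling is roughly 24 tokens per second per stream, which is why hardware that relieves the bandwidth bottleneck, rather than just adding compute, is the lever for faster inference.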
Fractile's technology aims to deliver speeds of up to 1,200 tokens per second, cutting complex multi-million-token reasoning workloads from months to days.
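The "months to days" claim is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, only the 1,200 tokens-per-second target comes from the article; the workload size (100 million tokens) and the baseline speed (30 tokens per second) are illustrative assumptions chosen for scale.

```python
SECONDS_PER_DAY = 86_400

def generation_days(total_tokens: int, tokens_per_second: float) -> float:
    """Wall-clock days to generate a workload at a given decode speed."""
    return total_tokens / tokens_per_second / SECONDS_PER_DAY

# Hypothetical large reasoning workload; the article says only
# "multi-million-token", so this figure is an assumption.
workload = 100_000_000

fast = generation_days(workload, 1_200)  # Fractile's stated target speed
slow = generation_days(workload, 30)     # hypothetical baseline speed

print(f"{slow:.0f} days -> {fast:.1f} days")
```

At these assumed figures the same job drops from over a month to about a day, consistent with the order-of-magnitude framing in the article.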
To support its full-stack semiconductor and foundry innovation efforts, Fractile has expanded globally with engineering operations across the UK, the United States, and Taiwan.
The startup is focused on tackling one of AI’s emerging challenges: the growing time and cost required to generate useful outputs at scale as AI systems handle increasingly complex and long-running tasks.
According to a post by Walter Goodwin, Fractile's founder and CEO, the company was founded on the bet that the world's most capable AI systems would eventually be limited in their impact by the time they take to produce useful outputs.
“We bet everything on the logical conclusion: that the only way to truly unlock this latent value, to make speed viable at scale, was to radically re-invent the hardware that we run our frontier AI models on. Ever since, we have been building chips and systems that tackle this problem.”