
Farang, a Swedish AI research lab creating next-generation foundational LLMs, has raised €1.5 million in Seed funding.
SUMMARY
- Farang, a Swedish AI research lab creating next-generation foundational LLMs, has raised €1.5 million in Seed funding.
- The capital will fuel the scaling of its proof-of-concept models and expand compute infrastructure for training and fine-tuning models in specialised domains.
- The round was led by Voima Ventures and the Amadeus APEX Technology Fund, with prominent angel investors including Tero Ojanperä (Co-founder, Silo AI), Nilay Oza, and Niraj Aswani (Co-founders, Klevu).
Emil Romanus, Farang’s Founder, comments: “We’re not building another application layer on top of existing models. We’ve developed a completely new foundational architecture that enables us to create specialised AI assistants that outperform current solutions in specific domains like programming and medicine, while using twenty-five times less computational resources. Based on current testing, we believe the percentage of resources used will decrease even further in the future.”
Founded in 2025, Farang has developed a new architecture for Large Language Models (LLMs) that challenges the dominant Transformer-based designs used by ChatGPT, Claude, and Gemini.
Led by Emil Romanus—a seasoned engineer with over two decades of experience in AI, search, and startups—Farang’s small team of five researchers, developers, and business professionals is rethinking how language models generate responses.
Rather than predicting text word by word, Farang’s model first forms a complete internal representation of the answer, much as a painter visualises a painting before putting brush to canvas, and then translates that concept into language. Unlike current models that simulate reasoning through added steps but still rely on sequential prediction, Farang’s non-textual internal processing enables more coherent output while using significantly less computational power.
“The beauty of our architecture is that it enables us to create highly specialised models for niche use cases like untapped medical domain models that would be prohibitively expensive with traditional approaches,” Romanus explains. “We can analyse unstructured medical data or create programming assistants that truly understand specific frameworks, not just general coding patterns.”
The funding will be used to scale Farang’s proof-of-concept models and invest in the computing power required to train and fine-tune its architecture for specialised applications, including programming and medical domains.
“Our vision is that companies will have these specialised assistants running on their own infrastructure, integrated with their existing systems,” Romanus added. “A law firm could have an AI that understands its specific practice areas and case history, or a research hospital could have an assistant trained on their unique patient data – all while keeping that sensitive information completely private.”
Farang is initially targeting areas where current AI assistants fall short, such as support for specific programming languages and frameworks, specialised medical domains, and internal enterprise tools. Its first focus is React programming, aiming to generate more efficient and adaptable code than existing LLMs.
The company’s architecture also supports on-premise deployment, giving organisations—particularly in healthcare, legal, and finance—full control over sensitive data. This enables businesses to train and run Farang’s models entirely in-house, ensuring privacy and data sovereignty.
“We’re taking a different path than the big tech companies,” said Romanus. “By proving our architecture works in specialised domains first, we’re building the foundation to eventually challenge the current leaders across all AI applications.”
Farang is beginning with niche use cases to validate its technology but has ambitious long-term goals. The company ultimately aims to become a global leader in artificial intelligence, with plans to compete with—and eventually surpass—OpenAI in the broader general AI landscape.
Inka Mero, Managing Partner & Founder of Voima Ventures, added: “We look for exceptional Founders and technologies that reset the curve – not just optimise around the edges. Farang showcases how Europe can step up in the global AI race, as its foundational architecture provides a true paradigm shift – specialised, efficient, and enterprise-grade from day one.”
The company is focusing on individual developers and AI enthusiasts as its first users and has opened a waitlist for early access to its technology.
Ion Hauer, Principal at APEX Ventures, said: “We’re constantly evaluating breakthrough technologies that could redefine entire sectors, and Farang caught our attention immediately. For years, the industry has been making incremental improvements to the same Transformer foundation. What Emil and his team have developed represents a fundamental architectural leap – the kind of foundational change we believe will separate the next generation of AI leaders from today’s incumbents.”
About Farang
Farang is an AI research lab dedicated to developing novel, next-generation large language model (LLM) architectures. Its mission is to push beyond current transformer-based models by creating more efficient, specialised AI systems that deliver high performance with significantly lower computational demands, targeting advanced use cases in programming, healthcare and beyond.