- Developers will be able to use FriendliAI's accelerated generative AI infrastructure service to deploy and serve models on the Hugging Face Hub
REDWOOD CITY, Calif., Jan. 22, 2025 /PRNewswire/ -- FriendliAI, a leader in accelerated generative AI inference serving, and Hugging Face today announced a strategic partnership that allows developers to use FriendliAI's inference infrastructure service to deploy and serve models directly on the Hugging Face Hub.
FriendliAI Endpoints, the fastest GPU-based generative AI
inference provider according to Artificial Analysis, is now
available as a deployment option on the Hugging Face platform.
Directly from any model page on Hugging Face, developers can now
easily deploy models using FriendliAI's accelerated, low-cost
inference endpoints. This partnership combines the convenience of Hugging Face's platform with FriendliAI's high-performance infrastructure, enabling developers to streamline their AI development workflows and focus on innovation.
Setting up and deploying generative AI models at production
scale presents challenges such as complex infrastructure management
and high operational costs. Friendli Dedicated Endpoints removes the burden of infrastructure management, enabling developers to deploy and serve generative AI models efficiently on autopilot.
Powered by FriendliAI's GPU-optimized inference engine, Friendli
Dedicated Endpoints delivers fast and cost-effective inference
serving as a managed service with dedicated GPU resources and
automatic resource management.
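As a sketch of what consuming such a managed endpoint can look like: Friendli endpoints expose an OpenAI-compatible chat-completions interface, so a deployed model can be queried with a standard HTTP POST. The base URL, endpoint ID, and environment-variable name below are illustrative assumptions, not values from this announcement; consult FriendliAI's documentation for the actual ones.

```python
import json
import os
import urllib.request

# Assumed base URL for illustration only; check FriendliAI's docs for the real one.
FRIENDLI_BASE_URL = "https://api.friendli.ai/dedicated/v1"


def build_chat_request(endpoint_id: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    `endpoint_id` is a placeholder for the ID of a dedicated deployment.
    """
    return {
        "model": endpoint_id,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


payload = build_chat_request("my-endpoint-id", "Summarize what inference serving is.")
print(json.dumps(payload, indent=2))

# Only attempt a real request if a token is available (hypothetical variable name).
token = os.environ.get("FRIENDLI_TOKEN")
if token:
    req = urllib.request.Request(
        f"{FRIENDLI_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

Because the payload follows the widely used chat-completions schema, the same request shape works with any OpenAI-compatible client library pointed at the endpoint's base URL.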
The addition of FriendliAI as a key inference provider advances
Hugging Face's mission to democratize AI, while furthering
FriendliAI's mission to empower everyone to harness the full
potential of generative AI models with ease and cost-efficiency.
With this partnership, FriendliAI becomes a strategic inference
provider for Hugging Face.
"FriendliAI and Hugging Face share a vision for making
generative AI, and further agentic AI, more accessible and
impactful for developers," said Byung-Gon Chun, CEO of
FriendliAI. "This partnership gives developers on Hugging Face easy
access to FriendliAI Endpoints, a fast, low-cost inference solution
without the burden of infrastructure management. We're excited to
see what the amazing developer community at Hugging Face will build
with our inference solution, and we look forward to any future
opportunities to partner with Hugging Face to provide developers
with even more powerful tools and resources."
"FriendliAI has been at the forefront of AI inference
acceleration progress," said Julien Chaumond, CTO of Hugging Face.
"With this new partnership, we will make it easy for Hugging Face
users and FriendliAI customers to leverage leading optimized AI
infrastructure and tools from FriendliAI to run the latest
open-source or their custom AI models at scale."
About FriendliAI
FriendliAI is the leading provider of accelerated generative AI
inference serving. FriendliAI provides fast, cost-efficient
inference serving and fine-tuning to accelerate agentic AI and
custom generative AI solutions. Enjoy the GPU-optimized, blazingly
fast Friendli Inference through FriendliAI's Dedicated Endpoints,
Serverless Endpoints, and Container solutions. Learn more at
https://friendli.ai/.
About Hugging Face
Hugging Face is the leading open platform for AI builders. The
Hugging Face Hub works as a central place where anyone can share,
explore, discover, and experiment with open-source ML. Hugging Face empowers the next
generation of machine learning engineers, scientists, and end users
to learn, collaborate, and share their work to build an open and
ethical AI future together. With its fast-growing community, some
of the most used open-source ML libraries and tools, and a talented
science team exploring the edge of tech, Hugging Face is at the
heart of the AI revolution.
Contacts:
Elizabeth Yoon, FriendliAI, press@friendli.ai
View original content: https://www.prnewswire.com/news-releases/friendliai-and-hugging-face-announce-strategic-partnership-302357253.html
SOURCE FriendliAI