ServiceNow and
NVIDIA announced an expansion of their partnership to fuel a new class
of intelligent AI agents across the enterprise. This includes the debut
of a new high-performance ServiceNow reasoning model, Apriel Nemotron
15B, developed in partnership with NVIDIA, that evaluates relationships,
applies rules, and weighs goals to reach conclusions or make decisions.
The open-source LLM is post-trained on data provided by NVIDIA and ServiceNow,
helping deliver lower latency, lower inference costs, and faster
agentic AI. The companies also unveiled plans to bring accelerated data
processing to ServiceNow
Workflow Data Fabric with the integration of select
NVIDIA NeMo microservices, driving a closed-loop data flywheel that improves model accuracy and personalizes user experiences.
The
Apriel Nemotron 15B reasoning model represents a significant step
forward in developing compact, enterprise-grade LLMs purpose-built for
real-time workflow execution. The model was trained using NVIDIA NeMo, the NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow domain-specific data with NVIDIA DGX Cloud on
Amazon Web Services (AWS). It delivers advanced reasoning capabilities
in a smaller size, making it faster, more efficient, and more cost-effective
to run on NVIDIA GPU infrastructure as an NVIDIA NIM microservice.
Benchmarks show promising results for the model's size category,
reinforcing its potential to power agentic AI workflows at scale. The
debut of this model comes as enterprise AI continues to rise as a
transformative force, helping businesses address growing complexity,
navigate macroeconomic uncertainty, and drive smarter, more resilient
operations.
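To make the deployment model concrete, the following is a minimal sketch of how an application might query a reasoning model served as an NVIDIA NIM microservice, which exposes an OpenAI-compatible API. The endpoint URL and model identifier shown here are illustrative assumptions, not values published in this announcement.

    # Minimal sketch: querying a NIM-served reasoning model through its
    # OpenAI-compatible endpoint. URL and model name are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # typical local NIM endpoint (assumption)
        api_key="not-used-for-local-nim",     # local deployments generally ignore the key
    )

    response = client.chat.completions.create(
        model="servicenow/apriel-nemotron-15b",  # hypothetical model identifier
        messages=[
            {"role": "system", "content": "You are an enterprise workflow assistant."},
            {"role": "user", "content": "A hardware request is stuck in approval. "
                                        "What should the fulfillment agent check first?"},
        ],
        temperature=0.2,
        max_tokens=512,
    )

    print(response.choices[0].message.content)

Because the interface follows the familiar chat-completions pattern, the same client code can target a compact reasoning model running on local GPU infrastructure or a hosted endpoint without changes to application logic.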
To support ongoing model innovation and AI agent performance, ServiceNow and NVIDIA also unveiled a new collaboration on a joint data flywheel architecture
that will integrate ServiceNow Workflow Data Fabric and select NVIDIA
NeMo microservices. This integrated approach curates and contextualizes
enterprise workflow data to refine and optimize reasoning models, with
guardrails in place to help ensure that customers are in control of how
their data is used and processed in a secure and compliant manner. This
enables a closed-loop learning process that improves model accuracy and
adaptability, accelerating the development and deployment of highly
personalized, context-aware AI agents designed to enhance enterprise
productivity.
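As an illustration only, the closed-loop process described above can be sketched as a curate, customize, evaluate, and promote cycle. Every function below is a hypothetical placeholder written for this article; it does not represent actual ServiceNow or NVIDIA NeMo microservice APIs.

    # Conceptual sketch of one data flywheel iteration; all functions are
    # hypothetical placeholders, not real ServiceNow or NVIDIA APIs.

    def curate(records: list[dict]) -> list[dict]:
        # Guardrail step (simplified): keep only records the customer has
        # approved for model improvement and that reached a resolution.
        return [r for r in records if r.get("consented") and r.get("resolved")]

    def customize(base_model: str, examples: list[dict]) -> str:
        # Stand-in for a post-training job; returns a new candidate model id.
        return f"{base_model}+refined-on-{len(examples)}-cases"

    def evaluate(model_id: str, holdout: list[dict]) -> float:
        # Stand-in for an evaluation job; returns a score in [0, 1].
        # A real pipeline would replay holdout workflows against the model.
        return 0.0 if not holdout else sum(r.get("score", 0.0) for r in holdout) / len(holdout)

    def flywheel_iteration(current_model: str, records: list[dict],
                           holdout: list[dict], quality_bar: float) -> str:
        candidate = customize(current_model, curate(records))
        # Promote the candidate only if it clears the quality bar, so the
        # loop improves accuracy without regressing the deployed model.
        return candidate if evaluate(candidate, holdout) >= quality_bar else current_model

The key property of the loop is that curated, consented workflow data feeds each refinement, and only candidates that pass evaluation are promoted, which is what makes the cycle "closed-loop" rather than a one-time fine-tune.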
"With
this new Apriel Nemotron 15B reasoning model, we're powering
intelligent AI agents that can make context-aware decisions, adapt to
complex workflows, and deliver personalized outcomes at scale," said Jon
Sigler, EVP of Platform and AI at ServiceNow. "But the model is just
one part of the innovation. Our collaboration building a data
flywheel, powered by Workflow Data Fabric and NVIDIA NeMo, enables a
virtuous cycle of learning and improvement. This helps us build AI
agents that are contextually aware, deeply personalized, and aligned to
the real-time needs of the enterprise."
"NVIDIA
and ServiceNow share a mission to reimagine employee productivity
through AI tools that help people get more done," said Kari Briski, Vice
President of Generative AI Software for Enterprise at NVIDIA.
"Together, we've built the Apriel Nemotron 15B model to serve as an
enterprise-grade reasoning engine and plan to integrate NVIDIA NeMo
microservices into ServiceNow Workflow Data Fabric, providing a powerful
foundation for intelligent digital agents."
The
new Apriel Nemotron 15B reasoning model and data flywheel integration
will better equip AI agents, through continuous data and process feedback,
to meet customers' growing demands. For example, imagine an AI
agent resolving a complex billing issue by pulling in past customer
interactions, reasoning through the problem, and recommending the next
best step, getting faster, more accurate, and more efficient with every
case it handles. Together, ServiceNow and NVIDIA are turning enterprise
data into real-time, personalized action.
This latest milestone builds on the recently announced AI agent evaluation tools and integration of NVIDIA Llama Nemotron models
with the ServiceNow AI Platform to accelerate agentic AI development.
ServiceNow and NVIDIA have a shared vision of designing innovations that
ensure that LLMs, and the experiences they power, are not only
intelligent but also measurable, secure, and ready for real-world
deployment. The co-development of the Apriel Nemotron 15B reasoning
model and data flywheel integration marks a natural next step in the
companies' deep partnership, furthering their collaboration to power
enterprise workflows with greater speed, precision, and cost-efficiency.
Availability
The Apriel Nemotron 15B model is expected to be available in Q2 2025.