How GenAI Will Transform Core Infrastructure in the Coming Year, From Custom Chip Development to Autonomous Driving
By John Hayes, CEO and founder, Ghost
2023 was dominated by breakthrough after breakthrough in AI, perhaps none bigger
than OpenAI's release of multi-modal LLMs (MLLMs). These new models are capable
of understanding and drawing conclusions by combining video, images and sounds,
opening up millions of new applications across nearly every industry. In 2024,
on the back of MLLMs, I expect AI will dramatically expand its reach into all
areas of technology, fundamentally changing how we write software and build
hardware.
1. APIs will no longer speak in code, but instead in words and pictures
What we currently think of as APIs will soon become obsolete as
LLMs usher in a new era of unstructured machine-to-machine communication.
Traditional APIs and their reliance on structured data will cede dominance to
natural language interfaces, enabling dynamic communication through text
snippets and images. Businesses and developers will adopt LLM-powered
interfaces that can interpret natural language and media inputs for more
efficient, human-like interactions.
Already visible in applications like QR code scanning - where
visual data is translated into meaningful actions - this approach keeps data
transmission streamlined and secure. Next-generation APIs will leverage LLMs
to understand complex queries, enabling truly seamless integration between
applications and better user experiences.
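To make the idea concrete, here is a minimal sketch of what an LLM-mediated interface might look like. The llm_complete() helper, the order-service schema, and the JSON action format are hypothetical stand-ins rather than any vendor's actual SDK; a real deployment would call a hosted multimodal model.

```python
import json

def llm_complete(prompt: str, image_bytes: bytes | None = None) -> str:
    """Hypothetical stand-in for a hosted multimodal LLM endpoint.
    Returns a canned response so the sketch runs end to end."""
    return json.dumps(
        {"action": "create_refund", "order_id": "A-1042", "amount_usd": 25.0}
    )

def handle_request(request_text: str, screenshot: bytes | None = None) -> dict:
    # The "contract" is a prompt rather than a rigid, versioned schema:
    # the model reads free-form text (and optionally an image) and emits
    # a structured action the receiving service can execute.
    prompt = (
        "You are the interface to an order service. Read the request below "
        "and reply with JSON of the form {action, order_id, amount_usd}.\n\n"
        f"Request: {request_text}"
    )
    return json.loads(llm_complete(prompt, screenshot))

if __name__ == "__main__":
    print(handle_request("Order A-1042 arrived damaged; please refund the $25."))
```

The key design shift is that neither side ships a client library for the other: the sender writes what it wants in plain language, and the model bridges the gap to a structured action.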
2. AI supermodels will rapidly replace purpose-built models for millions of use cases
LLMs will turn the process of developing AI models inside out.
Instead of relying on massive real-world data sets and human intuition,
developers will be able to start with knowledge-rich models, fine-tuning them
with a handful of samples for precise outputs. As these models become more
intelligent, extensive fine-tuning becomes less necessary, making AI
development more accessible and commoditized - fewer and fewer products will
need to be customized to any particular model.
While specialized models might still find niches in scenarios
demanding specific performance or latency requirements, the trend is clear:
eventually the current explosion of special-purpose models will consolidate
into a "supermodel" with general intelligence that will be able to directly
solve a wide array of highly specific problems. The rise of these
supermodels will usher in an age where AI solutions are not only intelligent
and high-performing but also economically viable and easier to build.
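As a rough illustration of the "handful of samples" pattern, the sketch below steers a general model with a few labeled examples embedded in the prompt, instead of training a purpose-built classifier on a large dataset. The ticket categories and the classify() stub are hypothetical.

```python
# Few-shot adaptation: a handful of labeled examples in the prompt stand in
# for the large training sets a purpose-built model would have required.
FEW_SHOT_EXAMPLES = [
    ("App crashes on login", "bug"),
    ("Please add dark mode", "feature_request"),
    ("How do I reset my password?", "question"),
]

def build_prompt(ticket: str) -> str:
    lines = ["Label each support ticket as bug, feature_request, or question."]
    lines += [f"Ticket: {text!r} -> {label}" for text, label in FEW_SHOT_EXAMPLES]
    lines.append(f"Ticket: {ticket!r} ->")
    return "\n".join(lines)

def classify(ticket: str) -> str:
    """Hypothetical stand-in: a real version would send build_prompt(ticket)
    to a general-purpose model and return its completion."""
    return "bug"

if __name__ == "__main__":
    print(build_prompt("Checkout button does nothing"))
    print("Predicted label:", classify("Checkout button does nothing"))
```

Swapping in a smarter base model here requires changing the examples, not rebuilding a training pipeline, which is what makes the approach feel commoditized.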
3. Move over data, LLMs will be the great equalizer in the AV race to full autonomy
Thanks to generative AI, data is no longer the new oil when it
comes to autonomy. Until recently, the ultimate competitive advantage
established players (read: Tesla) had was access to enormous data sets, sourced
in the real world, for training their advanced AI and autonomous driving
models. However, generative AI has erased this advantage by enabling anyone to
create vast synthetic data sets at low cost, eliminating the requirement to
collect so much real-world data to power models.
LLMs - with their fusion of general intelligence and specialized
problem-solving capabilities - will serve as the great equalizer in the AV
space. They will empower emerging companies to compete at a comparable level to
the incumbents, fostering innovation and competition in the race towards fully
autonomous vehicles. Generative AI has changed a critical part of the equation
and is dismantling one of Tesla's biggest moats in the process.
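As one hedged sketch of why synthetic data lowers the bar: sampling scenario parameters programmatically can enumerate rare situations (fog, cut-ins, debris in lane) that would take millions of real-world miles to encounter. The parameter lists and pipeline shape below are purely illustrative; a production system would feed these descriptions to a generative model or simulator to render training data.

```python
import random

# Illustrative scenario dimensions; a real pipeline would be far richer.
WEATHER = ["clear", "rain", "fog", "snow"]
TIME_OF_DAY = ["dawn", "noon", "dusk", "night"]
EVENT = ["vehicle cut-in", "hard braking ahead",
         "pedestrian crossing", "debris in lane"]

def sample_scenarios(n: int, seed: int = 0):
    """Yield randomized driving scenarios, covering rare combinations
    cheaply rather than waiting to encounter them on real roads."""
    rng = random.Random(seed)
    for _ in range(n):
        yield {
            "weather": rng.choice(WEATHER),
            "time_of_day": rng.choice(TIME_OF_DAY),
            "event": rng.choice(EVENT),
            "ego_speed_mph": rng.randint(25, 75),
        }

if __name__ == "__main__":
    for s in sample_scenarios(3):
        # Each description could seed a generative model or a simulator run.
        print(f"{s['weather']}, {s['time_of_day']}: {s['event']} "
              f"at {s['ego_speed_mph']} mph")
```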
4. The fervent pace of AI innovation will crush custom chip development
Developing custom chips in today's fast-paced environment is a
misguided effort and locks companies into yesterday's innovations. Businesses
that invest time and resources into making their own chips are unable to
quickly adapt to unforeseen and sudden bursts of innovation. The most recent
example is the advent of LLMs: major innovations like these are invariably
built on an open-standard industry leader, and that won't change. Going
forward, we'll see industry-wide standardization on commodity chips as
companies fight to maintain
competitive advantage amid the breakneck pace of innovation we're witnessing in
the market.
5. 2024 will see the rise of new data centers, optimized for large models
In 2024, we'll increasingly see large models replacing the many
smaller, purpose-built models that companies have previously relied
on. As large models come to dominate across use
cases in the coming years, new types of data centers optimized with compute at
scale to run large models (i.e., kitted out with supercomputers) will replace
traditional data centers outfitted with stacks of x86 machines.
ABOUT THE AUTHOR
John Hayes is CEO and founder of autonomous vehicle software
innovator Ghost Autonomy. Prior to Ghost, John founded Pure Storage, taking the
company public (PSTG, $11 billion market cap) in 2015. As Pure's chief
architect, he harnessed the consumer industry's transition to flash storage
(including the iPhone and MacBook Air) to reimagine the enterprise data center,
inventing blazing-fast flash storage solutions now run by the world's largest
cloud and ecommerce providers, financial and healthcare institutions, science
and research organizations, and governments. Like Pure, Ghost uses software to
achieve near-perfect reliability and redefines simplicity and efficiency with
commodity consumer hardware. Ghost is headquartered in Mountain View with
additional offices in Detroit, Dallas and Sydney. Investors including Mike
Speiser at Sutter Hill Ventures, Keith Rabois at Founders Fund and Vinod Khosla
at Khosla Ventures have invested $200 million in the company.