Rivian Isn’t Relying on Nvidia for Self-Driving Tech, and Nvidia Investors Should Take Note
Rivian’s push to build its own autonomous technology signals a growing desire among customers to explore options beyond Nvidia.
During Rivian’s Autonomy & AI Day, the company unveiled plans for an updated hardware platform on its R2 models, due to launch at the end of 2026, centered on an in-house chip designed to run Rivian’s self-driving software.
Rivian isn’t a big Nvidia customer, so this isn’t about lost revenue today. Rather, it highlights a broader trend: more companies want to reduce their dependence on Nvidia’s expensive AI chips. That matters because Nvidia’s current stock price already reflects years of assumed dominance in the AI processor market.
Rivian is building its own silicon
At its first Autonomy & AI Day, Rivian introduced an updated hardware platform that includes an in-house inference chip. Inference is the phase where a trained AI model makes real-time predictions in the vehicle.
Rivian founder and CEO RJ Scaringe expressed enthusiasm about the company’s autonomy and AI efforts. In Rivian’s press release about the custom silicon, he stated that the updated hardware platform, featuring an in-house 1600 sparse TOPS inference chip, will enable significant progress toward true self-driving and, ultimately, Level 4 autonomy.
In plain terms: Scaringe believes Rivian’s new inference chip will help the company move closer to fully autonomous driving without human intervention.
For Nvidia investors, the takeaway isn’t about Rivian’s timing or feature list. It’s about the possibility that Rivian can satisfy its needs with a self-built chip instead of relying on Nvidia’s computing solutions.
Nvidia will remain strong
Nvidia still operates in a different league. Its fiscal third-quarter revenue stood at $57.0 billion. The data-center segment dominates this figure, bringing in $51.2 billion and rising 66% year over year, underscoring how central AI infrastructure has become. That compares to Rivian’s third-quarter revenue of roughly $1.6 billion.
Even so, Rivian’s move should give Nvidia investors some pause when viewed alongside the broader push by other tech companies toward in-house AI silicon.
More options are emerging
The broader issue is that Rivian isn’t alone in seeking to curb AI costs.
Alphabet has developed its own tensor processing units (TPUs) and offers them via Google Cloud as an Nvidia alternative. TPUs are custom accelerators designed specifically for machine learning, used for both training and inference.
Amazon is following a similar path within AWS. It has developed Trainium chips for training AI models and recently announced that its new Trn3 UltraServers can scale up to 144 Trainium3 chips for large workloads.
While none of these chips is a perfect substitute for Nvidia’s GPUs, Rivian’s move is the latest sign of a growing trend toward reducing dependence on the company.
Impact on Nvidia and the market
Yes, Nvidia can continue to grow even as competitors expand their own AI silicon options. But growth could slow and margins may tighten if customers gain credible substitutes, even if the competition’s impact is modest.
Nvidia remains the market leader, and demand for its latest platforms is exceptional. Management highlighted strong demand for Blackwell GPUs and noted that cloud GPUs are selling out.
However, Nvidia trades at a high valuation, with a price-to-earnings ratio around 44. Investors have baked in sustained rapid growth and little margin pressure for years to come. If customer behavior shifts even slightly toward in-house or alternative chips, it could unsettle sentiment.
Bottom line
Rivian’s decision to develop its own autonomous-driving chip won’t determine Nvidia’s fate, but it embodies a broader build-versus-buy dynamic already shaping the strategies of major cloud and tech players. Investors should watch this trend closely, as it introduces real risk to Nvidia’s pricing power and growth assumptions if more customers opt for self-built or alternative AI silicon.
What do you think about automakers and big tech developing in-house AI accelerators? Is this a trend that could meaningfully reshape the AI hardware landscape, or will Nvidia’s leadership prove too entrenched to challenge?