NVIDIA’s CES announcements last week have me rethinking the entire computing stack
Last week, I did something I've never done before. I played a 90-minute tech keynote at 70% speed. Jensen Huang's CES 2025 keynote on January 6 was so packed with implications that I had to chew on it. Cud-like.
When I heard his talk, two things happened:
During the talk: “Hey, that’s new and important” (#1 below)
Nonstop since the talk: a dawning realization, best summed up as “Holy shitakes!” (#2 below)
#1: NVIDIA is changing the rules of what hardware vs. software does
The latest processor release Huang announced last Monday — the GeForce RTX 50 Series GPUs, powered by the Blackwell architecture and AI — does more than just blur the line between hardware and software… it erases it entirely.
Historically, software dictated the user experience, while hardware ran the software’s instructions. Say vs. do.
But with the new Blackwell chip’s built-in AI and increased compute capacity, the hardware (the GPU itself) now takes center stage in shaping what the user experiences.
The hardware itself is writing code. And lots of it.
A graphics example to illustrate
Unsurprisingly for NVIDIA (a company revolutionizing the world by leveraging graphics processing tech they invented), Huang used a graphics processing example to bring home the changes.
He described rendering fabric in an animation sequence for a game (image below).
Left side: Traditional rendering, requiring 33 million pixels (47MB of memory) for four frames of 4K video animation.
Right side: Using Blackwell’s AI-powered GPU, the same fabric was rendered by processing only 2 million pixels (16MB of memory), with the GPU hardware itself generating the remaining 31 million pixels in REAL TIME — just-in-time code authored by the hardware processor itself. No external software needed. (Rough math on those numbers below.)
Image from https://www.youtube.com/live/k82RwXqZHY8
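To put those numbers in perspective, here’s some quick back-of-envelope Python. The only assumption that’s mine rather than NVIDIA’s is that the traditionally computed portion is roughly one 1080p frame, which happens to line up with the ~2 million rendered pixels Huang cited.

```python
# Back-of-envelope math on the keynote's fabric-rendering numbers.
# Assumption (mine, not from the keynote): the traditionally computed
# portion is roughly one 1080p frame, which lines up with the ~2 million
# pixels Huang cited.

FRAMES = 4                          # four frames of 4K animation
PIXELS_PER_4K_FRAME = 3840 * 2160   # ~8.3 million pixels per frame
PIXELS_RENDERED = 1920 * 1080       # ~2.1 million traditionally rendered pixels

displayed = FRAMES * PIXELS_PER_4K_FRAME    # ~33.2 million pixels shown
generated = displayed - PIXELS_RENDERED     # ~31.1 million pixels the GPU infers

print(f"Displayed: {displayed / 1e6:.1f}M pixels")
print(f"Rendered:  {PIXELS_RENDERED / 1e6:.1f}M pixels")
print(f"Generated: {generated / 1e6:.1f}M pixels ({generated / displayed:.0%} of what you see)")
```

In other words, roughly 15 of every 16 pixels you see are inferred by the GPU rather than computed frame by frame.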
Why this latest Blackwell GPU release matters
A few reasons:
Processors can now dynamically "author" code: Blackwell generates additional code on the fly (such as the enhanced fabric above), tailored to real-world scenarios that are constantly changing, all without relying on external software. This was already happening in prior NVIDIA GPUs (the Ada generation), but Blackwell takes it further. (A toy sketch of the idea follows this list.)
Usefulness that goes beyond graphics: Blackwell extends GPU capabilities to general-purpose AI and multi-domain uses like healthcare and autonomous systems (see below).
Hardware+software fusion unlocks new use cases: This evolution makes chips more application-specific and efficient. By integrating code creation and execution on the same physical silicon (are they still using silicon?), Blackwell minimizes processing time and unlocks never-before-seen use cases where speed and computational heft especially matter.
Lower cost of innovation: AI is already reshaping how software code gets written (ask any software developer today), but with purpose-built AI running in real time on the chip, less code has to be written in the first place to create the intended outcome. That drives down storage costs. That’s less code to test. Less engineer time spent coding. (Hello, AI workforce.)
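For the non-graphics readers, here’s a toy Python sketch of that “render a little, infer a lot” pattern. The real pipeline runs a trained neural network on dedicated GPU hardware; in this sketch, plain nearest-neighbour upsampling stands in for the model, and the frame sizes and 4x scale factor are illustrative assumptions, not NVIDIA specs.

```python
# Toy illustration of "render a few pixels, infer the rest."
# Nearest-neighbour upsampling stands in for the learned model that
# Blackwell-class GPUs would run; everything here is illustrative.
import numpy as np

def render_low_res(height: int, width: int) -> np.ndarray:
    """Stand-in for the expensive, traditionally rendered small frame."""
    rng = np.random.default_rng(0)
    return rng.random((height, width, 3), dtype=np.float32)

def infer_full_res(low_res: np.ndarray, scale: int) -> np.ndarray:
    """Stand-in for the model that fills in the missing pixels."""
    # np.kron repeats each pixel into a scale x scale block (nearest neighbour).
    return np.kron(low_res, np.ones((scale, scale, 1), dtype=np.float32))

low = render_low_res(1080 // 4, 1920 // 4)   # only 1/16th of the pixels rendered
full = infer_full_res(low, scale=4)          # the rest are "generated"
print(low.shape, "->", full.shape)           # (270, 480, 3) -> (1080, 1920, 3)
```

Swap the crude upsampler for a learned model running on the chip’s tensor hardware and you have, conceptually, the “2 million rendered, 31 million generated” trade-off from the fabric demo.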
On the downside, centralized proprietary systems like this rarely work seamlessly without significant customization. There will be constraints.
Still, the possibilities are big.
I mean… who else might benefit from:
faster inference across ginormous datasets
much faster processing time (fractions of seconds, not minutes or hours)
using less power and money (less I/O and fewer lines of code to write)
EVERYONE.
Think about these scenarios:
Healthcare: Imagine real-time diagnostic imaging, analyzed collaboratively with AI while the patient is still present.
Autonomous Vehicles: In-car systems that react to novel road conditions with bespoke code written in the moment.
Marketing and Data Analytics: Near-instant insights and statistical/ML-driven predictions about customer behavior, without the processing lags of today.
Arts and Culture: This one is complex and potentially contentious so I’ll stay away from AI’s impacts on the creative process for now (SO. MUCH. THERE.) But Arts and Culture organizations could benefit hugely from the same advances in marketing and fundraising data intelligence I alluded to above.
#2: Holy shitakes.
AI is reshaping the entire computing stack (and not in the way I thought)
Much of the rest of Huang’s keynote was about NVIDIA’s rollout of foundational tools for AI-driven robotics, agentic workforces, and systems. NVIDIA is increasingly authoring the software and models the rest of the world will use as Lego bricks to build useful AI.
It was here that the broader implications of all this started to settle for me:
AI’s unique computing needs are driving a reinvention of the entire computing stack — from architecture and chips to libraries and algorithms.
Huang summed it up:
“We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time,” said Huang. “If you do that, then you can move faster than Moore’s Law, because you can innovate across the entire stack.”
(Source: TechCrunch, Jan 7, 2025)
This is more than just corporate moat-building on NVIDIA’s part. AI’s demand for speed and efficiency is reshaping what good looks like. It’s no longer about cramming more transistors onto a chip (Moore’s Law). It’s about how hardware and software co-evolve, each accelerating the other.
In fact, if I had to give an alternate definition of Artificial Intelligence, I’d say AI is the natural progression of software taking advantage of advances in hardware, which in turn fuel further advances in software. (A cycle — for now.)
“Moore’s Law was so important in the history of computing because it drove down computing costs,” Huang told TechCrunch. “The same thing is going to happen with inference [AI] where we drive up the performance, and as a result, the cost of inference is going to be less.”
(Source: TechCrunch, Jan 7, 2025)
AI is building by different rules.
AI is a force-multiplier, and it seems to be force-multiplying its own new path, universally. While I knew that already, the dust is only now starting to settle for me on how it’s all converging. I can foresee a day when “AI-first” coding is just normal. And this Blackwell chip makes that future seem just around the corner. Even with AI hallucinations… that day is coming.
I look forward to seeing if my inferences hold up about this convergence or if I’ve just fallen for the hype.
Either way, we are all on a journey of constant learning. And sharing and collaborating with each other as we learn is the only way to make sense of the immense changes and opportunities before us with AI.
I’d love to hear your thoughts.