At COMPUTEX 2025 on May 21st, 2025, AMD’s Jack Huynh—Senior VP and GM of the Computing and Graphics Group—unveiled a product vision anchored in one central idea: small is powerful. This year’s keynote revolved around the shift from centralized computing to decentralized intelligence—AI PCs, edge inference, and workstations that rival cloud performance.
AMD’s announcements spanned three domains:
- Gaming: FSR Redstone and Radeon RX 9060 XT bring path-traced visuals and AI rendering to the mid-range.
- AI PCs: Ryzen AI 300 Series delivers up to 50 TOPS of local inference performance.
- Workstations: Threadripper PRO 9000 and Radeon AI PRO R9700 target professional AI developers and compute-intensive industries.
Let’s unpack the technical and strategic highlights.
1. FSR Redstone: Machine Learning Meets Real-Time Path Tracing
The Technology
FSR Redstone is AMD’s most ambitious attempt yet to democratize path-traced rendering. It combines:
- Neural Radiance Caching (NRC) for learned lighting estimations.
- Ray Regeneration for efficient reuse of ray samples.
- Machine Learning Super Resolution (MLSR) for intelligent upscaling.
- Frame Generation to increase output FPS via temporal inference.
This hybrid ML pipeline enables real-time lighting effects—like dynamic GI, soft shadows, and volumetric fog—on GPUs without dedicated RT cores.
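To make the caching idea concrete, here is a toy sketch of a radiance cache in Python: radiance samples from traced paths are averaged into quantized (position, direction) cells and reused on later lookups instead of tracing new rays. This is purely illustrative; AMD's NRC uses a small neural network trained online rather than a lookup table, and all names below are hypothetical.

```python
import math

class ToyRadianceCache:
    """Minimal radiance cache: keeps a running average of radiance samples
    per quantized (position, direction) cell. Illustrative only; a real
    Neural Radiance Cache learns a compact network, not a hash table."""

    def __init__(self, cell_size=1.0, dir_bins=8, blend=0.2):
        self.cell_size = cell_size
        self.dir_bins = dir_bins
        self.blend = blend   # EMA weight given to each new sample
        self.cache = {}      # key -> averaged RGB radiance

    def _key(self, pos, direction):
        # Quantize position to a grid cell and direction to azimuth bins.
        cell = tuple(int(c // self.cell_size) for c in pos)
        azimuth = math.atan2(direction[1], direction[0])
        dbin = int((azimuth + math.pi) / (2 * math.pi) * self.dir_bins) % self.dir_bins
        return cell, dbin

    def update(self, pos, direction, radiance):
        # Blend a new path-traced sample into the cached estimate (EMA).
        k = self._key(pos, direction)
        if k in self.cache:
            old = self.cache[k]
            self.cache[k] = tuple(
                (1 - self.blend) * o + self.blend * r
                for o, r in zip(old, radiance)
            )
        else:
            self.cache[k] = tuple(radiance)

    def query(self, pos, direction):
        # Cache hit: reuse the learned estimate instead of tracing again.
        return self.cache.get(self._key(pos, direction))
```

The same reuse-over-retrace principle underlies Ray Regeneration: amortize expensive samples across frames, then let the upscaler and frame generator fill in the rest.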
Why It Matters
By applying learned priors to ray-based reconstruction, Redstone achieves the appearance of path-traced realism while maintaining playable frame rates. This lowers the barrier for mid-range GPUs to deliver high-fidelity visuals.
Caveats
The ML approach, while efficient, is heavily scene-dependent. Generalization to procedurally generated content remains an open question. Visual artifacts can emerge in dynamic geometry, and upscaling introduces trade-offs in motion stability.
Competitive Lens
Feature | FSR Redstone | DLSS 3.5 | XeSS |
---|---|---|---|
Neural Rendering | ✅ | ✅ | ✅ |
Ray Regeneration | ✅ | ❌ | ⚠️ Partial |
Open Source Availability | ✅ (FidelityFX / GPUOpen) | ❌ | ⚠️ Partial |
Specialized Hardware Req. | ❌ | ✅ (Tensor Cores) | ❌ |
In essence: Redstone is AMD’s answer to DLSS—built on open standards, deployable without AI-specific silicon.
2. Ryzen AI 300 Series: On-Device Intelligence for the AI PC Era
The Technology
The new Ryzen AI 300 APUs feature a dedicated XDNA 2-based NPU delivering up to 50 TOPS (INT8). This enables local execution of:
- Quantized LLMs (e.g., Llama 3 8B)
- Real-time transcription and translation
- Code assist and image editing
- Visual search and contextual agents
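The quantized-LLM item deserves a quick illustration. The sketch below shows symmetric per-tensor INT8 quantization, the basic trick that lets a model's weights occupy roughly half the memory of FP16. Real toolchains use per-channel scales, calibration data, and packed storage, so treat this as a minimal sketch.

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]
    using one shared scale. Minimal sketch, not a production scheme."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate float weights for compute (or fused on the fly).
    return [qi * scale for qi in q]
```

The quantization error is bounded by half a step (scale / 2) per weight, which is why INT8, and even INT4 with finer-grained scales, remains usable for LLM inference.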
The architecture distributes inference across CPU, GPU, and NPU with intelligent workload balancing.
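AMD has not published its balancing heuristics, but the idea can be sketched as a simple router: sustained quantized matrix multiplies go to the NPU, large floating-point ops to the GPU, and small or control-heavy ops stay on the CPU. Every field and threshold below is a hypothetical illustration, not AMD's scheduler.

```python
def route_op(op):
    """Pick an execution unit for one inference op (hypothetical heuristic):
    quantized GEMMs -> NPU, large FP workloads -> GPU, the rest -> CPU."""
    kind, precision, flops = op["kind"], op["precision"], op["flops"]
    if kind == "matmul" and precision in ("int8", "int4"):
        return "NPU"   # quantized GEMMs map well onto NPU tiles
    if flops > 1e8:
        return "GPU"   # big floating-point ops favor the iGPU
    return "CPU"       # small / branchy ops stay on the CPU cores

ops = [
    {"kind": "matmul", "precision": "int8", "flops": 5e9},
    {"kind": "softmax", "precision": "fp16", "flops": 2e9},
    {"kind": "embedding_lookup", "precision": "fp32", "flops": 1e4},
]
plan = [route_op(op) for op in ops]  # one device assignment per op
```

In practice this decision is made by the runtime (e.g., an ONNX Runtime execution provider) rather than by application code.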
Why It Matters
Local inferencing improves latency, preserves privacy, and reduces cloud dependencies. In regulated industries and latency-critical workflows, this is a step-function improvement.
Ecosystem Challenges
- Quantized model availability is still thin.
- ROCm integration into PyTorch/ONNX toolchains is ongoing.
- AMD’s tooling for model optimization lacks the maturity of NVIDIA’s TensorRT or Apple’s Core ML.
Competitive Positioning
Platform | NPU TOPS (INT8) | Architecture | Ecosystem Openness | Primary OS |
---|---|---|---|---|
Ryzen AI 300 | 50 | x86 + XDNA 2 | High (ROCm, ONNX) | Windows, Linux |
Apple M4 | ~38 | ARM + Neural Engine | Low (Core ML only) | macOS, iOS |
Snapdragon X | ~45 | ARM + Hexagon NPU | Medium | Windows |
Ryzen AI PCs position AMD as the open x86 alternative to Apple’s silicon dominance in local AI workflows.
3. Threadripper PRO 9000 & Radeon AI PRO R9700: Workstation-Class AI Development
The Technology
Threadripper PRO 9000 (“Shimada Peak”):
- 96 Zen 5 cores / 192 threads
- 8-channel DDR5 ECC memory, up to 4TB
- 128 PCIe 5.0 lanes
- AMD PRO Security (SEV-SNP, memory encryption)
Radeon AI PRO R9700:
- 1,500+ TOPS (INT4)
- 32GB GDDR6
- ROCm-native backend for ONNX and PyTorch
This pairing provides a serious platform for AI fine-tuning, quantization, and even training of small LLMs.
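Some back-of-envelope arithmetic shows why 32GB of VRAM supports inference and quantization comfortably but limits full fine-tuning to small models. The sketch assumes an Adam-style optimizer (FP32 moments) and ignores activations and KV cache; the byte counts are standard rules of thumb, not AMD figures.

```python
def model_gib(params_b, bytes_per_param):
    """Approximate memory in GiB for params_b billion parameters.
    Weights only: ignores activations, KV cache, and framework overhead."""
    return params_b * 1e9 * bytes_per_param / 2**30

fp16_weights = model_gib(8, 2)    # FP16 inference: ~14.9 GiB, fits in 32GB
int4_weights = model_gib(8, 0.5)  # INT4 quantized: ~3.7 GiB, lots of headroom

# Full FP16 fine-tuning with an Adam-style optimizer adds gradients
# (2 bytes/param) plus FP32 first/second moments (~8 bytes/param):
full_finetune = model_gib(8, 2 + 2 + 8)  # ~89 GiB, well beyond 32GB
```

That gap is exactly why parameter-efficient methods (LoRA-style adapters, quantized base weights) dominate on single-GPU workstations.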
Why It Matters
This workstation tier offers an escape hatch from expensive cloud runtimes. For developers, AI researchers, and enterprise teams, it enables:
- Local, iterative model tuning
- Predictable hardware costs
- Privacy-first workflows (especially in defense, healthcare, and legal)
Trade-offs
ROCm continues to trail CUDA in terms of ecosystem depth and performance tuning. While AMD offers competitive raw throughput, software maturity—especially for frameworks like JAX or Triton—is still catching up.
Competitive Analysis
Metric | TR PRO 9000 + R9700 | NVIDIA RTX 6000 Ada |
---|---|---|
CPU Cores | 96 (Zen 5) | N/A |
GPU AI Perf (INT4) | ~1,500 TOPS | ~1,700 TOPS |
VRAM | 32GB GDDR6 | 48GB GDDR6 ECC |
Ecosystem Support | ROCm (moderate) | CUDA (mature) |
Distributed Training | ❌ (limited) | ✅ (NCCL multi-GPU; Ada-generation cards lack NVLink) |
Local LLM Inference | ✅ (8B–13B) | ✅ |
AMD’s strength lies in performance-per-dollar and data locality. For small-to-mid-sized models, it offers near-cloud throughput on your desktop.
Final Thoughts: Decentralized Intelligence is the New Normal
COMPUTEX 2025 made one thing clear: the future of compute is not just faster—it’s closer. AMD’s platform strategy shifts the emphasis from scale to locality:
- From cloud inferencing to on-device AI
- From GPU farms to quantized workstations
- From centralized render clusters to ML-accelerated game engines
With open software stacks, power-efficient inference, and maturing hardware, AMD positions itself as a viable counterweight to NVIDIA and Apple in the edge-AI era.
For engineering leaders and CTOs, this represents an inflection point. The question is no longer “When will AI arrive on the edge?” It’s already here. The next question is: What will you build with it?