Welcome to ExtraMile by SecureITWorld, where we go the distance to bring you conversations with the innovators shaping our digital future. Today, we are excited to have Darrick Horton with us, the Founder and CEO of TensorWave, a company redefining how AI infrastructure is built.
TensorWave is a company that provides powerful, performance-focused AI infrastructure. They stand out by offering the massive computing power required for modern AI, but with a unique approach. Instead of relying on closed systems, the company has fully embraced AMD’s open ecosystem, constructing some of the world’s largest and most advanced data centers. They recently made headlines with their deployment of over 8,000 AMD Instinct MI325X GPUs and are now rolling out the latest MI355X technology.
Darrick’s background is far from typical for a tech CEO. He began his career working on classified projects at Lockheed Martin’s Skunk Works and now leads TensorWave. He discusses the real gap in the AI market, the challenges of innovation, and how his team is deploying thousands of high-performance chips to offer businesses more flexible and efficient solutions. He also reflects on leadership lessons learned from high-stakes projects where failure was not an option.
1. Launching a firm in the AI sector is not an easy thing to do. What inspired you to start TensorWave, and what gap in the market were you aiming to fill?
Darrick. The idea for TensorWave was born out of a glaring gap in the AI infrastructure market. During my tenure at Lockheed Martin and other tech-forward companies, I noticed a recurring pain point: organizations were desperate for massive computing power to handle AI workloads but were held back by sky-high costs, overwhelming complexity, and vendor lock-in.
We founded TensorWave to democratize access to high-performance AI computing through open ecosystems. The market was dominated by proprietary solutions that trapped customers in single-vendor environments with unpredictable costs. We aimed to bridge that gap by creating scalable, cost-effective AI infrastructure powered exclusively by AMD's open ecosystem, giving businesses predictable performance without breaking the bank or requiring a team of specialists to maintain.
2. You have had an impressive career, from being involved in cutting-edge projects at companies like Lockheed Martin to becoming an entrepreneur. How has your experience influenced your leadership style and innovation in AI?
Darrick. My career path has profoundly shaped both my leadership approach and how I think about innovation in AI. At Lockheed Martin, I learned the importance of methodical precision and reliability when dealing with mission-critical systems. When lives depend on technology working correctly, you develop a different relationship with quality assurance and risk management.
That experience taught me to lead with a balance of ambition and pragmatism. At TensorWave, we live by a simple mantra: aggressively push boundaries in R&D, but ruthlessly prioritize what makes it to market. I'm a firm believer in fostering a culture where innovation thrives through collaboration rather than competition. I empower our teams to take bold, calculated risks, but with one non-negotiable: whatever we ship must meet the highest standards of excellence for our customers.
3. TensorWave just announced deployment of AMD's new MI355X GPUs, making you one of the first cloud providers to bring this technology to market. How does the MI355X stand out from other AI cloud solutions, and what kind of businesses can benefit the most from it?
Darrick. The AMD Instinct™ MI355X is a massive leap forward for AI infrastructure. With 288GB of HBM3E memory and 8TB/s of memory bandwidth, the MI355X unlocks new performance ceilings for large model training and high-throughput inference. That means needing fewer GPUs while achieving faster runtimes and better economics, especially at scale.
What makes it stand out in the AI cloud space is how purpose-built it is. This isn’t a general-purpose GPU retrofitted for AI. The MI355X is optimized from the ground up for the kinds of workloads our customers are running: LLM training, fine-tuning, retrieval-augmented generation, and latency-sensitive inference.
We see the biggest gains for companies building or deploying large language models, enterprise copilots, and generative AI applications that demand both memory capacity and compute throughput. Whether you’re training foundation models or running production-grade inference, the MI355X gives you real power with real control.
We're excited to be adding it to TensorWave's AI infrastructure and getting customers on this very soon.
4. TensorWave has deployed over 8,000 AMD Instinct MI325X GPUs in a dedicated training cluster and is now rolling out MI355X technology. Can you tell us more about this progression and why AMD-focused infrastructure at this scale is so important?
Darrick. Our 8,192-GPU MI325X cluster represents the largest AMD-specific AI training infrastructure in North America, and it's been instrumental in proving that open alternatives to proprietary solutions can deliver superior performance at enterprise scale.
The success of our MI325X cluster directly enabled our early access to the MI355X technology. When you're deploying thousands of GPUs and optimizing at scale, you develop insights that are invaluable for next-generation hardware. Our experience with the MI325X cluster helped us optimize our infrastructure stack specifically for the MI355X's enhanced capabilities.
The importance of AMD-focused infrastructure at this scale goes beyond just avoiding vendor lock-in. It's about creating an entire ecosystem optimized for open standards. Every layer of our stack has been tuned specifically for AMD architectures across multiple generations.
This deep specialization means our customers get infrastructure that's been battle-tested at scale with AMD hardware. The result is infrastructure that's not just competitive with proprietary alternatives, but often superior in terms of both performance and cost-effectiveness.
5. GigaIO's FabreX technology continues to be a key part of TensorNODE. For those who are not tech experts, can you explain what FabreX does and why it remains so important for AI computing?
Darrick. Imagine your computer's components - the processor, memory, storage, and graphics card. They all need to talk to each other constantly. In traditional computing, these connections are relatively straightforward. But in AI computing, we're dealing with hundreds or thousands of specialized processors that all need to communicate simultaneously without bottlenecks.
Traditional computing infrastructure forces these components to communicate through rigid, predefined pathways that become congested. FabreX technology essentially creates a dynamic transportation system for data, allowing us to reconfigure connections on the fly based on the specific needs of each workload.
What makes this particularly impactful with the MI355X is that these new GPUs have unprecedented memory bandwidth capabilities. FabreX ensures that this bandwidth isn't wasted on communication bottlenecks, allowing our customers to fully utilize the 8TB/s memory bandwidth that each GPU provides. This translates directly into faster training times and more efficient inference.
6. You just announced a massive $100 million Series A funding round co-led by Magnetar and AMD Ventures, along with deploying over 8,000 AMD Instinct MI325X GPUs. How does this funding and infrastructure deployment accelerate your ability to bring cutting-edge technology like the MI355X to market?
Darrick. This $100 million Series A represents a transformational moment for TensorWave and validates our vision of democratizing access to cutting-edge AI compute. Having AMD Ventures as a co-lead investor alongside Magnetar, plus continued support from Maverick Silicon and Nexus Venture Partners, demonstrates strong confidence in our AMD-focused strategy.
The funding will enable us to deploy our massive 8,192 AMD Instinct MI325X GPU training cluster, establishing us as a major player in the AI infrastructure ecosystem. This isn't just about adding capacity; we're creating an entirely new category of enterprise-ready AI infrastructure that delivers the memory headroom and performance reliability that next-generation models demand.
What's particularly exciting is that we're on track to close the year with a revenue run rate exceeding $100 million - a 20x year-over-year increase. This growth trajectory allowed us to be among the first to secure and deploy the latest MI355X technology. The combination of our proven MI325X cluster performance and early MI355X access gives our customers unparalleled choice in AMD's ecosystem.
The strategic investment from AMD Ventures also ensures we have direct access to AMD's roadmap and can optimize our infrastructure for future generations of their technology. This partnership approach, rather than just being a customer, gives us the agility to stay ahead of industry shifts and bring breakthrough capabilities to market faster.
7. TensorWave has partnered with companies like MK1 and TECfusions. How do these partnerships help you maximize the potential of technologies like the MI355X?
Darrick. Our partnerships are fundamental to delivering the full potential of breakthrough hardware like the MI355X. With the massive compute density and power requirements of these new GPUs, partnerships become even more critical.
MK1 brings bleeding-edge advancements in software optimization and inference acceleration. Together, we’re refining how models run on MI355X clusters, achieving faster inference, higher throughput, and lower costs for real-world generative AI use cases.
Our collaboration with TECfusions has become even more important with the MI355X deployment. These GPUs generate significant heat, and TECfusions' advanced cooling technologies have enabled us to maintain optimal performance while improving our compute density by 28% and reducing power consumption. This directly addresses the sustainability challenges of large-scale AI deployments.
We're also finalizing an exciting collaboration with a major cloud provider that will make our AMD-optimized infrastructure, including MI355X access, available through their marketplace. This will dramatically expand our reach and allow customers to benefit from our hardware innovations without managing physical infrastructure.
8. With recent industry reports projecting the AI infrastructure market to exceed $400 billion by 2027 and TensorWave on track for a $100 million revenue run rate, how do you see the company positioned to capture this growing market?
Darrick. "The numbers speak to an incredible market opportunity, but what's most exciting is that we're not just participating in this growth - we're helping define what enterprise-ready AI infrastructure looks like. Our 20x year-over-year revenue growth demonstrates that there's massive demand for alternatives to traditional infrastructure options.
The capacity-constrained market we're operating in means that adding compute isn't enough - we need to add the right kind of compute. Our focus on AMD's ecosystem, combined with our proven ability to deploy at scale, positions us to capture a significant portion of this expanding market. We're bringing an entirely new class of compute to the market.
What gives me confidence is the strategic validation we're seeing. Having AMD Ventures co-lead our Series A isn't just about capital - it's about ensuring AMD's latest technologies are available in the cloud and at scale for leading AI companies and enterprises. This partnership approach ensures we're always at the forefront of what's possible with AMD's roadmap.
Our partnerships with firms like TECfusions and our proven track record with large-scale deployments position us perfectly to meet the surging demand for AI infrastructure. As Piotr Tomasik, our President, puts it - we're solving the critical infrastructure bottleneck facing AI adoption, and the market is responding accordingly."
9. Looking ahead, where do you see the AI industry heading in the next 5-10 years, and how is TensorWave positioned to stay ahead of these trends?
Darrick. "I see several major shifts on the horizon. First, we're moving toward a more distributed AI paradigm, where models will be trained and deployed closer to data sources rather than in centralized data centers. This edge-to-cloud continuum will require fundamentally different infrastructure architectures.
Second, multimodal AI that can seamlessly integrate text, vision, speech, and other data types will become the standard. This will drive demand for more heterogeneous computing resources that can efficiently handle diverse workloads. The MI355X's massive memory bandwidth and capacity make it ideal for these complex, multimodal applications.
Third, I expect significant advances in AI hardware specialization, with accelerators designed for specific types of models. AMD's CDNA architecture is already leading this trend, and our exclusive focus positions us perfectly to capitalize on these advances.
At TensorWave, we're preparing for these shifts through our commitment to open ecosystems and AMD's roadmap. Our product architecture emphasizes flexibility and modularity, allowing our infrastructure to adapt as hardware innovations emerge. Perhaps most importantly, we're building deep partnerships with both AMD and model developers to ensure we understand evolving requirements from both perspectives."
10. TensorWave has achieved remarkable milestones in a short period - from early technology recognition to a $100 million Series A and deploying over 8,000 MI325X GPUs. What has been your proudest moment to date, and what challenges have you overcome to reach this point?
Darrick. "It's hard to choose just one moment, but closing our $100 million Series A with AMD Ventures as co-lead while simultaneously deploying our 8,192 MI325X GPU cluster represents the culmination of everything we've been building toward. This wasn't just about reaching a funding milestone - it was about proving that our AMD-focused vision could scale to enterprise levels and deliver the performance that the market demands.
What makes this achievement even more meaningful is the journey to get here. Early on, many questioned our decision to focus exclusively on AMD when the market was dominated by proprietary solutions. Building the expertise to deploy over 8,000 GPUs while maintaining optimal performance required incredible dedication from our team and deep partnerships with AMD.
The validation of being among the first to deploy MI355X technology, combined with our proven track record at massive scale, shows that we've not just survived the skepticism - we've proven that open ecosystems can outperform closed alternatives. Our 20x year-over-year revenue growth speaks to market demand, but it's the technical achievements that I'm most proud of.
From our engineers who constantly push the boundaries of what's possible with AMD hardware, to our ops team who ensure everything runs like clockwork at unprecedented scale - it's their passion and creativity that got us here. We're not just offering an alternative; we're proving that choice, openness, and technical excellence drive better outcomes for our customers."
11. If you were not heading this company, what do you think you would be doing?
Darrick. "I'd probably still be knee-deep in cutting-edge tech – just in a different flavor. My roots in nuclear fusion at Lockheed Martin and NASA-funded plasma physics research have left me with a lifelong fascination for solving big, audacious problems.
I've always been drawn to the intersection of technology and impact. Engineering has incredible power to transform lives – whether it's through sustainable infrastructure, education, or healthcare. I might be working on fusion energy, space exploration, or climate solutions.
But honestly? I'm exactly where I want to be. Building TensorWave lets me combine my passion for cutting-edge tech with the thrill of entrepreneurship, all while democratizing access to AI infrastructure and proving that open ecosystems can outperform closed alternatives. Every time we deploy breakthrough technology like the MI355X, we're not just advancing our business - we're advancing the entire industry toward more open, competitive, and innovative solutions."