XConn’s breakthrough establishes hybrid switching as a cornerstone of next-generation AI infrastructure.
The architecture of modern computing is being reshaped by AI, and at the center of that transformation is a new class of interconnect technology: the hybrid PCIe/CXL switch. These devices sit at the crossroads of compute, memory, and acceleration, acting as intelligent traffic controllers that dynamically route data between CPUs, GPUs, accelerators, and, increasingly, pooled memory. They represent a fundamental evolution beyond traditional PCIe switches and are fast becoming foundational to AI data centers.

Defining the Category: What Is a Hybrid PCIe/CXL Switch?
A hybrid PCIe/CXL switch is a high-performance fabric device that supports both PCI Express (PCIe) and Compute Express Link (CXL) protocols within a single, unified switching architecture. Unlike conventional PCIe switches, which primarily focus on connecting devices like GPUs, NICs, and storage controllers to a host CPU, hybrid PCIe/CXL switches extend this role by also enabling memory expansion, sharing, and disaggregation across multiple devices and nodes.
In practical terms, this means that a hybrid switch must manage two fundamentally different but increasingly intertwined workloads:
PCIe transactions: optimized for high-throughput, low-latency device communication between accelerators and hosts.
CXL transactions: designed to treat remote memory as if it were local, enabling shared memory pools, memory expansion, and memory tiering across systems.
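The payoff of CXL's "remote memory as if it were local" semantics can be seen in a toy tiering model. The nanosecond figures below are illustrative assumptions for a local-DRAM vs. fabric-attached hierarchy, not measurements of any particular switch:

```python
# Illustrative latency model for memory tiering across a CXL fabric.
# The nanosecond figures are assumptions for illustration only.

TIER_LATENCY_NS = {
    "local_dram": 100,     # DRAM attached directly to the host CPU
    "cxl_direct": 170,     # CXL.mem device attached point-to-point
    "cxl_switched": 250,   # CXL.mem device reached through one switch hop
}

def avg_access_ns(hit_ratios: dict) -> float:
    """Expected access latency given the fraction of accesses served by each tier."""
    assert abs(sum(hit_ratios.values()) - 1.0) < 1e-9
    return sum(TIER_LATENCY_NS[t] * r for t, r in hit_ratios.items())

# A workload keeping hot pages local and spilling cold pages to the pool:
print(avg_access_ns({"local_dram": 0.8, "cxl_switched": 0.2}))  # 130.0
```

Even with a switch hop in the path, average latency stays within a small multiple of local DRAM when the hot working set remains local, which is what makes tiering across a fabric practical.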
The innovation of hybrid PCIe/CXL switches lies in their ability to simultaneously handle both device connectivity and memory fabric orchestration, rather than forcing systems to rely on separate networking or switching layers. This creates a unified, coherent interconnect that blurs the traditional boundaries between compute, memory, and I/O.
Architecturally, these switches must support features such as:
Multi-host connectivity: allows multiple CPUs to access shared devices and memory.
Dynamic partitioning of resources: enables workloads to scale up or down without physical reconfiguration.
Low-latency paths for accelerator communication: critical for AI training and inference.
CXL memory semantics: includes cache coherency and remote memory access at near-DRAM speeds.
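Dynamic partitioning, in particular, amounts to maintaining a mapping from downstream device ports to host ports that the fabric manager can update at runtime. The sketch below is hypothetical; its class and method names are invented for illustration and do not correspond to any real switch's management API:

```python
# A minimal sketch of dynamic resource partitioning on a multi-host switch.
# All names here are hypothetical; real fabric-manager interfaces differ.

class FabricPartition:
    """Maps downstream ports (GPUs, CXL memory devices) to host ports."""

    def __init__(self) -> None:
        self._assignments: dict = {}  # device port -> host port

    def bind(self, device: str, host: str) -> None:
        self._assignments[device] = host

    def rebind(self, device: str, new_host: str) -> None:
        # Re-partitioning is a table update, not a recabling job.
        self._assignments[device] = new_host

    def devices_of(self, host: str) -> list:
        return [d for d, h in self._assignments.items() if h == host]

fabric = FabricPartition()
fabric.bind("gpu0", "hostA")
fabric.bind("cxl_mem0", "hostA")
fabric.rebind("cxl_mem0", "hostB")   # move pooled memory to hostB on demand
print(fabric.devices_of("hostB"))    # ['cxl_mem0']
```

The point of the sketch is the `rebind` step: resources move between hosts by rewriting the fabric's routing state, with no physical reconfiguration.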
As AI workloads grow larger and more memory-intensive, hybrid PCIe/CXL switches are emerging as the critical enablers of scalable, flexible, and efficient data center architectures.
Memorializing the Industry First: XConn’s Breakthrough
Against this backdrop, XConn has delivered what can rightfully be described as the industry’s first true hybrid PCIe/CXL switch, a milestone that marks a turning point in system architecture for AI and high-performance computing.
While other vendors had previously demonstrated either PCIe switching or CXL memory connectivity in isolation, XConn was the first to integrate both capabilities into a single, production-ready platform that could be deployed in real-world AI and enterprise environments. This was not merely an incremental upgrade, but a category-defining innovation that established hybrid PCIe/CXL switching as a distinct and necessary layer in the AI stack.
XConn’s achievement is significant because it translated the promise of CXL from lab prototypes and standards discussions into a tangible, deployable product. By doing so, XConn bridged the gap between theory and practice, proving that memory disaggregation and accelerator connectivity could coexist within the same switching fabric without compromising performance, reliability, or compatibility.
This “industry first” set a new benchmark for system architects and infrastructure vendors, effectively defining what a hybrid PCIe/CXL switch should be and how it should behave in an AI data center.
Product Overview: Inside XConn’s Hybrid Switch
At a high level, XConn’s hybrid PCIe/CXL switch is designed to act as a central connectivity hub for AI servers, enabling flexible composition of compute, acceleration, and memory resources across racks or even multiple servers.
The switch supports high-bandwidth PCIe Gen5 lanes, with forward-looking support for Gen6, ensuring that GPUs, DPUs, and other accelerators can communicate at full speed with minimal latency. At the same time, its native CXL capabilities allow it to connect to external memory devices, such as CXL-attached DRAM or persistent memory, creating shared memory pools that multiple hosts can access.
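The Gen5-to-Gen6 jump can be put in rough numbers. PCIe Gen5 signals at 32 GT/s per lane with 128b/130b encoding; Gen6 doubles the rate to 64 GT/s using PAM4 signaling and FLIT-based encoding. A back-of-the-envelope per-direction calculation for a x16 link (ignoring FLIT and packet overhead, so these are upper bounds):

```python
# Back-of-the-envelope per-direction bandwidth of a x16 PCIe link.
# Protocol overhead beyond line encoding is ignored (a simplification).

def x16_bandwidth_gbps(gt_per_s: float, encoding_efficiency: float) -> float:
    """Raw GB/s for 16 lanes: transfer rate * encoding efficiency / 8 bits."""
    return 16 * gt_per_s * encoding_efficiency / 8

gen5 = x16_bandwidth_gbps(32, 128 / 130)  # 128b/130b encoding -> ~63 GB/s
gen6 = x16_bandwidth_gbps(64, 1.0)        # PAM4 + FLIT; ~128 GB/s before FLIT overhead
print(round(gen5, 1), round(gen6, 1))     # 63.0 128.0
```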
One of the most powerful aspects of XConn’s design is its software-defined fabric control layer, which allows administrators to dynamically allocate and reconfigure resources without physically rewiring systems. This means that an AI workload requiring massive memory can be provisioned on demand, drawing from a shared CXL memory pool rather than being limited to the memory physically attached to a single server.
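Provisioning from a shared pool rather than from a single server's DIMMs can be modeled as a simple capacity ledger. This is an illustrative sketch with invented names and sizes, not XConn's control-plane interface:

```python
# Hypothetical sketch: granting a job memory from a shared CXL pool
# instead of one server's local DIMMs. Names and sizes are illustrative.

class CxlMemoryPool:
    def __init__(self, capacity_gib: int) -> None:
        self.capacity_gib = capacity_gib
        self.allocations: dict = {}  # job name -> GiB granted

    def free_gib(self) -> int:
        return self.capacity_gib - sum(self.allocations.values())

    def provision(self, job: str, gib: int) -> bool:
        """Grant the job a slice of pooled memory if capacity remains."""
        if gib > self.free_gib():
            return False
        self.allocations[job] = self.allocations.get(job, 0) + gib
        return True

    def release(self, job: str) -> None:
        self.allocations.pop(job, None)

pool = CxlMemoryPool(capacity_gib=2048)      # a 2 TiB shared pool
assert pool.provision("llm-training", 1536)  # far beyond one server's DIMMs
pool.release("llm-training")                 # capacity returns to the pool
```

The `release` step is what distinguishes pooling from static attachment: when the job finishes, the memory is reclaimable by any host on the fabric rather than stranded in one chassis.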
Additionally, the switch incorporates advanced quality-of-service controls, security features such as encryption and access isolation, and telemetry for real-time monitoring of performance and utilization. These capabilities make it suitable not only for advanced AI research environments, but also for enterprise deployments where reliability, security, and manageability are paramount.
In essence, XConn’s hybrid PCIe/CXL switch functions as the “nervous system” of a modern AI infrastructure, coordinating the flow of data between compute engines and memory resources in a way that was previously impossible.
Market Impact: Redefining AI Infrastructure and Interconnect Competition
The arrival of hybrid PCIe/CXL switches is poised to have a profound impact on the broader computing market.
First, it will accelerate the shift toward memory-centric computing, where performance is increasingly determined not just by raw compute power, but by how efficiently systems can access, share, and manage large memory pools. This will benefit AI training, large-scale inference, graph analytics, and in-memory databases alike, enabling workloads that were previously constrained by local memory limits to scale more fluidly across systems.
Second, hybrid switches will enable more composable and disaggregated data centers, reducing the need for over-provisioning and allowing organizations to pool expensive resources like GPUs and high-capacity memory. By dynamically allocating compute, acceleration, and memory resources through a unified fabric, enterprises can improve utilization, lower capital costs, and design more flexible AI infrastructures that adapt to evolving workloads rather than being locked into rigid server configurations.
Third, XConn’s technology is expected to significantly strengthen the company’s strategic position within the Ultra Accelerator Link (UALink) consortium, an open-standard industry alliance formed to challenge NVIDIA’s proprietary NVLink ecosystem. By delivering a production-ready hybrid PCIe/CXL switch, XConn provides a critical building block for an open, interoperable alternative: customers gain a path to scale-up switching infrastructure built on open standards rather than on a single vendor’s interconnect.
This competitive dynamic has broader implications for the industry. An open, standards-based interconnect model, anchored in PCIe and CXL, promises greater vendor choice, reduced lock-in, and faster innovation across the ecosystem. As more system vendors, memory suppliers, and cloud providers align around UALink and hybrid switching architectures, the balance of power in AI infrastructure interconnects could shift toward more open and modular designs.
Finally, hybrid PCIe/CXL switches will support the rise of new software paradigms, including AI memory layers, agent-based systems, and distributed model orchestration frameworks that rely on shared, persistent memory. These software innovations depend on hardware that can treat memory as a networked, first-class resource, precisely what hybrid switches enable.
Conclusion
In short, XConn’s industry-first hybrid PCIe/CXL switch does more than solve a technical problem; it helps define the blueprint for the next generation of AI infrastructure. As AI workloads continue to scale, this new class of switch will be indispensable in shaping a future where compute and memory are no longer constrained by physical boundaries, but dynamically composed across open, high-performance fabrics.
