The Intel Ethernet Connection X722 is a network controller embedded in the Intel C624 "Lewisburg" PCH chipset of Lenovo ThinkSystem servers. The controller connects to the available 1 GbE and 10 GbE LAN-on-motherboard (LOM) adapter cards and onboard connectors to provide a comprehensive 1 GbE / 10 GbE networking solution for ThinkSystem customers.
ThinkSystem servers support either 10 Gb Ethernet copper or optical connections, or Gigabit Ethernet connections, depending on the server model.
The Intel X722 controller is optimized for data center, cloud, and mobile applications and includes the following features:
- VXLAN/NVGRE hardware offloads: These stateless offloads preserve application performance for overlay networks. With these offloads, network traffic can be distributed across CPU cores. At the same time, the controller offloads large send offload (LSO), generic segmentation offload (GSO), and checksum calculation from the host software, which reduces CPU overhead.
- Low latency: Intel Ethernet Flow Director delivers hardware-based application steering, and Intel Data Direct I/O (DDIO) makes the processor cache, rather than main memory, the primary destination and source of I/O data.
- Virtualization performance: With Intel Virtualization Technology (VT), the controller delivers better I/O performance in virtualized server environments. The controller reduces I/O bottlenecks by providing intelligent offloads for networking traffic per virtual machine (VM), which enables near-line-rate speeds for small packets and supports a nearly unlimited number of isolated traffic flows so that you can scale your cloud environment.
- Next-generation VMDq: The controller supports up to 128 VMDq VMs and offers an enhanced Quality of Service (QoS) feature by providing weighted round-robin servicing for the Tx data. The controller offloads the data-sorting functionality from the hypervisor to the network silicon, which improves data throughput and CPU usage.
- SR-IOV implementation: Provides an implementation of the PCI-SIG standard for I/O virtualization. The physical configuration of each port is divided into multiple virtual ports, and each virtual port is assigned directly to an individual VM, bypassing the virtual switch in the hypervisor, which results in near-native performance.
- iWARP RDMA support: Implements kernel bypass and direct data placement, which allows for more efficient high-speed networking by eliminating intermediate queues and network-related interrupts.
- VM load balancing: Provides traffic load balancing (Tx and Rx) across VMs that are bound to the team interface. It also provides fault tolerance in the event of a switch, port, or cable failure.
- Auto-detect (PnP) feature for the LOM adapters, enabling you to change LOM adapters (e.g., from a 1 GbE LOM to a 10 GbE LOM); the network interface automatically reconfigures during the boot process.
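On Linux, the stateless offloads described above surface as feature flags in the output of `ethtool -k <interface>`; the VXLAN (UDP tunnel) and NVGRE segmentation offloads appear as `tx-udp_tnl-segmentation` and `tx-gre-segmentation`. The following sketch parses that output into a dictionary; the interface name and sample output are illustrative, not captured from a live X722 system:

```python
# Sketch: parse `ethtool -k <iface>` style output into {feature: enabled}
# pairs to check whether tunnel segmentation offloads are active.
# The sample text below is illustrative, not from real hardware.

def parse_offload_features(ethtool_output: str) -> dict:
    """Map each offload feature name to True (on) or False (off)."""
    features = {}
    for line in ethtool_output.splitlines():
        line = line.strip()
        if ":" not in line or line.endswith(":"):
            continue  # skip the "Features for <iface>:" header
        name, _, state = line.partition(":")
        # states look like "on", "off", or "off [fixed]"
        features[name.strip()] = state.strip().startswith("on")
    return features

SAMPLE = """\
Features for eth0:
tx-checksumming: on
tx-udp_tnl-segmentation: on
tx-gre-segmentation: on
generic-segmentation-offload: on
large-receive-offload: off [fixed]
"""

feats = parse_offload_features(SAMPLE)
print(feats["tx-udp_tnl-segmentation"])  # → True (VXLAN segmentation offload on)
```

On a live system the same check would run against the real `ethtool -k` output for the LOM interface; features marked `[fixed]` cannot be toggled with `ethtool -K`.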
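The SR-IOV virtual-port split described above is driven from the host OS: on Linux, virtual functions (VFs) are created by writing a count to the PCI device's `sriov_numvfs` sysfs attribute. A minimal sketch, assuming a hypothetical interface name `eth0` (the write itself requires root and real SR-IOV-capable hardware, so it is guarded):

```python
# Sketch: create SR-IOV virtual functions via the standard Linux sysfs
# interface. The interface name "eth0" and the VF count are illustrative.

from pathlib import Path

def sriov_numvfs_path(iface: str) -> Path:
    """sysfs attribute that controls the VF count for a netdev's PCI device."""
    return Path("/sys/class/net") / iface / "device" / "sriov_numvfs"

def enable_vfs(iface: str, count: int) -> None:
    path = sriov_numvfs_path(iface)
    if not path.exists():
        raise FileNotFoundError(f"{iface} does not expose SR-IOV ({path})")
    path.write_text("0")          # VF count must be reset to 0 before changing it
    path.write_text(str(count))   # create `count` virtual functions

print(sriov_numvfs_path("eth0"))  # → /sys/class/net/eth0/device/sriov_numvfs
```

Each VF created this way appears as a separate PCI function that the hypervisor can pass through to a VM, bypassing the virtual switch as described above.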