IOCREST PCIe 4.0x1 10GbE NIC Review

Foreword

Computing performance continues to advance at a remarkable pace. Modern CPUs now routinely exceed 4GHz base frequencies, up from the sub-3GHz speeds of previous generations. Memory bandwidth has seen similar improvements, with DDR5 reaching 6400MT/s and beyond – double the 3200MT/s speeds common in the DDR4 era. The PCIe standard has kept pace, evolving from PCIe 3.0's 8GT/s per lane to PCIe 4.0's 16GT/s per lane.

This rapid advancement has highlighted an interesting challenge: enthusiasts often face a shortage of PCIe lanes rather than bandwidth limitations. Consider storage requirements – most consumers don't need the astronomical 32GT/s per lane bandwidth offered by PCIe 5.0. Even PCIe 5.0x2 (equivalent to PCIe 4.0x4) provides ample speed for demanding workloads like multi-stream 4K video editing. While PCIe 5.0x4 SSDs certainly have their place, those lanes might be better utilized for expanding platform I/O capabilities.

Meanwhile, 10GbE networking has remained relatively stagnant. The venerable Mellanox ConnectX-3 CX311A 10G SFP+ NIC, released in December 2013, used a PCIe 3.0x4 interface – a necessity at the time, since PCIe 4.0 didn't exist and x2 links were not a practical option. Dedicating four lanes of PCIe 3.0's 8GT/s bandwidth to a 10G link was wasteful, but that was never a concern in the server market.

July 2019 marked a significant milestone when AMD introduced the X570 chipset, bringing PCIe 4.0 support to the mainstream market. With double the bandwidth of PCIe 3.0, PCIe 4.0 theoretically enabled 10G networking over a single lane – matching the performance that previously required four PCIe 3.0 lanes.

The first consumer PCIe 4.0x1 10G NIC finally appeared nearly three years later, though not in a standalone product. Apple's M1 Max Mac Studio featured a Marvell Aquantia AQC113 NIC, representing a breakthrough in client-grade 10G networking by utilizing a single PCIe 4.0 lane.

Apple's AQC113 10GbE NIC with PCIe 4.0x1

Since then, enthusiasts have been waiting for a manufacturer to bring a dedicated PCIe 4.0x1 10G NIC to market. In January 2025, that wait finally ended. Introducing IOCREST's PCIe 4.0x1 10GbE NIC, powered by the magical AQC113 chip.

The Heart of Modern 10GbE: Marvell Aquantia AQC113

The Marvell Aquantia AQC113 represents a significant advancement in consumer networking technology, combining versatility with high performance in a remarkably efficient package. As part of Marvell's Scalable mGig Ethernet controller family, it brings enterprise-grade networking capabilities to client systems through a highly integrated design that combines PCIe interface, MAC, and PHY components into a single solution.

What sets the AQC113 apart is its flexible PCIe implementation, which supports Gen4, Gen3, and Gen2 interfaces with configurable lane widths (x1/x2/x4), though its PCIe 4.0x1 capability is particularly noteworthy for modern systems. To maximize performance while minimizing CPU overhead, the AQC113 incorporates several advanced features:

  • Support for multiple speeds: 10GBASE-T, 5GBASE-T, 2.5GBASE-T, 1000BASE-T, 100BASE-T, and 10BASE-T
  • MSI and MSI-X interrupt support, plus hardware offloads including LSO, RSS, and IPv4/IPv6 checksum
  • Energy Efficient Ethernet and Wake On LAN support for power management
  • Jumbo frame support up to 16KB for enhanced throughput in specific applications
  • Broad driver support covering Windows 10/11 (64-bit) and Linux 3.10+
  • UEFI and PXE remote boot capabilities

This comprehensive feature set, combined with its efficient single-lane PCIe 4.0 implementation, makes the AQC113 an ideal choice for modern high-performance networking solutions in a PCIe lane-constrained system. Its ability to deliver 10G speeds over a single PCIe 4.0 lane represents a significant milestone in consumer networking technology, offering a perfect balance of performance, compatibility, and efficiency.
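For Linux users, most of these capabilities can be checked directly once the card is up. A minimal sketch, assuming an interface name of enp1s0 (yours will differ; the in-kernel atlantic driver exposes the card like any other NIC):

    # Supported and advertised link modes (10G/5G/2.5G/1G/100M) and Wake-on-LAN state
    ethtool enp1s0

    # Offload features: LSO appears as tcp-segmentation-offload,
    # LRO as large-receive-offload, alongside the checksum offloads
    ethtool -k enp1s0 | grep -E 'segmentation|checksum|large-receive-offload'

    # RSS indirection table and hash key
    ethtool -x enp1s0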

The IOCREST PCIe 4.0x1 10GbE NIC

Unboxing

The IOCREST NIC arrives in a minimalist cardboard package, consistent with the company's no-frills approach to packaging that I've come to expect as a long-time user of their products. Inside, the NIC is properly protected in an anti-static bag, accompanied by product documentation and a miniCD containing drivers. A thoughtful inclusion is the low-profile bracket suitable for 1U/2U servers.

The NIC's design is simple. A substantial heatsink dominates the board, concealing the AQC113 chip underneath. The most striking feature is its PCIe x1 connector – a rarity among 10GbE NICs that typically use wider interfaces.

While the included miniCD contains drivers, modern systems often lack optical drives. Fortunately, Marvell provides current drivers for Windows 10/11 and Linux through their website. macOS users won't need additional drivers since this chipset is native to Mac systems, though they're likely better served by alternative solutions.

Installation

Installing the NIC is physically straightforward, but slot selection requires attention: the critical factor is making sure the slot actually runs at PCIe 4.0. For instance, the x1 slot on my ASRock B650M PG Riptide only supports PCIe 3.0, unlike the PCIe 4.0x1 slots found on boards like the Gigabyte X570 Master. The NIC can also be installed in larger PCIe 4.0 slots (x4, x8, or x16) while still using just a single lane.

💡
If you intend to run this NIC in a PCIe 3.0x1 slot, expect about 7-8Gbps at best – roughly three-quarters of full 10GbE speed (my own PCIe 3.0x1 test later in this review landed closer to 6.5Gbps). A PCIe 4.0 slot is required for full performance.
💡
[Caution] Because a PCIe x1 card has far less connector area holding it in the slot than x4, x8, and x16 cards, DO NOT skip the bracket screw that secures the card to the case. A heavy Cat6A cable could pull the card out of the slot and damage both the card and the system.
NIC in x1 slot
NIC in x16 (electrically x4) slot

After physical installation, I tested the NIC on a Windows 10 LTSC system, which predictably didn't include native drivers.

Windows 10 LTSC - No Drivers

The driver installation process through Marvell's website was seamless.

Marvell's Drivers Download Page

Device Manager confirmed successful installation, and verification through system tools showed the NIC operating at PCIe 4.0 speeds.

Drivers installed successfully
Working at PCIe 4.0

With that, let’s get to our testing.

Testing

Test Platform:

  • CPU: AMD Ryzen 7 7700X
  • Motherboard: ASRock B650M PG Riptide
  • RAM: G.Skill FlareX DDR5-5600 32GB
  • Test Suite: iperf3 3.17.1 (win64)
  • Test Server: M1 Mac Mini with Thunderbolt 10G NIC, point-to-point

Validation:

Initial testing with basic iperf commands confirmed the NIC's advertised 10GbE capabilities while operating on PCIe 4.0x1.
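For reference, the baseline runs were plain iperf3 invocations along these lines (the server address is a placeholder for the point-to-point link):

    # On the test server (the M1 Mac mini):
    iperf3 -s

    # On the Windows client; 10.0.0.2 is a placeholder address
    iperf3 -c 10.0.0.2

    # Add -R to reverse direction and exercise the receive path as well
    iperf3 -c 10.0.0.2 -R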

Multi-Stream Test:

Four Streams, 5 Minutes

Running four parallel iperf streams for five minutes demonstrated impressive performance, with the NIC moving 331GB of data. Thermal performance remained well controlled, with temperatures reaching only 39°C (15°C above ambient). Do keep in mind that a two-slot GTX 1080 Ti was sitting right next to it.
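For anyone reproducing this, the multi-stream runs only add the parallel-stream and duration flags; the longer and heavier tests described below differ only in those two numbers (server address again a placeholder):

    # Four parallel streams for five minutes
    iperf3 -c 10.0.0.2 -P 4 -t 300

    # Fifteen-minute and eight-stream variants used later in this review
    iperf3 -c 10.0.0.2 -P 4 -t 900
    iperf3 -c 10.0.0.2 -P 8 -t 300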

Four Streams, 15 Minutes

Extended testing maintained consistent performance, transferring 994GB over 15 minutes. Temperature stabilized at 16°C above ambient, suggesting this represents the NIC's thermal ceiling under sustained load.

Eight Streams, 5 Minutes

For the "torture test," we ran eight parallel streams to fully saturate the 10GbE link as much as possible – a demanding scenario typically reserved for server-grade equipment. The NIC handled this challenge with remarkable composure, maintaining stable performance throughout. CPU utilization remained remarkably low, with the NIC consuming less than 1% CPU resources while the iperf process utilized approximately 2%.

Proxmox Usage

After a fresh installation of Proxmox Virtual Environment 8.3.2 (kernel version 6.8.12-5), the NIC works out of the box.

lspci output
iperf3 with four streams and five minutes
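If you want to confirm the negotiated link on the Proxmox host yourself, lspci is all you need. A minimal sketch, assuming the card enumerated at 01:00.0 (use the first command to find your actual address):

    # Locate the NIC; it should show up as an Aquantia/Marvell AQC113 Ethernet controller
    lspci | grep -i ethernet

    # LnkSta should report "Speed 16GT/s, Width x1" in a PCIe 4.0 slot
    # (and 8GT/s in a PCIe 3.0 slot)
    lspci -vv -s 01:00.0 | grep -iE 'lnk(cap|sta)'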
💡
Aquantia notes in their driver documentation that if the intended use case for this NIC is routing or bridging, the LRO (Large Receive Offload) feature must be turned off. [Source]

If you are using it as a Proxmox bridge device and run into problems, refer to the instructions in the source document to disable LRO; a rough sketch of the relevant commands follows.
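As a sketch of what that involves (the interface name is an example, and the linked document remains the authoritative reference):

    # Check whether LRO is currently on
    ethtool -k enp1s0 | grep large-receive-offload

    # Disable it for the current boot
    ethtool -K enp1s0 lro off

    # To persist it across reboots on Proxmox, one option (an assumption about
    # your setup) is a pre-up line in /etc/network/interfaces:
    #   pre-up ethtool -K enp1s0 lro off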

PCIe 3.0x1 Performance

This is what performance looked like with the card in the PCIe 3.0x1 slot on my B650 motherboard: bandwidth tops out at about 6.5Gbps.

The same four streams, five minutes test, with PCIe 3.0x1 bandwidth

Conclusion

The IOCREST PCIe 4.0x1 10GbE NIC represents a significant milestone in consumer networking hardware. By leveraging the AQC113 controller and PCIe 4.0's enhanced bandwidth, it delivers full 10GbE performance through a single PCIe lane – an achievement that required four lanes in the PCIe 3.0 era. Throughout our testing, the NIC demonstrated remarkable stability and performance, handling intensive multi-stream workloads without breaking a sweat. Perhaps most impressively, it maintained cool operating temperatures and minimal CPU utilization even under sustained heavy loads.

What makes this product particularly noteworthy isn't just its technical achievements, but its broader implications for system builders and enthusiasts. In an era where PCIe lanes are increasingly precious resources, especially in consumer platforms, the ability to access 10GbE networking through a single PCIe 4.0 lane opens up new possibilities for system configuration and expansion. Whether you're a content creator needing high-speed network storage access, a home lab enthusiast, or simply someone looking to future-proof their network infrastructure, this NIC delivers quasi server-grade networking capabilities without the traditional PCIe lane tax. It's a testament to how modern standards like PCIe 4.0 can be leveraged to break through long-standing technical constraints, making high-performance networking more accessible and practical for mainstream users.

If you are interested in the product, you can find it in IOCREST’s Store on AliExpress. I am not affiliated with AliExpress and I don’t earn a commission from them.