New Windows-native NVMe driver benchmarks reveal performance gains of up to 64.89%: faster random reads and markedly better CPU efficiency

(Image credit: Getty Images)

Microsoft's native NVMe driver will make the best SSDs even faster. Originally introduced in Windows Server 2025, the driver also brings its performance gains directly to consumer Windows 11 via simple registry edits. StorageReview has put the new NVMe driver through its paces in its native habitat, and the results are eye-popping for any storage enthusiast.

The new driver also delivers a dramatic reduction in 4K and 64K random read latency, enabling faster response times across demanding workloads. By improving both bandwidth and latency, it lifts performance most visibly in latency-sensitive workloads.


Just as important, the driver reduces processor usage during sequential read and write operations regardless of block size. By optimizing data transfers, it lowers processor overhead, freeing up resources for other demanding workloads or background tasks. A side benefit is lower power consumption, which matters to both mainstream consumers and enterprises.

StorageReview's test bench consisted of two 128-core AMD EPYC 9754 (codenamed Bergamo) processors, 768GB of DDR5-4800 memory, and 16 Solidigm P5316 30.72TB PCIe 4.0 SSDs in a JBOD configuration. The publication ran FIO benchmarks on Windows Server 2025 (OS Build 26100.32370).
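StorageReview has not published its exact FIO job files, so the parameters below are illustrative assumptions rather than the publication's actual settings; a representative 4K random-read job on Windows might look like this:

```ini
; Hypothetical FIO job file -- the engine, queue depth, job count, and
; target drive are assumptions, not StorageReview's actual configuration.
[global]
ioengine=windowsaio
direct=1
time_based
runtime=60
group_reporting

; 4K random reads at high queue depth across several workers
[randread-4k]
rw=randread
bs=4k
iodepth=32
numjobs=8
filename=\\.\PhysicalDrive1
```

Saved as `randread-4k.fio`, the job would be launched with `fio randread-4k.fio`; throughput and latency figures like those in the tables below come straight from FIO's summary output.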

Microsoft Native NVMe Driver Performance

| Benchmark | Non-Native Driver | Native Driver | Improvement |
| --- | --- | --- | --- |
| 4K Random Read (GiB/s) | 6.1 | 10.058 | +64.89% |
| 64K Random Read (GiB/s) | 74.291 | 91.165 | +22.71% |
| 64K Sequential Read (GiB/s) | 35.596 | 35.623 | +0.08% |
| 128K Sequential Read (GiB/s) | 86.791 | 92.562 | +6.65% |
| 64K Sequential Write (GiB/s) | 44.67 | 50.087 | +12.13% |
| 128K Sequential Write (GiB/s) | 50.477 | 50.079 | -0.79% |

According to StorageReview's benchmarks, random read performance saw the most significant gains, with 4K and 64K read speeds increasing by 64.89% and 22.71%, respectively. Sequential 64K reads remained within the margin of error, while 128K sequential reads improved by 6.65% over the non-native driver.

In terms of sequential write performance, using a 64K block size delivered a notable 12.13% increase. Raising the block size to 128K provided no additional benefit, however, as results remained virtually unchanged (-0.79%).
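The improvement percentages in the throughput table follow directly from the two driver columns; a quick check of the arithmetic, using the values reported above:

```python
# Recompute the "Improvement" column from StorageReview's throughput
# numbers (GiB/s): relative change of the native driver vs. the legacy path.
results = {
    "4K Random Read":        (6.1,    10.058),
    "64K Random Read":       (74.291, 91.165),
    "64K Sequential Read":   (35.596, 35.623),
    "128K Sequential Read":  (86.791, 92.562),
    "64K Sequential Write":  (44.67,  50.087),
    "128K Sequential Write": (50.477, 50.079),
}

for name, (non_native, native) in results.items():
    change = (native - non_native) / non_native * 100
    print(f"{name}: {change:+.2f}%")
```

Running this reproduces the table's figures, including the standout +64.89% for 4K random reads and the slight -0.79% regression for 128K sequential writes.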

| Benchmark | Non-Native Driver | Native Driver | Improvement |
| --- | --- | --- | --- |
| 4K Random Read Latency (ms) | 0.169 | 0.104 | -38.46% |
| 64K Random Read Latency (ms) | 0.239 | 0.207 | -13.39% |
| 64K Sequential Write Latency (ms) | 0.399 | 0.558 | +39.85% |
| 128K Sequential Write Latency (ms) | 1.022 | 1.149 | +12.43% |

Latency testing yielded mixed results. Random read latency improved significantly, with 4K and 64K read times dropping by 38.46% and 13.39%, respectively.

In contrast, sequential write latency worsened: 64K write latency increased sharply, by 39.85%. Switching to a 128K block size softens the regression, with latency rising by only 12.43%, roughly one-third of the increase seen at 64K.

| Benchmark | Non-Native Driver | Native Driver | Improvement |
| --- | --- | --- | --- |
| 64K Sequential Read CPU Usage | 44.89% | 37.11% | -7.78% |
| 128K Sequential Read CPU Usage | 61.56% | 49.56% | -12.00% |
| 64K Sequential Write CPU Usage | 70.44% | 57.78% | -12.66% |
| 128K Sequential Write CPU Usage | 58.44% | 47.33% | -11.11% |

One area where the native driver delivered consistent gains was processor usage, for sequential reads and writes alike.

For sequential reads, 64K and 128K operations cut processor utilization by 7.78 and 12 percentage points, respectively. Sequential writes showed similar gains, with 64K and 128K writes shaving off 12.66 and 11.11 points.
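Note that the CPU figures in the table are differences between two utilization percentages, so the relative savings are actually larger than the headline numbers suggest. Taking the 64K sequential write case from the table above:

```python
# CPU utilization during 64K sequential writes, from the table above.
non_native = 70.44  # % of CPU consumed with the legacy driver
native = 57.78      # % of CPU consumed with the native NVMe driver

point_drop = non_native - native               # absolute drop, in percentage points
relative_drop = point_drop / non_native * 100  # relative saving vs. the legacy driver

print(f"{point_drop:.2f} points")      # prints: 12.66 points
print(f"{relative_drop:.2f}% relative")  # prints: 17.97% relative
```

In other words, the native driver's 12.66-point reduction corresponds to roughly 18% less CPU time spent servicing the same workload.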

Microsoft's long-awaited NVMe driver is a crucial update that the company arguably should have shipped years ago. For almost a decade and a half, Windows users have been limited by an outdated storage stack that has visibly struggled to keep pace with advances in SSD technology. With PCIe 5.0 SSDs delivering unprecedented performance and PCIe 6.0 drives on the horizon, the demand for a modern storage stack has never been greater.

The native NVMe driver (nvmedisk.sys) ships in both Windows Server 2025 and Windows 11 25H2. Despite its presence, Microsoft doesn't enable it by default; it operates as an opt-in feature that users must enable via specific registry changes. Microsoft's decision to keep the driver opt-in for now appears driven by the need for broader compatibility and third-party vendor support.


Zhiye Liu
News Editor, RAM Reviewer & SSD Technician
  • ktosspl
    Welcome to 2014, windows. Linux has native clean io stack since version 3.3 without any legacy SCSI translation layer...
  • palladin9479
    ktosspl said:
    Welcome to 2014, windows. Linux has native clean io stack since version 3.3 without any legacy SCSI translation layer..

    Windows doesn't have a SCSI translation layer; that's just the author explaining it as best they can.

    The NT storage API only supported a single queue per disk device. NVMe supports multiple queues and acts more like RAM than disk storage. For a long time the Linux kernel also supported only a single queue per disk device. Not long ago I was having to balance virtual workloads across multiple LUNs for that precise reason.