You asked, we answered: Q&A on the NVMe Management Interface (NVMe-MI)

Blog

By Austin Bolen and Peter Onufryk

Rumor has it NVM Express technology is set to change the flash storage market, but how will each of the solutions in the portfolio of NVMe specifications play a part? In this post, we will delve into the NVMe Management Interface (NVMe-MI) specification, which was officially released in 2015 to improve the management of solid state drives.

Our latest webcast covered all things NVMe-MI, from an overview of the architecture to a walkthrough of the command set. If you missed it, you can still view it on-demand. There were so many thought-provoking questions from the live audience during the webinar that we did not have enough time to get through them all. That is why we decided to author this follow-up blog post to make sure the rest of your questions get answered.

Will the client market adopt NVMe-MI technology?
NVMe-MI technology targets enterprise and hyperscale applications and is not currently utilized in client applications. There is nothing about NVMe-MI that would technically prevent its use in client applications; the client market simply does not need the same level of management capability.

How does NVMe-MI support NVMe over Fabrics (NVMe-oF)?
NVMe-MI 1.1 adds the ability to tunnel NVMe-MI commands over an NVMe Admin queue. NVMe-oF supports the NVMe Admin queue, so this mechanism can be used to send NVMe-MI commands over NVMe-oF. However, the NVMe-MI commands were designed for PCIe-connected NVMe storage device field-replaceable units. Some commands and parameters will not apply to an NVMe-oF implementation, so how NVMe-MI commands behave in an NVMe-oF implementation is currently outside the scope of the NVMe-MI specification.
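For illustration, here is a minimal sketch of how an in-band NVMe-MI message might be issued from Linux user space through the NVMe admin passthrough ioctl. The NVMe-MI Send (0x1D) and NVMe-MI Receive (0x1E) admin opcodes come from the NVMe base specification; the device path, buffer size, and the packing of the NVMe-MI message into the command are simplified assumptions here, not a complete implementation.

```c
/* Minimal sketch, assuming Linux and the NVMe admin passthrough ioctl.
 * Opcodes 0x1D (NVMe-MI Send) and 0x1E (NVMe-MI Receive) are the admin
 * commands the NVMe base specification defines for this tunneling; the
 * packing of the NVMe-MI message into the data buffer and command dwords
 * (cdw10..cdw15) follows NVMe-MI 1.1 and is elided here. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

#define NVME_ADMIN_MI_SEND    0x1D
#define NVME_ADMIN_MI_RECEIVE 0x1E

int main(void)
{
    int fd = open("/dev/nvme0", O_RDWR);  /* controller character device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint8_t buf[4096] = {0};  /* would carry the encapsulated NVMe-MI message */

    struct nvme_admin_cmd cmd = {0};
    cmd.opcode   = NVME_ADMIN_MI_RECEIVE;     /* fetch an NVMe-MI response */
    cmd.addr     = (uint64_t)(uintptr_t)buf;  /* data buffer */
    cmd.data_len = sizeof(buf);
    /* cdw10..cdw15 would hold the NVMe-MI-specific fields per the spec */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0)
        perror("NVME_IOCTL_ADMIN_CMD");
    else
        printf("tunneled NVMe-MI command completed, result=0x%x\n", cmd.result);
    return 0;
}
```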

Is emulation available in NVMe-MI?
We are not aware of any pure software emulations of out-of-band NVMe-MI. Some analyzers and exercisers can emulate NVMe-MI Management Controllers (host-BMC side) and Management Endpoints (NVMe-endpoint side).

NVMe-MI 1.1 adds the ability to tunnel NVMe-MI commands in-band, which will allow existing in-band NVMe emulation implementations (e.g., QEMU) to support in-band NVMe-MI.

Since NVMe-MI can be used for enclosure management, are there non-NVMe based endpoints that can make use of some of the management features in the specification?

NVMe-MI 1.1 will support managing enclosure controllers, along with elements in the enclosure such as fans, LEDs, temperature sensors, etc. Non-NVMe PCIe technology endpoints (such as NICs, SAS controllers, GPUs, etc.) are not currently supported by NVMe-MI; however, this could change in the future.

Why would someone send an already available in-band NVMe command as an in-band NVMe-MI command?
NVMe-MI 1.1 will not support sending existing NVMe commands as in-band NVMe-MI commands. Instead, NVMe-MI 1.1 will only support sending NVMe-MI commands in-band; those commands cannot currently be sent in-band at all.

Clarify the proposed chassis management under the NVMe-MI scope and provide examples.
NVMe-MI 1.0 focused on the management of NVMe solid state drives. As the adoption of NVMe has grown, a need has emerged to standardize the management of enclosures that contain NVMe solid state drives. NVMe enclosure management provides the capabilities necessary to monitor and manage enclosure elements such as power supplies, cooling devices, and indicators, as well as the NVMe solid state drives contained within an enclosure. Examples of NVMe enclosures range from simple NVMe JBOFs (just a bunch of flash) to high-end storage arrays.

Is it feasible to execute out-of-band NVMe-MI over a very slow I2C bus? Is it mandatory to move to a higher rate of 1M/3.2M?
Yes, this is feasible, and solutions are shipping today; moving to a higher clock rate is not mandatory. Most packets in NVMe-MI are less than 64 bytes and are handled efficiently, and there are shipping implementations that run the clock at 100 kHz. Only a few commands transfer more data than this, such as reading large log pages or downloading large firmware images. Implementations that use these features would want to use a higher clock rate on SMBus/I2C or use PCIe VDM instead. NVMe-MI supports negotiating the SMBus/I2C frequency up to 1 MHz.
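To put rough numbers on this, the sketch below estimates raw transfer times at different bus speeds, assuming about 9 clocks per byte (8 data bits plus an ACK) and ignoring start/stop conditions and MCTP packetization overhead, so real figures will be somewhat higher.

```c
/* Back-of-envelope sketch: approximate SMBus/I2C transfer times for
 * NVMe-MI traffic. Assumes ~9 clocks per byte (8 data bits + ACK) and
 * ignores start/stop conditions, MCTP packet headers, and clock
 * stretching, so these are lower bounds. */
#include <stdio.h>

static double xfer_seconds(double bytes, double clock_hz)
{
    return bytes * 9.0 / clock_hz;  /* ~9 clocks per byte on the wire */
}

int main(void)
{
    const double clocks[] = { 100e3, 400e3, 1e6 };       /* Hz */
    const double sizes[]  = { 64, 4096, 1024 * 1024 };   /* bytes */

    for (int s = 0; s < 3; s++)
        for (int c = 0; c < 3; c++)
            printf("%8.0f bytes @ %5.0f kHz: %10.4f s\n",
                   sizes[s], clocks[c] / 1e3,
                   xfer_seconds(sizes[s], clocks[c]));
    return 0;
}
```

At 100 kHz, a 64-byte command moves in roughly 6 ms, while a 1 MB firmware image takes on the order of a minute and a half; that gap is exactly why small management commands are fine at 100 kHz but large transfers motivate the 1 MHz rate or PCIe VDM.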

NVMe-MI supports SMBus/I2C and PCIe as out-of-band interfaces. What are the pros and cons of each of these interfaces?

  • PCIe VDM offers higher bandwidth and does not require extra sideband I2C signals to be routed.
  • There can be additional security and error handling concerns with PCIe VDM that do not exist for SMBus/I2C.
  • SMBus/I2C can work in situations where PCIe does not, such as when the system is in auxiliary power or the PCIe link is disabled or down due to an error.
  • SMBus/I2C will typically have less interference with power management than PCIe VDM (e.g. PCIe Link must be woken up for PCIe VDM but not for SMBus/I2C).
  • Devices can be managed via SMBus/I2C after power on but cannot be managed via PCIe VDM until platform software/firmware enumerates the device.

I2C/SMBus must operate in a multi-master mode to support MCTP. Are there plans for single master mode?
No single-master mode of operation is planned for NVMe-MI over MCTP. However, Appendix A in the NVMe-MI specification defines a basic management command that operates in single-master mode.

MCTP is layered on I2C/SMBus. Is there provision for a lightweight management protocol, similar to the provision for the Simple Management Command?
The NVMe-MI protocol is lightweight for the basic commands. Typically, they are less than 64 bytes and operate well on a 100 kHz SMBus/I2C bus. Appendix A in the NVMe-MI spec also defines a lightweight basic management command that does not require any MCTP support.

Can you still use NVMe basic management command if you do not want to support MCTP?
Yes, NVMe basic management is available if MCTP is not supported.
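As a rough illustration of how little is needed, here is a hedged sketch of issuing the basic management command from Linux via i2c-dev as a plain SMBus Block Read, with no MCTP stack at all. The slave address (0x6A/0x6B in 8-bit notation, 0x35 as a 7-bit address), command code 0x00, and bus path are assumptions drawn from Appendix A as we understand it; consult the specification and your drive's documentation before relying on them.

```c
/* Hedged sketch: reading drive status with the NVMe Basic Management
 * Command (NVMe-MI Appendix A) on Linux via i2c-dev. The drive answers
 * a plain SMBus Block Read, so no MCTP support is required.
 * Build with -li2c (libi2c-dev). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

#define I2C_BUS    "/dev/i2c-1"  /* assumption: SMBus segment wired to the drive */
#define DRIVE_ADDR 0x35          /* assumed 7-bit form of the Appendix A default */

int main(void)
{
    int fd = open(I2C_BUS, O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, DRIVE_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }

    __u8 data[32];  /* SMBus block reads return at most 32 bytes */
    int n = i2c_smbus_read_block_data(fd, 0x00, data);
    if (n < 0) {
        perror("block read");
        return 1;
    }

    /* Appendix A defines the meaning of the returned bytes (status
     * flags, SMART warnings, composite temperature, drive life used). */
    printf("basic management command returned %d bytes:", n);
    for (int i = 0; i < n; i++)
        printf(" %02x", data[i]);
    printf("\n");
    return 0;
}
```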

Is the VPD discussed in this webcast related to the VPD as defined in the PCIe Base Specification? Or is the latter part of the NPEM implementation for simple management?
The VPD in NVMe-MI is unrelated to the VPD defined in the PCIe Base Specification. The VPD in NVMe-MI follows the IPMI Platform Management FRU Information Storage Definition specification. The VPD in the PCIe Base Specification is also unrelated to NPEM.

Some vendors do not support IPMI FRU VPD format. Is this a requirement for NVMe-MI?
Yes, compliance to the IPMI Platform Management FRU Information Storage Definition specification is required by NVMe-MI.
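For readers unfamiliar with that format, the sketch below parses and validates the 8-byte common header that the IPMI Platform Management FRU Information Storage Definition places at the start of the VPD. The header values used here are made up for illustration.

```c
/* Hedged sketch: validating the IPMI FRU common header at the start of
 * the VPD. Per the FRU specification, area offsets are stored in
 * multiples of 8 bytes and the header carries a zero checksum (all
 * 8 bytes sum to 0 mod 256). */
#include <stdint.h>
#include <stdio.h>

struct fru_common_header {
    uint8_t format_version;     /* 0x01 in the low nibble for the current format */
    uint8_t internal_offset;    /* internal use area, in 8-byte units */
    uint8_t chassis_offset;     /* chassis info area */
    uint8_t board_offset;       /* board info area */
    uint8_t product_offset;     /* product info area */
    uint8_t multirecord_offset; /* multirecord area */
    uint8_t pad;                /* always 0x00 */
    uint8_t checksum;           /* makes the 8 bytes sum to 0 mod 256 */
};

static int fru_header_valid(const struct fru_common_header *h)
{
    const uint8_t *b = (const uint8_t *)h;
    uint8_t sum = 0;
    for (int i = 0; i < 8; i++)
        sum += b[i];
    return (h->format_version & 0x0F) == 0x01 && sum == 0;
}

int main(void)
{
    /* Example header: product info area at byte offset 5 * 8 = 40. */
    struct fru_common_header h = { 0x01, 0, 0, 0, 5, 0, 0, 0 };
    h.checksum = (uint8_t)(0x100 - (0x01 + 5));  /* complete the zero checksum */

    printf("header %s, product area at byte %u\n",
           fru_header_valid(&h) ? "valid" : "invalid",
           h.product_offset * 8u);
    return 0;
}
```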

If you have more questions about NVMe-MI or more general NVMe technology, please contact us to learn more. Don’t forget, you can still view the full webcast on-demand and join us for our upcoming webcast The Evolution and Future of NVMe!