
Deep Learning Accelerators

Micron's Deep Learning Accelerator platform combines a modular FPGA-based architecture, powered by Micron memory, with FWDNXT’s high-performance inference engine, tuned for a variety of neural networks.


Overview

Deep Learning uses neural networks to perform tasks such as machine vision, voice recognition and natural language processing. Trained models are often deployed across many devices such as cellphones to derive insight from new data – a process called inference.

Micron’s Deep Learning Accelerators enable inference tasks to be carried out much more rapidly, using much less energy, than general-purpose computers.

Developers can easily harness our technology to explore and deploy Deep Learning models using a wide range of popular open source frameworks, solving previously intractable data analytics problems from the edge to the cloud.

AI/DL Platform Overview

Fully Integrated SDK

  • Broad deep learning framework support
  • Easy-to-use toolset; software programmability
  • Ability to run multiple neural networks simultaneously

Deep Learning Compiler

  • Compiles trained networks directly to inference engine machine code
  • Underlying hardware is abstracted away
  • Feature-rich and designed to optimize for specific use cases
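The compilation flow described above can be sketched as a simple lowering pass: walk a trained network graph and emit a flat instruction stream for the inference engine, so the application never touches the hardware directly. This is an illustrative toy, not the Micron compiler; the `Layer` class, `compile_network` function, and opcode names are all assumptions made for the example.

```python
# Toy sketch of a deep-learning compiler pass: it walks a trained
# network's layer list and emits abstract "machine code" for an
# inference engine, hiding the hardware behind a small instruction set.
# All names here (Layer, compile_network, the opcodes) are illustrative
# assumptions, not the actual Micron DLA compiler API.

from dataclasses import dataclass, field

@dataclass
class Layer:
    op: str                       # e.g. "conv", "relu", "fc"
    params: dict = field(default_factory=dict)  # trained weights / shapes

# Mapping from framework-level ops to engine opcodes.
OPCODES = {"conv": "CONV2D", "relu": "ACT_RELU", "fc": "MATMUL"}

def compile_network(layers):
    """Lower a layer list to a flat instruction stream."""
    program = []
    for i, layer in enumerate(layers):
        opcode = OPCODES[layer.op]
        shape = layer.params.get("shape", "")
        program.append(f"{i:03d} {opcode} {shape}".strip())
    return program

net = [Layer("conv", {"shape": "3x3x64"}),
       Layer("relu"),
       Layer("fc", {"shape": "64x10"})]
print(compile_network(net))
```

Because the emitted program only references abstract opcodes, the same compiled artifact can target any engine that implements the instruction set, which is the point of abstracting the hardware away.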

Advanced Accelerators

  • FPGA solutions with high performance memory available for prototyping or early deployments
  • Scale with multiple cores
  • Integrated with SDK to fully utilize silicon resources

Any Neural Network. Any Framework.

Micron's DLAs are compatible with popular deep-learning frameworks and capable of running all state-of-the-art neural networks.

Various DLA neural networks and frameworks

Ease of Use

Simply change three lines of your existing code to go from CPU, to GPU, to Micron DLA.

Computer screenshot of going from a CPU to DLA
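The three-line change follows the familiar device-placement pattern from frameworks like PyTorch: pick a device string, move the model, move the input. The sketch below is a self-contained stand-in, not the Micron SDK; the `"dla"` device name and the tiny `Tensor`/`Model` classes are assumptions used to illustrate the pattern.

```python
# Illustrative sketch of the "change three lines" pattern: only the
# device string, the model placement, and the input placement change
# when moving from CPU to GPU to DLA. The classes below are stand-ins;
# the real SDK's device handles and API may differ.

class Tensor:
    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device
    def to(self, device):
        return Tensor(self.data, device)   # pretend data transfer

class Model:
    def __init__(self, device="cpu"):
        self.device = device
    def to(self, device):
        self.device = device
        return self
    def __call__(self, x):
        # run inference wherever the model lives
        return Tensor([v * 2 for v in x.to(self.device).data], self.device)

device = "dla"                             # line 1: was "cpu", then "cuda"
model = Model().to(device)                 # line 2: move the model
out = model(Tensor([1, 2, 3]).to(device))  # line 3: move the input
print(out.device, out.data)
```

Everything else in the application stays unchanged, which is what makes the port from CPU prototyping to accelerator deployment a three-line edit.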

Innovation in Neural Nets: Multi-task

The DLA SDK allows you to run multiple neural networks on one DLA device, or spread one neural network across multiple DLA cores.

Image of a DLA multitasking
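The two multi-task modes can be sketched conceptually: several independent networks sharing one device's cores, or one network's layers spread across cores. The sketch below uses a plain Python thread pool as a stand-in for the core pool; the `run_network` function and layer assignment are assumptions, not the DLA SDK.

```python
# Conceptual sketch of the two multi-task modes: several networks on
# one device, or one network's layers spread over several cores.
# The thread pool and run_network() are stand-ins, not the DLA SDK.

from concurrent.futures import ThreadPoolExecutor

def run_network(name, x):
    """Pretend to run one whole network on one core."""
    return f"{name}({x})"

# Mode 1: two independent networks share one device's core pool.
with ThreadPoolExecutor(max_workers=2) as cores:
    results = list(cores.map(run_network, ["resnet", "yolo"], [1, 2]))
print(results)  # one result per network

# Mode 2: one network's layers assigned to different cores (chained
# serially here for simplicity; real hardware would overlap stages).
layers = [lambda v: v + 1, lambda v: v * 3]
x = 1
for core_id, layer in enumerate(layers):
    x = layer(x)   # each layer runs on its assigned core
print(x)
```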

Custom Compiler

The Micron DLA comes bundled with a state-of-the-art compiler that converts neural networks into machine code for the DLA. A runtime component then executes your applications.

Chart of DLA custom compiler
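The compile-then-run split described above can be sketched as follows: the compiler emits an instruction stream once, and a small runtime interprets that stream for each new input. The `Runtime` class and opcode names are illustrative assumptions, not the actual DLA runtime.

```python
# Toy sketch of the compile-then-run split: a network is compiled to an
# instruction list once, and a small runtime executes it per input.
# The Runtime class and opcode names are illustrative assumptions.

class Runtime:
    """Interprets a compiled instruction stream over an input value."""
    OPS = {"ADD1": lambda v: v + 1, "DOUBLE": lambda v: v * 2}

    def __init__(self, program):
        self.program = program    # compiled once, reused across inputs

    def run(self, x):
        for opcode in self.program:
            x = self.OPS[opcode](x)
        return x

rt = Runtime(["ADD1", "DOUBLE"])   # "compile" once
print(rt.run(3), rt.run(10))       # execute many times
```

Separating compilation from execution is what lets one compiled artifact serve many inference requests without repeating the lowering work.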

Advanced Accelerators

AC Series HPC Modules

Our business card-sized, high-performance computing (HPC) modules are the foundational building blocks for a truly scalable system based on the PCI Express® (PCIe®) standard.

HPC Single-Board Accelerators

Highest performance. Our single-board compute solutions combine FPGA-based compute power with our fastest memory technologies to meet the most challenging system requirements.

Find out more about Deep Learning Accelerators

Contact us