www.design-reuse-embedded.com
9 IP results for "Artificial Intelligence Processor IP -> Accelerator"

1
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

2
AI Accelerator: Neural Network-specific Optimized 1 TOPS
The Expedera Origin™ E1 is a family of Artificial Intelligence (AI) processing cores individually optimized for a subset of neural networks commonly used in home appliances, edge nodes, and other smal...

3
Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer’s model to enable virtually any class of neural net...

4
High-Performance Edge AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer’s model to enable virtually any class of neural net...

5
NMP-300 - Lowest Power and Cost End Point Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer's model to enable virtually any class of neu...

6
NMP-500 - Performance Efficiency Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer's model to enable virtually any class of neu...

7
NMP-700 - Performance Accelerator for Edge Computing
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer's model to enable virtually any class of neu...
8
Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer’s model to enable virtually any class of neural net...

9
EFFICIERA: Ultra-Low Power AI Inference Accelerator
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference processing that runs as a circuit on FPGA or ASIC devices. The extremely low bit quantization technology min...



© 2024 Design And Reuse

All Rights Reserved.
