Country for PR: United States
Contributor: PR Newswire New York
Friday, December 20, 2019 - 00:00
AsiaNet
AImotive's latest aiWare3P delivers superior NN acceleration for production L2-L3 automotive AI
BUDAPEST, Hungary, Dec. 19, 2019 /PRNewswire-AsiaNet/ --

--Latest release of aiWare3 hardware IP includes significantly better host CPU 
offload, lower external memory bandwidth requirements and upgraded SDK tools to 
enable scalable, low-power, low-latency solutions up to 100+ TOPS

AImotive, one of the world's leading suppliers of modular automated driving 
technologies, announced that it has begun shipment of the latest release of its 
acclaimed aiWare3 NN (Neural Network) hardware inference engine IP. The 
aiWare3P IP core incorporates new features that result in significantly 
improved performance, lower power consumption, greater host CPU offload and 
simpler layout for larger chip designs. 

Logo - https://mma.prnewswire.com/media/777482/ai_motive_landscape_logo_Logo.jpg

"Our production-ready aiWare3P release brings together everything we know about 
accelerating neural networks for vision-based automotive AI inference 
applications;" said Marton Feher, senior vice president of hardware engineering 
for AImotive. "We now have one of the automotive industry's most efficient and 
compelling NN acceleration solutions for volume production L2/L2+/L3 AI."

Each aiWare3P hardware IP core offers up to 16 TMAC/s (>32 TOPS) at 2 GHz, with 
multi-core and multi-chip implementations capable of delivering up to 50+ 
TMAC/s (>100 INT8 TOPS); a brief sketch of the TMAC-to-TOPS arithmetic follows 
the list below. The core is designed for AEC-Q100 extended temperature 
operation and includes a range of features to enable users to achieve ASIL-B 
and above certification. Key upgrades include:

    --  Enhanced on-chip data reuse and movement, scheduling algorithms and 
        external memory bandwidth management 
    --  Improvements ensuring that most NNs execute entirely within the 
        aiWare3P core, without host CPU intervention 
    --  Range of upgrades reducing external memory bandwidth requirements 
    --  Advanced cross-coupling between C-LAM convolution engines and F-LAM 
        function engines 
    --  Physical tile-based microarchitecture, enabling easier physical 
        implementation of large aiWare cores 
    --  Logical tile-based data management, enabling efficient workload 
        scalability up to the maximum 16 TMAC/s per core 
    --  Significantly upgraded SDK, including improved compiler and new 
        performance analysis tools
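
The headline throughput figures follow from simple arithmetic: a 
multiply-accumulate (MAC) is conventionally counted as two operations, so 16 
TMAC/s corresponds to 32 TOPS and 50+ TMAC/s to 100+ TOPS. The short Python 
sketch below illustrates that conversion; the function names and the 
two-ops-per-MAC convention are our own illustration, not part of the aiWare 
SDK or AImotive's tooling.

```python
# Illustrative sketch (not AImotive tooling): the arithmetic behind the quoted
# TMAC/s and TOPS figures, assuming the common convention that one MAC counts
# as two operations (a multiply plus an add). All names here are hypothetical.

OPS_PER_MAC = 2  # assumption: 1 multiply-accumulate = 2 INT8 operations


def tops_from_tmacs(tmac_per_s: float) -> float:
    """Convert tera-MACs per second to tera-operations per second."""
    return tmac_per_s * OPS_PER_MAC


def implied_macs_per_cycle(tmac_per_s: float, clock_ghz: float) -> float:
    """MAC results produced per clock cycle for a given peak rate and clock."""
    return tmac_per_s * 1e12 / (clock_ghz * 1e9)


if __name__ == "__main__":
    # Single aiWare3P core as quoted: up to 16 TMAC/s at 2 GHz -> >32 INT8 TOPS.
    print(tops_from_tmacs(16.0))             # 32.0
    print(implied_macs_per_cycle(16.0, 2.0)) # 8000.0 MACs per cycle

    # Multi-core / multi-chip configurations as quoted: 50+ TMAC/s -> 100+ TOPS.
    print(tops_from_tmacs(50.0))             # 100.0
```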

The aiWare3P hardware IP is being deployed in L2/L2+ production solutions, as 
well as in studies of advanced heterogeneous sensor applications. Customers 
include Nextchip for its forthcoming Apache5 Imaging Edge Processor, and ON 
Semiconductor for its collaborative project with AImotive to demonstrate 
advanced heterogeneous sensor fusion capabilities.

As part of its commitment to open benchmarking using well-controlled 
benchmarks that reflect real applications, AImotive will release a full update 
to its public benchmark results, based on the aiWare3P IP core, in Q1 2020.

The aiWare3P RTL will be shipping from January 2020.

Media Contacts:

Imre Dozsa
CMO
imre.dozsa@aimotive.com

Daniel M Seager-Smith
Marketing Manager
daniel.seager@aimotive.com

SOURCE: AImotive