Flex Logix

Flex Logix Technologies, Inc.

Abstract

We offer neural inferencing co-processor chips, neural inferencing semiconductor IP/software, and eFPGA semiconductor IP/software. Our products are based on our revolutionary, patented interconnects: XFLX, ArrayLinx and RAMLinx.

nnMAX neural inferencing offers 1 to >100 TOPS: our architecture delivers throughput at 10x less cost/power than alternatives because we achieve high MAC utilization, so we need less silicon, and we do it with much less DRAM bandwidth, so we spend fewer dollars and watts on DRAMs. As well, we do this with batch=1, critical for edge applications. nnMAX is available now for TSMC16FFC/12FFC. The nnMAX Compiler programs nnMAX directly from TensorFlow/Caffe.
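The link between MAC utilization and silicon/power cost can be made concrete with back-of-the-envelope arithmetic. The sketch below uses illustrative, assumed numbers (MAC counts, clock rate, utilization figures), not Flex Logix specifications:

```python
# Illustrative throughput model: effective TOPS scales with MAC count,
# clock rate, and utilization, so a design with higher utilization needs
# proportionally fewer MACs (less silicon) to hit the same target.
# All numbers below are hypothetical assumptions for illustration.

def effective_tops(num_macs, clock_ghz, utilization):
    """Effective TOPS = MACs x 2 ops/MAC x clock (GHz) x utilization / 1000."""
    return num_macs * 2 * clock_ghz * utilization / 1e3

def macs_needed(target_tops, clock_ghz, utilization):
    """MAC count required to sustain a target TOPS at a given utilization."""
    return target_tops * 1e3 / (2 * clock_ghz * utilization)

if __name__ == "__main__":
    # A hypothetical 4096-MAC array at 1 GHz:
    low = effective_tops(4096, 1.0, 0.25)   # 25% utilization -> ~2.05 TOPS
    high = effective_tops(4096, 1.0, 0.75)  # 75% utilization -> ~6.14 TOPS
    print(low, high)
    # For the same 6-TOPS target, 75% utilization needs 3x fewer MACs
    # than 25% utilization:
    print(macs_needed(6.0, 1.0, 0.25) / macs_needed(6.0, 1.0, 0.75))
```

The same reasoning applies to DRAM bandwidth: keeping MACs busy from on-chip SRAM means fewer external memory transfers per inference, which is where the cost and power savings come from.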

Our InferX X1 co-processor chip combines four nnMAX cores, 8 MB SRAM, PCIe and LPDDR interfaces and GPIO, and provides higher throughput than existing edge inference chips, close to Data Center class inference cards. The InferX X1 chip requires only a single DRAM due to the high MAC utilization provided by the nnMAX architecture. In addition to the nnMAX Compiler, we provide a driver to ease integration of X1, either as a standalone chip or on a PCIe card.

EFLX eFPGA offers 1K to >250K LUT4 eFPGA arrays with DSP and RAM options. Our software can map Xilinx netlists onto our architecture, so you can get started immediately. We were TSMC’s first eFPGA IP Alliance Partner. We have working silicon on TSMC 40/28/16/12 and we are in design for TSMC 7/7+. Our EFLX Compiler has a Xilinx-like GUI. We have multiple customers with working silicon and in design, with more in the pipeline.

See more information at: www.flex-logix.com

Documents: Product overviews, briefs and application notes can be found at the bottom of these web pages:

nnMAX and InferX X1: https://flex-logix.com/inference/
EFLX embedded FPGA: https://flex-logix.com/efpga/

Slack: We will be available to chat with you anytime between 8:00 AM and 6:00 PM, Monday and Tuesday, August 17th and 18th.

Live Meetings: We will be online to talk with you live between 8:00 AM and 6:00 PM, Monday and Tuesday, August 17th and 18th.

We will also be demonstrating our nnMAX software producing inference model performance estimates, a binary neural network model running in embedded FPGA, and a modular approach to easily swap FPGA-based accelerators in and out of an SoC.

Live Meeting Links:

Monday, Aug 17th Demos:

1:30 PM: nnMAX Software Model Performance Demonstration

4:30 PM: Binary Neural Network in EFLX Embedded FPGA Demonstration

Tuesday, Aug 18th Demos:

10:00 AM: Modular Embedded FPGA Demo for Easy Accelerator Re-programmability

4:00 PM: nnMAX Software Model Performance Demonstration