HOBFLOPS CNNS: HARDWARE OPTIMIZED BITSLICE-PARALLEL FLOATING-POINT OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS

Convolutional neural networks (CNNs) are typically trained using 16- or 32-bit floating-point (FP), and researchers show that low-precision FP can be highly effective for inference. Low-precision FP can be implemented in field programmable gate array (FPGA) and application-specific integrated circuit (ASIC) accelerators, but existing processors do not generally support custom-precision FP. We propose hardware optimized bitslice-parallel floating-point operators (HOBFLOPS), a method of generating efficient custom-precision emulated bitslice-parallel software FP arithmetic. We generate custom-precision FP routines optimized using a hardware synthesis design flow to create circuits. We provide standard cell libraries matching the bitwise operations on the target microprocessor architecture, and a code generator to translate the hardware circuits to bitslice software equivalents. We exploit bitslice parallelism to create a very wide (32–512 element) vectorized CNN convolution. HOBFLOPS multiply-accumulate (MAC) performance in CNN convolution on Arm and Intel processors is compared to Berkeley's SoftFP16 equivalent MAC: HOBFLOPS16 outperforms SoftFP16 by 8× on Intel AVX512. HOBFLOPS offers arbitrary-precision FP with custom range and precision; e.g., HOBFLOPS9 performs at 6× the performance of HOBFLOPS16 on Arm Neon. HOBFLOPS allows researchers to prototype different levels of custom FP precision in the arithmetic of software CNN accelerators. Furthermore, HOBFLOPS fast custom-precision FP CNNs may be valuable in cases where memory bandwidth is limited.


Introduction
Many researchers have shown that CNN inference is possible with low-precision integer [1] and floating-point (FP) [2,3] arithmetic. Almost all processors provide excellent support for 8-bit integer values, but not for bit-level custom-precision FP types, such as 9-bit FP. Typically, processors support a small number of relatively high-precision FP types, such as 32- and 64-bit. However, there are good reasons why we might want to implement custom-precision FP on regular processors. Researchers and hardware developers may want to prototype different levels of custom FP precision that might be used for arithmetic in CNN accelerators. Furthermore, fast custom-precision FP CNNs in software may be valuable in their own right, particularly in cases where memory bandwidth is limited.
While the choice of custom FP in general purpose central processing units (CPUs) is limited, FP simulators, such as Berkeley's excellent SoftFP [4], are available. These simulators support certain ranges of custom FP, such as 16-, 32-, 64-, 80- and 128-bit, with corresponding fixed-width mantissas and exponents.
We propose hardware optimized bitslice-parallel floating-point operators (HOBFLOPS) which offers arbitrary-precision FP arithmetic, using software bitslice parallel [5] arithmetic, emulating any required precision FP arithmetic at any bit-width of mantissa or exponent. Our goal is to use bit-slice packing to pack the vector registers of the microprocessor efficiently. Also, we exploit the bitwise logic optimization strategies of a commercial hardware synthesis tool to optimize the associated bitwise arithmetic that forms a multiplier and adder. We generate efficient arbitrary-precision software FP emulation types and arithmetic, optimized using hardware tools and converted to the target processor's bitwise logic operators.
Existing methods of emulating FP arithmetic in software primarily use existing integer instructions to implement the steps of FP computation. This can work well for large, regular-sized FP types such as FP16, FP32, or FP64. Berkeley offers SoftFP emulation [4] for use where, for example, only integer-precision instructions are available. SoftFP emulation supports 16- to 128-bit arithmetic, but does not support low bit-width custom-precision FP arithmetic or parallel arithmetic. HOBFLOPS offers fine-grained customizable-precision mantissa and exponent FP bitwise arithmetic computed in parallel. To evaluate performance, we benchmark HOBFLOPS16 parallel MACs against Berkeley's SoftFP16 MulAdd in CNN convolution implemented with Arm and Intel scalar and vector bitwise instructions. We show HOBFLOPS offers significant performance boosts compared to SoftFP. We then evaluate HOBFLOPS8-HOBFLOPS16e parallel arbitrary-precision performance. We argue that our software bitslice parallel FP is both more efficient and offers greater bit-level customizability than conventional software FP emulation.
We make the following contributions:
• We present a full design flow from a VHDL FP core generator to arbitrary-precision software bitslice parallel FP operators, optimized using hardware design tools with our logic cell libraries, and our domain-specific code generator.
• We demonstrate how 3-input Arm Neon bitwise instructions, e.g., SEL (multiplexer), and AVX512 bitwise ternary operations can be used in standard cell libraries to improve the efficiency of the generated code significantly.
• We present an algorithm for implementing CNN convolution with the very wide vectors that arise in bitslice parallel vector arithmetic.
• We evaluate HOBFLOPS16 on Arm Neon and Intel AVX2 and AVX512 processors and find HOBFLOPS achieves approximately 0.5×, 2.5×, and 8× the performance of Berkeley SoftFP16 respectively.
• We evaluate various widths of HOBFLOPS from HOBFLOPS8-HOBFLOPS16e and find, e.g., HOBFLOPS9 performs at approximately 45 million MACs/second on an Arm Neon processor, around 6× that of HOBFLOPS16, and 2 billion MACs/second on an Intel AVX512 platform, around 5× that of HOBFLOPS16. The increased performance is due to:
  – bitslice parallelism of the very wide vectorization of the MACs of the CNN;
  – our efficient code generation flow.
The rest of this article is organized as follows. Section 2 gives background on other CNN accelerators' use of low-precision arithmetic types. Section 3 outlines bitslice parallel operations, introduces HOBFLOPS, shows the design flow and the types supported, and explains how to implement arbitrary-precision HOBFLOPS FP arithmetic in a convolution layer of a CNN. Section 4 shows the results of our comparisons of HOBFLOPS16 to Berkeley SoftFP16 on Intel's AVX2 and AVX512 processors, showing significant increases in performance. We also show results for HOBFLOPS8-HOBFLOPS16e emulation implemented on Arm Neon, Intel AVX2 and AVX512 processors. We conclude with Section 5.

Background
Reduced-precision CNN inference, particularly of CNN weight data, reduces the cost of memory accesses, which dominate energy consumption. Energy and area costs are also reduced in ASICs and FPGAs [6].
Kang et al. [7] investigate short, reduced FP representations that do not support not-a-numbers (NaNs) and infinities. They show that shortening the width of the exponent and mantissa reduces the computational complexity within the multiplier logic. They compare fixed-point integer representations of varying widths up to 8 bits against their short FP in various CNNs, and show around a 1% drop in classification accuracy with more than a 60% reduction in ASIC implementation area. Their work stops at the byte boundary, leaving other arbitrary ranges open to investigation.
For their Project Brainwave [2,3], Microsoft proposes MS-FP8 and MS-FP9, 8-bit and 9-bit FP arithmetic that they exploit in a quantized CNN. Microsoft alters the 8-bit Minifloat that follows the IEEE-754 convention (1 sign bit, 4 exponent bits, 3 mantissa bits) [8] by creating MS-FP8, with 1 sign bit, 5 exponent bits, and 2 mantissa bits. MS-FP8 gives a larger representative range due to the extra exponent bit, but lower precision than Minifloat, caused by the reduced mantissa. MS-FP8 more than doubles the performance compared to 8-bit integer operations, with negligible accuracy loss compared to full float. To improve the precision, they propose MS-FP9, which increases the mantissa to 3 bits and keeps the exponent at 5 bits. Their later work [3] uses a shared exponent with MS-FP8 / MS-FP9, i.e., one exponent pair used for many mantissae, sharing the reduced mantissa multipliers, something this work does not investigate. Their work remains at 8- and 9-bit for FPGA implementation and leaves other bit precisions and ranges open for investigation.
Rzayev et al.'s DeepRecon work [9] analyzes the computation costs of deep neural networks (DNNs) and proposes a reconfigurable architecture to efficiently utilize computation and storage resources, thus allowing DNN acceleration. They pay particular attention to comparing the prediction error of three CNNs at different fixed-point and FP precisions. They demonstrate that, on the three CNNs, FP is one bit more efficient than the equivalent fixed bit-width representation. They also show that 8-bit FP is around 7× more energy-efficient and approximately 6× more area-efficient than 8-bit fixed-point.
The FlexFloat C++ library proposed by Tagliavini et al. [10] offers alternative FP formats with variable bit-width mantissas and exponents. They demonstrate FlexFloat with two novel FP formats, binary8 and binary16alt, and show very efficient use of these two formats in algorithms such as K-nearest neighbour (KNN) and CNNs. They do not explore arbitrary bit precisions or other optimization techniques on the proposed number formats.
Other researchers investigate optimizing different representations of FP arithmetic. Xu et al. [5] propose bitslice parallel arithmetic and present FP calculations undertaken on a fixed-point unit. Instead of storing 17-bit vectors inefficiently in 32-bit registers in the traditional sense, they store thirty-two 17-bit words transformed into bitslice parallel format in memory. Xu et al. manually construct bitwise arithmetic routines to perform integer or FP arithmetic while the vectors remain in bitslice parallel format. When coded in C/C++ and AVX2 single instruction multiple data (SIMD) instructions, they demonstrate this approach is efficient for low-precision vectors, such as 9-bit or 11-bit arbitrary FP types. Xu et al.'s work manually optimizes the arithmetic, leaving automation of the process as an interesting area for investigation.
Researchers investigate different bit precisions and representations of the FP number base. Google's tensor processing unit (TPU) ASIC [1] implements brain floating point 16-bit (bfloat16) [11], a 16-bit truncated IEEE-754 FP single-precision format. Bfloat16 preserves the dynamic range of the 32-bit format due to the 8-bit exponent. The precision of the significand is reduced from FP32's 24 bits (implicit bit included) down to 8 bits. Bfloat16 is not restricted to machine learning: Intel Nervana uses it in their network processing unit (NPU), Intel Xeon processors support AVX512 bfloat16 extensions, Intel Altera's FPGAs support it, as do Arm's ARMv8.6-A architecture and the ROCm libraries. Bfloat16 only reduces the bit-width of the mantissa and does not investigate different exponent or mantissa bit-widths. Our work addresses these arbitrary exponent and mantissa bit-widths.
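The truncation that bfloat16 performs is simple enough to sketch in a few lines of C++. The helper names below are ours, and round-towards-zero truncation is used for brevity (hardware implementations typically round to nearest even):

```cpp
#include <cstdint>
#include <cstring>

// Truncate an IEEE-754 FP32 value to bfloat16 by keeping the top 16 bits:
// the sign, the full 8-bit exponent, and 7 explicit mantissa bits.
uint16_t fp32_to_bfloat16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));   // type-pun safely
    return static_cast<uint16_t>(bits >> 16);
}

// Expand bfloat16 back to FP32 by zero-filling the low 16 mantissa bits.
float bfloat16_to_fp32(uint16_t b) {
    uint32_t bits = static_cast<uint32_t>(b) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```

Because the exponent field is untouched, any FP32 magnitude survives the round trip; only mantissa precision is lost (e.g., 3.14159f comes back as 3.140625f).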
Nvidia has proposed the new tensor float 32 (TF32) number representation [12], which has a sign bit, an 8-bit exponent and a 10-bit mantissa. This 19-bit floating-point format is an input format that truncates the IEEE FP32 mantissa to 10 bits. TF32 still produces 32-bit floating-point results to move and store in the graphics processing unit (GPU). Like Google, Nvidia does not investigate other bit-widths of FP range and precision, something our work does.

Approach
In this section, we present our approach to producing HOBFLOPS arithmetic units, which we demonstrate in a CNN convolution layer. HOBFLOPS is a method for generating efficient software-emulated parallel FP arithmetic units optimized using hardware synthesis tools. HOBFLOPS investigates reduced-complexity FP [7] that is more efficient than fixed-point [9] by considering alternative FP formats [10] and register packing with bitsliced arithmetic [5].
We produce standard cell libraries of logic gate cells supported by the bitwise logic instructions of the target microprocessor architecture: for example, the AND, OR, XOR and SEL (multiplexer) bitwise instructions of Arm Neon, and the ternary logic look-up table (LUT) bitwise instructions of Intel AVX512.
We use the ASIC synthesizer tool, Cadence Genus, in conjunction with our standard cell libraries and a small-gate-area optimization and automation script to synthesize the adders and multipliers into Verilog netlists. The open-source synthesis and technology-mapping suite, the Yosys ASIC synthesizer [14] with the ABC optimizer [15], allows us to optimize further and topologically sort the netlists with the same cell libraries.
Our custom domain-specific source-to-source generator converts the topologically sorted netlists into a C/C++ header of bitwise operations. In parallel, we convert the hardware cell libraries into the equivalent C/C++ cell library headers for the target processor instruction set architecture (ISA). We create a CNN convolution layer that includes the HOBFLOPS adder and multiplier and the cell library headers corresponding to the target ISA, and compile with G++.

Arm Neon, Intel AVX2 and AVX512 Cell Libraries
We create three Synopsys Liberty standard cell libraries supporting the equivalent hardware gate-level representations of the Arm Neon and Intel X86_64, AVX, AVX2 and AVX512 SIMD vector intrinsics. The Arm Neon SEL bitwise multiplexer instruction is a 3-input bitwise logic instruction, whereas all other Neon bitwise logic instructions modeled in the cell library are 2-input. The ternary logic LUT of the Intel AVX512 is a 3-input bitwise logic instruction that can implement any 3-input boolean function; an 8-bit immediate operand to this instruction specifies which of the 256 3-input functions should be used. We create all 256 equivalent cells in the Liberty cell library when targeting AVX512 devices. Table 1 lists the bitwise operations supported in the cell libraries for each architecture.
To demonstrate the capabilities of the cell libraries, we show an example of a single-bit full adder. Figure 2a shows a typical 5-gate full adder implemented with our AVX2 cell library; the C code function of Listing 1 demonstrates these gates in code if WIDTH is set to 1. The same full adder can be implemented in three Arm Neon bitwise logic instructions, Figure 2b, one of which is the SEL bitwise multiplexer instruction. Intel AVX512 intrinsics can implement the full adder in two 3-input bitwise ternary instructions, Figure 2c. While the inputs and outputs of the hardware gate-level circuits are single bits, these inputs and outputs are parallelized by the bit-width of the SIMD vector registers. The bitwise instructions, when converted to bitwise software operations, produce extensive parallel scaling of the arithmetic.
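As a rough software illustration of these two mappings (not the generated HOBFLOPS code itself), the full adder can be written over uint64_t words standing in for 64-lane SIMD registers; the function names and the SEL software model are our own:

```cpp
#include <cstdint>

// Bitslice full adder over 64 independent lanes packed in uint64_t words,
// using five 2-input gates (the AVX2-style mapping described for Figure 2a).
void full_adder_5gate(uint64_t a, uint64_t b, uint64_t cin,
                      uint64_t &sum, uint64_t &cout) {
    uint64_t t = a ^ b;            // half-sum
    sum  = t ^ cin;
    cout = (a & b) | (t & cin);    // majority of a, b, cin
}

// Software model of a 3-input SEL cell: pick bits of x where mask m is 1,
// bits of y where m is 0 (Arm Neon provides this as one instruction).
static inline uint64_t sel(uint64_t m, uint64_t x, uint64_t y) {
    return (x & m) | (y & ~m);
}

// With SEL, the carry collapses: when a ^ b is 1 the carry is cin,
// otherwise a == b and the carry is a. Three logic steps instead of five.
void full_adder_sel(uint64_t a, uint64_t b, uint64_t cin,
                    uint64_t &sum, uint64_t &cout) {
    uint64_t t = a ^ b;
    sum  = t ^ cin;
    cout = sel(t, cin, a);
}
```

Both variants compute the same 64 independent single-bit additions per call; the SEL variant simply needs fewer operations, which is exactly the saving the synthesizer finds when the SEL cell is available.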

Bitslice Parallel Operations
HOBFLOPS exploits bitslice parallel operations to represent FP numbers in a bitwise manner so that they can be processed in parallel. For example, many 9-bit values are transformed into bitslice parallel representation, see Figure 3a. A simple example of how nine registers of 512-bit bitslice parallel data are applied to a 512-bit wide bitslice parallel FP adder is shown in Figure 3b. Each adder instruction has a throughput of around half a clock cycle (see Intel's Intrinsics Guide for the precise throughput of SSE, AVX2 and AVX512 bitwise logic operations, and Arm's Intrinsics Reference for Arm Neon bitwise operation throughput). In this example, the adder's propagation delay is related to the degree of instruction-level parallelism and the associated load/store instructions. The number of gates in a HOBFLOPS adder or multiplier depends on the required HOBFLOPS precision; see Table 3 for examples of HOBFLOPS MAC precision, and Section 4 for the associated HOBFLOPS MAC gate counts and performance.
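The transformation to bitslice parallel format sketched in Figure 3a is essentially a bit-matrix transpose. A naive (unoptimized) C++ sketch, with hypothetical helper names, packing up to 64 k-bit values into k lane-parallel words:

```cpp
#include <cstdint>
#include <vector>

// Transform ("transpose") up to 64 scalar k-bit values into k bitslice
// words: slices[i] holds bit i of every value, one value per lane. This is
// the packing that lets one bitwise instruction advance 64 operands at once.
std::vector<uint64_t> to_bitslice(const std::vector<uint16_t> &vals, int k) {
    std::vector<uint64_t> slices(k, 0);
    for (std::size_t lane = 0; lane < vals.size() && lane < 64; ++lane)
        for (int i = 0; i < k; ++i)
            if ((vals[lane] >> i) & 1)
                slices[i] |= uint64_t(1) << lane;
    return slices;
}

// Inverse transform: recover the scalar value held in one lane.
uint16_t from_bitslice(const std::vector<uint64_t> &slices, int lane) {
    uint16_t v = 0;
    for (std::size_t i = 0; i < slices.size(); ++i)
        v |= uint16_t((slices[i] >> lane) & 1) << i;
    return v;
}
```

With 512-bit AVX512 registers in place of uint64_t, the same layout gives the 512-lane packing used in Figure 3b; the transpose cost is amortized because it is paid once per layer boundary, not per arithmetic operation.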

Design Flow
We investigate whether bitslice parallel logic can be optimized using hardware tools to reduce the hardware logic gate count or area, and the subsequent lines of bitwise software operations. We use the industry-standard hardware ASIC synthesizer, Cadence Genus, with our custom-designed logic cell libraries to synthesize and optimize the bitslice parallel FP arithmetic. We use the Yosys ASIC synthesizer [14] and ABC optimizer [15] to further optimize and topologically sort the netlist.
Taking inspiration from Microsoft's MS-FP8 / MS-FP9 [2,3] and the 8-bit Minifloat that follows the IEEE-754 2019 convention, we create single-precision and extended-precision HOBFLOPS adders and multipliers. For example, we create the single-precision HOBFLOPS8 multiplier to take two 5-bit exponent, 2-bit mantissa, 1-bit sign inputs and produce a single 5-bit exponent, 3-bit mantissa, 1-bit sign output. We also create the extended-precision HOBFLOPS8e multiplier to take two 5-bit exponent, 2-bit mantissa, 1-bit sign inputs and produce a single 5-bit exponent, 5-bit extended mantissa, 1-bit sign output.
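For illustration, a scalar decoder for an MS-FP8-style 1-sign, 5-exponent, 2-mantissa layout might look as follows. The bit layout, the IEEE-style bias of 2^(5-1)-1 = 15 and the implicit leading 1 are our assumptions for this sketch; subnormals and exception values are ignored. HOBFLOPS generates the bitwise arithmetic directly, so this scalar view only illustrates the value such a code represents:

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical scalar decoder for a 1-5-2 FP layout (sign | exponent |
// mantissa), assuming bias 15 and an implicit leading 1. Normal numbers only.
double decode_1_5_2(uint8_t bits) {
    int sign = (bits >> 7) & 0x1;
    int exp  = (bits >> 2) & 0x1F;
    int man  =  bits       & 0x3;
    double significand = 1.0 + man / 4.0;           // 1.mm in binary
    return (sign ? -1.0 : 1.0) * std::ldexp(significand, exp - 15);
}
```

With only 2 mantissa bits the representable significands are 1.0, 1.25, 1.5 and 1.75 per exponent, which is exactly the coarseness/range trade-off MS-FP8 makes against the 1-4-3 Minifloat.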
When using HOBFLOPS, any quantization method may be employed, such as those investigated by Gong et al. [16].
These quantized values need to be stored and computed during inference. We therefore set FloPoCo to the required range and precision. For comparison, Table 2 shows the range and precision of IEEE-FP8, MS-FP8 and MS-FP9, available in simulated software and FPGA hardware. These FP solutions only support specific ranges and do not allow for arbitrary exponent and mantissa values.
Details of the evaluated HOBFLOPS are in Table 3, although, other arbitrary combinations of mantissa and exponent bit-widths are supported in the flow (see Figure 1).
FloPoCo [13] generates VHDL descriptions of FP adders and multipliers of varying exponent and mantissa bit-widths for the standard and extended-precision HOBFLOPS types, see Table 3. As a performance baseline, we produce the 16-bit HOBFLOPS16 equivalents of IEEE FP16. The FloPoCo FP format differs from IEEE-754 and comprises four fields: a 2-bit exception field (01 for normal numbers); a sign bit; an exponent field wE bits wide; and a mantissa (fraction) field wF bits wide. The significand has an implicit leading 1, so the fraction field ff...ff represents the significand 1.ff...ff.
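The field layout above can be sketched as a scalar decoder. This is our own illustration, assuming the standard FloPoCo encoding (exception | sign | exponent | fraction, bias 2^(wE-1)-1); the exception codes for infinity (10) and NaN (11) are omitted for brevity:

```cpp
#include <cmath>
#include <cstdint>

// Sketch of a FloPoCo-format decoder for normal numbers and zero.
// Layout, most-significant first: 2-bit exception, sign, wE-bit exponent,
// wF-bit fraction; significand is 1.ff...ff with bias 2^(wE-1)-1.
double decode_flopoco(uint64_t bits, int wE, int wF) {
    int exc = int((bits >> (1 + wE + wF)) & 0x3);
    if (exc == 0) return 0.0;                        // exception 00: zero
    int sign = int((bits >> (wE + wF)) & 0x1);
    int exp  = int((bits >> wF) & ((uint64_t(1) << wE) - 1));
    uint64_t frac = bits & ((uint64_t(1) << wF) - 1);
    int bias = (1 << (wE - 1)) - 1;
    double significand = 1.0 + double(frac) / double(uint64_t(1) << wF);
    return (sign ? -1.0 : 1.0) * std::ldexp(significand, exp - bias);
}
```

Because the exception bits flag zero, infinity and NaN explicitly, the exponent field needs no reserved all-zero/all-one patterns, which slightly simplifies the synthesized comparator logic relative to IEEE-754.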
We configure FloPoCo to generate combinatorial plain RTL VHDL cores with a 1MHz frequency, no pipelining, and no use of hard-macro FPGA multipliers or adders. These settings ensure that FloPoCo generates reduced-area rather than reduced-latency multipliers and adders. We simulate the FP multipliers and adders in a VHDL simulator with the corresponding FloPoCo-generated test bench to confirm that the quantized functionality is equivalent to that of an IEEE-754 FP multiplier and adder.
We create Synopsys Liberty standard cell libraries to support the target processor architecture. Cadence Genus (version 16.22-s033_1), the industry-standard ASIC synthesis tool, synthesizes the adder and multiplier VHDL cores with our standard cell libraries and our configuration and small-gate-area optimization script into a Verilog netlist of logic gates. See Figure 4 for an example of the HOBFLOPS8 multiplier logic produced by FloPoCo when synthesized with our Arm Neon cell library. Note how Genus has synthesized the design to include the 3-input SEL gate (multiplexer) supported by Arm Neon.
HOBFLOPS designs are combinatorial, so synthesis timing constraints are unnecessary. In the standard cell libraries' Liberty file, we assign a value of 1.0 to the cell area and cell leakage power of each cell, and configure the cell capacitance and timing values to zero. These values ensure the synthesizer assigns equal optimization priority to all gates and produces a netlist with the fewest logic gates, rather than a netlist optimized for hardware timing propagation.
We further optimize the netlist using the open-source Yosys ASIC synthesizer [14] and ABC optimizer [15]. We use ABC's strash command to transform the network into an AND-inverter graph (AIG) by one-level structural hashing, then use the refactor function to iteratively collapse and refactor the levels of logic and the area of the netlist. We configure Yosys to produce a topologically sorted Verilog netlist of gates. The topological sorting is required because Cadence Genus writes the netlist file in output-port to input-port order, whereas the C/C++ compiler requires the converted netlist to be in input-to-output order. We formally verify the topologically sorted netlist against the original netlist with the Yosys satisfiability (SAT) solver. These netlists are re-simulated with the test bench used for the FloPoCo-generated VHDL designs and checked for matching behavior.
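The ordering requirement is the classic topological-sort problem: every gate must be emitted after the gates that produce its inputs, or the C/C++ translation would read uninitialized variables. A minimal Kahn's-algorithm sketch over gate-dependency edges (our own illustration, not the Yosys pass itself):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Topologically sort n gates given dependency edges (u, v) meaning
// "gate v consumes the output of gate u". Returns an order in which every
// gate appears after all of its producers. A result shorter than n would
// indicate a cycle, i.e., non-combinational logic.
std::vector<int> topo_sort(int n,
                           const std::vector<std::pair<int, int>> &edges) {
    std::vector<std::vector<int>> adj(n);
    std::vector<int> indeg(n, 0);
    for (const auto &e : edges) {
        adj[e.first].push_back(e.second);
        ++indeg[e.second];
    }
    std::queue<int> ready;                    // gates with all inputs known
    for (int v = 0; v < n; ++v)
        if (indeg[v] == 0) ready.push(v);
    std::vector<int> order;
    while (!ready.empty()) {
        int u = ready.front();
        ready.pop();
        order.push_back(u);
        for (int v : adj[u])
            if (--indeg[v] == 0) ready.push(v);
    }
    return order;
}
```

Since HOBFLOPS netlists are combinatorial by construction, the sort always succeeds, and the SAT-based equivalence check then confirms the reordering changed nothing but the emission order.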
Our domain-specific source-to-source generator translates the Verilog adder and multiplier netlists to Intel AVX2, AVX512, or Arm Neon bitwise operators with the correct types and pointers. Algorithm 1 shows the HOBFLOPS code for a simple 2-bit binary adder with carry, generated from 12 bitwise operations, referenced from the cell library of Listing 2. A single input bit of the hardware multiplier becomes the corresponding architecture variable type, e.g., a uint64_t for a 64-bit processor, an __m256i for an AVX2 processor, an __m512i for an AVX512 processor, or a uint32x4_t for a Neon processor. Algorithm 2 demonstrates the efficiency of the AVX512 implementation of the same simple 2-bit adder, generated from 4 bitwise operations.
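The flavor of the generated straight-line code can be conveyed with a hand-written 2-bit bitslice adder over uint64_t lanes. This sketch is ours, not the Algorithm 1 output: our naive ripple-carry form uses 10 two-input ops, close to the 12 reported for the generated netlist:

```cpp
#include <cstdint>

// Straight-line 2-bit ripple-carry adder with carry-in/carry-out, in the
// style the netlist-to-C generator emits. Each uint64_t word carries 64
// independent lanes; (a1 a0) + (b1 b0) + cin -> (cout s1 s0) per lane.
void add2_bitslice(uint64_t a0, uint64_t a1, uint64_t b0, uint64_t b1,
                   uint64_t cin,
                   uint64_t &s0, uint64_t &s1, uint64_t &cout) {
    uint64_t t0 = a0 ^ b0;               // bit 0 half-sum
    s0 = t0 ^ cin;
    uint64_t c1 = (a0 & b0) | (t0 & cin);
    uint64_t t1 = a1 ^ b1;               // bit 1 half-sum
    s1 = t1 ^ c1;
    cout = (a1 & b1) | (t1 & c1);
}
```

On AVX512 the same dataflow collapses further because each (XOR, XOR) and (AND, AND, OR) cluster maps onto single 3-input ternary-LUT instructions, which is how the generated operation count drops to 4.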
A HOBFLOPS8 multiplier targeted at the AVX2 processor, for example, is generated in 80 bitwise operations, and a HOBFLOPS8 adder is generated in 319 bitwise operations. When targeted at the AVX512 processor, the HOBFLOPS8 multiplier is generated in 53 bitwise operations, and the HOBFLOPS8 adder is generated in 204 bitwise operations.

CNN Convolution with HOBFLOPS
We present a method for CNN convolution with HOBFLOPS arithmetic. We implement HOBFLOPS MACs in a CNN convolution layer, where up to 90% of CNN computation time is spent [17]. We compare HOBFLOPS MAC performance to IEEE FP MACs and to Berkeley's SoftFP16 MulAdd function [4]. If HOBFLOPS were implemented in a multi-layer CNN, the data between layers could remain in HOBFLOPS format until the last convolutional layer. The output feature map (OFM) at the last convolutional layer could then be transformed from HOBFLOPS values to floats, so the transformation overhead occurs only at the first and last convolutional layers of the CNN model. An additional pooling layer could be developed in the HOBFLOPS format for the interface between the last convolutional layer and the fully connected layer of MobileNets; this is not done in this work.
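For reference, the loop nest in which the HOBFLOPS MAC is embedded can be sketched with a plain scalar MAC. The array layouts and function name below are our assumptions (stride 1, no padding); in the HOBFLOPS version the innermost multiply-accumulate line is replaced by the generated bitslice operators, with 32-512 output positions or kernels packed one per lane so each bitwise instruction advances every lane's MAC at once:

```cpp
#include <cstddef>
#include <vector>

// Reference 2-D convolution: IFM laid out as [C][H][W], kernels as
// [M][C][K][K], OFM as [M][H-K+1][W-K+1]. The scalar MAC in the innermost
// loop marks where the bitslice-parallel HOBFLOPS MAC slots in.
std::vector<float> conv2d(const std::vector<float> &ifm, int C, int H, int W,
                          const std::vector<float> &ker, int M, int K) {
    int OH = H - K + 1, OW = W - K + 1;
    std::vector<float> ofm(std::size_t(M) * OH * OW, 0.0f);
    for (int m = 0; m < M; ++m)                       // each kernel
        for (int oh = 0; oh < OH; ++oh)               // each output row
            for (int ow = 0; ow < OW; ++ow) {         // each output column
                float acc = 0.0f;
                for (int c = 0; c < C; ++c)
                    for (int kh = 0; kh < K; ++kh)
                        for (int kw = 0; kw < K; ++kw)
                            // HOBFLOPS replaces this scalar MAC with a
                            // bitslice MAC over all packed lanes at once
                            acc += ifm[(c * H + oh + kh) * W + (ow + kw)] *
                                   ker[((m * C + c) * K + kh) * K + kw];
                ofm[(m * OH + oh) * OW + ow] = acc;
            }
    return ofm;
}
```

The high channel count C and kernel count M of the evaluated MobileNets layer are what make the lane packing worthwhile: there are always enough independent MACs in flight to fill even 512-lane AVX512 words.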
We repeat the above for MACs up to HOBFLOPS16e on each processor architecture up to the 512-bit wide AVX512 registers.

Evaluation
We implement each of the 8- to 16e-bit HOBFLOPS multipliers and adders in a convolution layer of the MobileNets CNN [18]. We use layer Conv dw / s2 of MobileNets as it has a high number of channels C and kernels M, perfect for demonstrating high-dimensional parallelized MAC operations. We compare the performance of the HOBFLOPS16 multipliers and adders, in round-to-nearest-ties-to-even and round-towards-zero modes, to Berkeley's SoftFP16 MulAdd in rounding near_even and rounding min modes [4]. This 16-bit FP comparison acts as a baseline, as 8-bit FP is not supported by Berkeley's emulation tool.
Three machine types are used to test the HOBFLOPS MACs: an Arm Cortex-A15 Neon embedded development kit, containing an ARMv7 rev 3 (v7l) CPU at 2GHz with 2GB RAM; an Intel Core i7-8700K (AVX2) desktop with 32GB RAM; and an Intel Xeon Gold 5120 (AVX512) server with 256GB RAM.
Our cell libraries model specific cells, see Table 1. We omit the Arm Neon bit clear (BIC) instruction from our cell library, as its inclusion prevents the synthesis tool, Cadence Genus, from optimizing the netlist with SEL (multiplexer) cells, leading to a less area-efficient netlist.
To further decrease area and increase performance, we produce round-towards-zero versions of the HOBFLOPS8-HOBFLOPS16e adders, as rounding can be deferred to the activation function at the end of the layer, provided the non-rounded part of the FP value is retained through to the end of the layer.
The MACs per second, averaged over 1000 iterations of the HOBFLOPS adders and multipliers, are captured and compared. We use the GNU G++ compiler to optimize the code for the underlying target microprocessor architecture and number of registers. We compile the HOBFLOPS CNN code (see Figure 5) to include our HOBFLOPS adders and multipliers, and our cell library (e.g., Listing 2 for AVX2), with G++ (version 8.2.1 20181127 on Arm, version 9.2.0 on Intel AVX2 and version 6.3.0 20170516 on Intel AVX512 machines). We target C++17, using the -march=native, -mtune=native, -fPIC, -O3 compiler switches, with -msse for SSE devices and -mavx2 for AVX2 devices. When targeting an Intel AVX512 architecture we use the -march=skylake-avx512, -mtune=skylake-avx512, -mavx512f, -fPIC, -O3 switches. When targeting an Arm Neon device we use -march=native, -mtune=native, -fPIC, -O3, -mfpu=neon to exploit the Neon registers.
After the G++ compilation, we inspect the assembler object dump. Within the multiplier and adder units, we find an almost one-to-one correlation between the bitwise logic operations in the assembler and the gates modeled in the cell libraries, with additional loads/stores where the compiler has seen fit to insert them.

Arm Cortex-A15
We configure an Arm Cortex-A15 development kit with 2GB RAM, ARCH Linux version 4.14.107-1-ARCH installed, and fix the processor frequency at 2GHz. We run tests for 32-, 64- and 128-lanes and capture performance. We use taskset to lock the process to a core of the machine for measurement consistency. Figure 6a shows 128-lane round-to-nearest-ties-to-even performance for all arbitrary-precision HOBFLOPS FP between 8- and 16e-bits, IEEE 8- and 32-bit equivalents, and Berkeley's SoftFP versions. HOBFLOPS16 round-to-nearest-ties-to-even achieves approximately half the performance of SoftFP16 MulAdd rounding near_even mode on Arm Neon. However, HOBFLOPS offers arbitrary-precision mantissa and exponent FP between 8 and 16 bits, outperforming SoftFP16 from HOBFLOPS8 to HOBFLOPS11.
Similarly, the HOBFLOPS16 round-towards-zero version shown in Figure 6b demonstrates a slight performance improvement compared to Berkeley's SoftFP16 MulAdd rounding min mode. Figure 6b also shows the HOBFLOPS round-towards-zero versions have increased performance when compared to HOBFLOPS16 round-to-nearest-ties-to-even. While there is one more bit in the input mantissa of HOBFLOPS9 compared to HOBFLOPS8e, which leads to HOBFLOPS9 containing larger adder/accumulators, Figure 6b shows round-towards-zero HOBFLOPS9 almost counter-intuitively exhibiting slightly greater throughput than HOBFLOPS8e. The greater throughput of round-towards-zero HOBFLOPS9 is due to the absence of the rounding adder, and the resulting reduced gate area and latency.
The low bit-width, and thus the reduced hardware synthesis gate count or area seen in Figure 7a and Figure 7b, would benefit memory storage and bandwidth within the embedded system, allowing for reduced energy consumption; however, energy consumption is not measured here.

Intel AVX2
We configure an Intel Core i7-8700K desktop machine with 32GB RAM and ARCH Linux 5.3.4-arch1-1 installed. For consistency of performance measurements of the various HOBFLOPS configurations, within the BIOS we disable:
• Intel's SpeedStep (i.e., prevent the CPU performance from ramping up and down);
• Multi-threading (i.e., do not split the program into separate threads);
• TurboBoost (i.e., keep all processor cores running at the same frequency);
• Hyperthreading Control (i.e., keep one program on one processor core);
• C-States control (i.e., prevent power saving from ramping down the core clock frequency).
We alter GRUB's configuration so that intel_pstate (i.e., we lock the processor core clock frequency) and intel_cstate are disabled on both GRUB_CMDLINE_LINUX and GRUB_CMDLINE_LINUX_DEFAULT. This BIOS and Linux kernel configuration ensures the processor frequency is fixed at 4.6GHz, with no power saving, and each HOBFLOPS instance runs at full performance on a single thread and single CPU core. When executing the compiled code, taskset is used to lock the process to a single core of the CPU. These configurations allow a reproducible comparison of the timing performance of each HOBFLOPS configuration against Berkeley's SoftFP16.
We run tests for 32-, 64-, 128- and 256-lanes and capture performance. Figure 8a shows 256-lane round-to-nearest-ties-to-even results for all arbitrary-precision HOBFLOPS FP between 8- and 16e-bits, IEEE 8- and 32-bit equivalents, and Berkeley's SoftFP versions. HOBFLOPS16 performs over 2.5× higher MACs/second when compared to Berkeley's SoftFP16 MulAdd rounding near_even mode. The round-towards-zero version of HOBFLOPS16 performs at around 2.7× higher MACs/second when compared to Berkeley's SoftFP16 MulAdd rounding min mode. In fact, HOBFLOPS outperforms SoftFP16 for all versions between HOBFLOPS8 and HOBFLOPS16e, for both round-to-nearest-ties-to-even and round-towards-zero rounding modes.
The HOBFLOPS8 performance gain is due to the reduced synthesis area of the HOBFLOPS units, as seen in Figure 8b. This reduction in area, also seen for round-towards-zero, is key to fewer SIMD bitwise operations being generated for the HOBFLOPS MACs, and therefore reduced latency through the software HOBFLOPS MACs.

Intel AVX512
We configure an Intel Xeon Gold 5120 server with 256GB RAM and Debian Linux 4.9.189-3+deb9u2. The BIOS and clock of this shared server-grade machine could not be configured as was done for the AVX2-based machine. However, taskset is used to lock the process to a single CPU core.
We run tests for 32-, 64-, 128-, 256- and 512-lanes; Figure 9a captures the 512-lane results. The increased HOBFLOPS performance is due to HOBFLOPS' lower hardware synthesis area, which, when the netlists are converted to software bitwise operations, translates to fewer SIMD 3-input ternary logic LUT instructions in the MACs. As seen in Figure 9b, the HOBFLOPS16 area on the AVX512 platform is 38% smaller than the HOBFLOPS16 area on AVX2. A further slight performance boost is seen for round-towards-zero.

Conclusion
We propose HOBFLOPS, a method of generating fast custom-precision emulated bitslice parallel software FP arithmetic using a hardware design flow, our cell libraries and custom code-generator. We generate efficient software-emulated FP operators with an arbitrary precision mantissa and exponent. HOBFLOPS offers FP with custom range and precision, useful for FP CNN acceleration where memory storage and bandwidth are limited.
We experiment with large numbers of channels and kernels in CNN convolution. When the HOBFLOPS16 MAC is implemented in the convolution layer on Arm Neon and Intel AVX2 and AVX512 processors and compared to Berkeley's SoftFP16 MulAdd FP emulation, HOBFLOPS achieves approximately 0.5×, 2.5× and 8× the performance of SoftFP16 respectively. We show that, e.g., HOBFLOPS9 performs at approximately 2 billion MACs/second on AVX512, around 5× the performance of HOBFLOPS16, and approximately 45 million MACs/second on an Arm Neon processor, around 6× that of HOBFLOPS16.