CoAxNN: Optimizing on-device deep learning with conditional approximate neural networks

Guangli Li a,b,1, Xiu Ma c,d,1, Qiuchu Yu a,b, Lei Liu c,d, Huaxiao Liu c,d, Xueying Wang a,b,∗

a State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
b University of Chinese Academy of Sciences, Beijing, China
c College of Computer Science and Technology, Jilin University, Changchun, China
d MOE Key Laboratory of Symbolic Computation and Knowledge Engineering, Jilin University, Changchun, China

Keywords: On-device deep learning; Efficient neural networks; Model approximation and optimization

Abstract: While deep neural networks have achieved superior performance in a variety of intelligent applications, their increasing computational complexity makes them difficult to deploy on resource-constrained devices. To improve the performance of on-device inference, prior studies have explored various approximate strategies, such as neural network pruning, to optimize models based on different principles. However, combining these approximate strategies requires exploring a large parameter space, and the configuration parameters of different strategies may interfere with each other, damaging the optimization effect. In this paper, we propose a novel model optimization framework, CoAxNN, which effectively combines different approximate strategies to facilitate on-device deep learning via model approximation. Based on the principles of the different approximate optimizations, our approach constructs a design space and automatically finds reasonable configurations through genetic algorithm-based design space exploration. By combining the strengths of different approximation methods, CoAxNN enables efficient conditional inference for models at runtime. We evaluate our approach by optimizing state-of-the-art neural networks on a representative intelligent edge platform, Jetson AGX Orin. The experimental results demonstrate the effectiveness of CoAxNN, which achieves up to 1.53× speedup while reducing energy by up to 34.61%, with trivial accuracy loss on the CIFAR-10/100 and CINIC-10 datasets.

1. Introduction

Convolutional neural networks (CNNs) have achieved remarkable success in various intelligent tasks such as image classification [1]. To pursue superior performance on complex intelligent tasks, CNNs are becoming wider and deeper, leading to tremendous computational costs and expensive energy consumption for model execution. Recently, on-device deep learning has become a mainstay due to its potential for privacy protection and real-time response. However, it is hard to deploy complicated neural network models on edge devices due to their limited resources.

Many efforts have been made to enable efficient on-device deep learning via model approximation. For instance, pruning-based strategies [2] compress a neural network model by reducing redundant neurons and connections, and quantization-based methods [3] improve
the efficiency of model execution by leveraging low-precision computations. In addition to these model compression techniques, emerging staging-based approximate strategies, such as early exiting, improve model performance through conditional execution at runtime. While these methods optimize deep neural network models from different directions, we found that effectively combining them remains a challenging problem (as described in Section 2.4). To achieve efficient on-device inference, the strengths of the different optimization strategies must be fully exploited. Different approximate strategies, based on distinct principles, have their own configuration parameters. When combining different strategies, these configuration parameters may affect each other, influencing the optimization effect of the model and even leading to poor optimization results. As such, this paper aims to address the following challenging problem: How to design an efficient model optimization framework that makes full use of various model approximate strategies, so as to optimize on-device deep learning while meeting accuracy requirements?

∗ Corresponding author at: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.
E-mail addresses: [email protected] (G. Li), [email protected] (X. Ma), [email protected] (Q. Yu), [email protected] (L. Liu), [email protected] (H. Liu), [email protected] (X. Wang).
1 Guangli Li and Xiu Ma contributed equally to this work.
https://doi.org/10.1016/j.sysarc.2023.102978
Received 24 April 2023; Received in revised form 18 July 2023; Accepted 23 August 2023

In this paper, we present a novel neural network optimization framework, CoAxNN (Conditional Approximate Neural Networks), which effectively combines staging-based and pruning-based approximate strategies for efficient on-device deep learning. The staging-based approximate strategy optimizes the model structure as multiple
stages with different complexities by attaching multiple exit branches, whereas the pruning-based approximate strategy compresses the model parameters according to the importance of filters. CoAxNN takes account of both optimization principles and automatically searches for reasonable configuration parameters to construct a compressed multi-stage neural network model, thus taking full advantage of the strengths of the different approximate strategies to achieve efficient model optimization. The individual optimization techniques, pruning and staging, have been studied in the past; the key novelty of our work is to provide an effective and efficient mechanism to combine them, so as to optimize neural network performance with a reasonable configuration for a given task and platform.

The main contributions of this paper are as follows:
• We present a novel neural network optimization framework, namely CoAxNN, which effectively combines staging-based and pruning-based approximate strategies, thereby improving actual performance while meeting accuracy requirements, for efficient on-device model inference.
• According to the principles of staging-based and pruning-based approximate strategies, our framework constructs the design space and automatically searches for reasonable configuration parameters, including the number of stages, the position of stages, the threshold of stages, and the pruning rate, so as to make full use of the advantages of both and achieve efficient model optimization.
• We validate the effectiveness of CoAxNN by optimizing state-of-the-art deep neural networks on a commercial edge device, Jetson AGX Orin, in terms of prediction accuracy, execution latency, and energy consumption. Experimental results show that CoAxNN can significantly improve the performance of model inference with trivial accuracy loss.

The rest of the paper is organized as follows. The background and motivation are introduced in Section 2. The details of our optimization framework are described in Section 3. The experimental evaluation is conducted in Section 4. A discussion is given in Section 5. The conclusion is presented in Section 6.

2. Background and motivation
2.1. Pruning-based approximation

Neural network pruning, one of the most representative model compression techniques, approximates the original neural network model by removing redundant neurons or connections that contribute little to model performance. Most previous works on pruning-based approximation can be roughly divided into two categories: unstructured pruning and structured pruning.

Prior works on weight pruning [4,5] achieve high non-structured sparsity of pruned models by removing single parameters in a filter. Guo et al. [4] and Han et al. [5] used magnitude-based pruning methods, which eliminate weights with the smallest magnitude. Guo et al. [4] proposed dynamic network surgery to reduce the network complexity by making on-the-fly connection pruning. Han et al. [5] pruned low-weight connections to reduce the storage and computation demands by an order of magnitude. Some pruning research groups utilize first-order or second-order derivatives of the loss function with respect to the weights [6,7]. Optimal Brain Damage (OBD) [6] uses second-order derivatives of the loss function to prune single non-essential weights. Optimal Brain Surgeon (OBS) [7] optimized the OBD method by considering the condition that the Hessian matrix is non-diagonal. These approaches show attractive theoretical performance improvements but are difficult to support with existing software and hardware. Unstructured sparse models require specific matrix multiplication kernels and storage formats, which can hardly leverage existing high-efficiency BLAS libraries.

Unlike the early efforts on unstructured pruning that may cause irregular calculation patterns, structured pruning reduces redundant computations on unimportant filters or channels to produce a structured sparse model. The corresponding feature maps can be deleted as the filters are pruned. Therefore, much recent work has focused on filter pruning methods. SFP [2] and ASFP [8] dynamically prune filters in a soft manner, which zeroizes the unimportant filters and keeps updating them in the training stage. Li et al. [9] presented a
fusion-catalyzed filter pruning approach, which simultaneously optimizes parametric and non-parametric operators. Luo et al. [10] pruned filters based on statistics computed from the next layer. The filters of different layers may have different influences on model prediction; Li et al. [11] proposed a flexible-rate filter pruning approach, FlexPruner, which automatically selects the number of filters to be pruned for each layer. Plochaet et al. [12] introduced a hardware-aware pruning method with the goal of decreasing the inference time on FPGA deep learning accelerators, adaptively pruning the neural network based on the size of the systolic array used to compute the convolutions. To preserve robustness at a high sparsity ratio in structured pruning, Zhuang et al. [13] proposed an effective filter importance criterion that evaluates the importance of filters by estimating their contribution to the adversarial training loss. Besides, some researchers have found value in network pruning for discovering network architectures [14,15]. Liu et al. [14] demonstrated that in some cases pruning can be useful as an architecture search paradigm. Li et al. [15] proposed a random architecture search to find a good architecture given a pre-defined model by channel pruning. Li et al. [16] proposed an end-to-end channel pruning method to search out the desired sub-network automatically and efficiently, which learns per-layer sparsity through depth-wise binary convolution. Ding et al. [17] presented a neural architecture search with pruning method, which derives the most potent model by removing trivial and redundant edges from the whole neural network topology. The structured sparse model can be perfectly supported by existing libraries to achieve realistic acceleration. In this paper, we adopt filter pruning to realize practical performance improvements for neural network models.

2.2. Staging-based approximation

Prior studies [18,19] found that the difficulty of classifying an image in real-world scenarios is diverse. Easy samples can be classified
with low effort, while difficult samples require more computation for prediction. Staging-based approximate strategies, such as early exiting [18] and layer skipping [20], have emerged as a prominent technique for separating the classification of easy and hard inputs. The original neural network uses a fixed computation process for the prediction of all samples; staging-based approximate strategies perform adaptive computation for samples according to conditions at run-time. Teerapittayanon et al. [18] demonstrated that a deep neural network with additional side branch classifiers can both improve accuracy and significantly reduce the inference time of the network. Panda et al. [19] proposed Conditional Deep Learning, which cascades a linear network for each convolutional layer and monitors the output of the linear network to decide whether classification can be terminated at the current stage. Fang et al. [21] presented an input-adaptive framework for video analytics, which adopts an architecture search-based scheme to find the optimal architecture for each early exit branch. Wang et al. [22] designed dynamic layer-skipping mechanisms, which suppress unnecessary costs for easy samples and halt inference for all samples to meet resource constraints for the inference of more complicated CNN backbones. Figurnov et al. [23] studied early termination in each residual unit of ResNets. Farhadi et al. [23] implemented an early-exiting method on the FPGA platform using partial reconfiguration to reduce the amount of needed computation. Jayakodi et al. [24] used Bayesian Optimization to configure early exit neural networks to trade off accuracy and energy. To reduce unnecessary intermediate calculations in the inference process of BranchyNet, Liang et al. [25] directly determined the exit position of a sample in the multi-branch network according to the difficulty of the sample, without intermediate trial errors. Jo et al. [26] proposed a low-cost early exit network, which significantly improves energy efficiency by reducing the parameters used in inference with efficient branch structures. In this paper, we achieve a multi-stage approximate model by early exiting to accelerate model inference for input samples in real-world scenarios.

2.3. Design space exploration

Design space exploration (DSE) is a systematic analysis method, which searches for optimal solutions in a large design space according to the requirements. For example, in the staging-based approximate strategy, deciding whether or not an exit branch should be inserted at some position in the middle of the neural network model, and how the thresholds for each exit point should be set, can be seen as a DSE
problem. Panda et al. [19] and Teerapittayanon et al. [18] empirically set the location and threshold of each exit in the conditional neural network model. Jayakodi et al. [24] found the best thresholds via Bayesian Optimization for a specified trade-off between accuracy and energy consumption of inference. Park et al. [27] systematically determined the locations and thresholds of exit branches by a genetic algorithm. Park et al. [28] integrated the once-for-all technique and BPNet, considering the architectures of the base network and the exit branches simultaneously in the same search process. Besides, fine-grained filter pruning, that is, assigning reasonable pruning rates to different layers, can also be considered a classic DSE problem. Li et al. [11] proposed a flexible-rate filter pruning method, which selects the filters to be pruned with a greedy-based strategy. He et al. [29] sampled the design space using reinforcement learning, which customizes pruning for each layer, thus improving model compression. Qian et al. [30] proposed a hierarchical threshold pruning method, which considers filter importance within relatively redundant layers instead of all layers, achieving layerwise pruning for a better network structure. In this paper, we regard the configuration parameters of the staging-based and pruning-based approximate strategies as a whole design space and employ genetic algorithm (GA)-based DSE to automatically find a (near-)optimal configuration that effectively combines them, achieving efficient on-device inference. In the future, we will consider setting reasonable pruning rates for different layers.

2.4. Motivation

The pruning-based approximate strategy focuses on compressing the model; it reduces computation costs by deleting unimportant parameters, so how to set the pruning rate needs to be considered. The staging-based approximate strategy concentrates on improving the execution speed of the model; it allows the inference of most simple samples to terminate with a good prediction
at an earlier stage by attaching multiple exits to the original model, so how to place the exits and how to set a threshold for each exit should be considered in its design. Combining different approximate strategies involves more configuration parameters, and the strategies may affect each other, which potentially influences the effect of the model optimization.

Fig. 1. The optimization effect for ResNet-56 using different configuration parameters under the specified accuracy requirement.

Fig. 1 shows the optimization effect of ResNet-56 using different configuration parameters under specified accuracy requirements on the CIFAR-10 dataset, where the triples (x, y, z) represent the number of stages, the stage threshold, and the pruning rate, respectively. Fig. 1(a), (b), and (c) show the computational costs (normalized to the computational cost of the baseline model) of various optimization configurations when the accuracy is 98.1%, 98.7%, and 98.8% (normalized to the accuracy of the baseline model). In practice, a certain error can be allowed in model accuracy (±0.001); for example, 98.09% and 98.12% both meet the requirement of 98.1%. The relationship between the number of stages and the computational cost is not regular and is affected by the stage threshold and the pruning rate. For example, in Fig. 1(c), the computational cost of (3, 0.08, 0) with more stages is larger than that of (2, 0.1, 0.1), while the computational cost of (2, 0.1, 0.1) with fewer stages is larger than that of (3, 0.2, 0.1). Besides, affected by the staging-based optimization, the computational cost of the optimized model at a high pruning rate may be larger than that at a low pruning rate; for example, in Fig. 1(b), the configuration (2, 0.08, 0.3) with a pruning rate of 0.3 has a higher computational cost than (3, 0.2, 0.2) with a pruning rate of 0.2. In Fig. 1(a), we can observe from the partial experimental results that at the accuracy requirement of 98.1%, the computation of the optimized models using three stages is less than that of the models using two stages, but this pattern does not hold for other accuracy requirements such as 98.7% and 98.8%. The optimization effects of different configuration parameters are thus distinct and irregular under a specified accuracy requirement, making it difficult to find an optimal model. This example shows that it is challenging to combine different approximate strategies to achieve efficient optimization for neural network models.

Fig. 2. The optimization effect of the staging-based strategy, the pruning-based strategy, and CoAxNN for ResNet-56 on CIFAR-10.

In this paper, for a specified accuracy requirement, we focus on
combining the principles of different approximate strategies to construct a design space and automatically searching for reasonable configuration parameters, giving full play to the advantages of the different approximate strategies to achieve efficient optimization of neural network models. As shown in Fig. 2, at the accuracy requirement of 99.6%, the staging-based optimization strategy uses two stages with the threshold set to 0.07 for each stage, and the normalized computational cost is 0.89. For the pruning-based optimization, the pruning rate is set to 0.1, and the normalized computational cost is also 0.89. CoAxNN effectively combines the pruning-based and staging-based strategies, and its computational cost is 0.64, greatly improving computational performance.

3. Methodology

3.1. Overview

In this paper, we propose an efficient optimization framework for neural network models, CoAxNN, which automatically searches for reasonable configuration parameters through GA-based DSE. CoAxNN effectively combines staging-based with pruning-based approximate strategies to make full use of the strengths of both, thereby improving actual performance while meeting the accuracy requirements of neural network models.

The overview of CoAxNN is shown in Fig. 3. First, for the original deep neural network model, CoAxNN applies the staging-based and pruning-based approximate strategies according to the genes of the chromosome of each individual, which generates a compressed multi-stage model. According to the availability of stages in the genes, CoAxNN attaches exit branches to the original model to build a multi-stage conditional activation model. According to the threshold of each stage, CoAxNN predicts input samples of distinct difficulties with multiple stages of different computational complexities, using the entropy-aware activation mechanism. The obtained multi-stage model is compressed by removing unimportant filters, thereby further reducing computational costs. Next, CoAxNN evaluates the fitness of the corresponding individual according to the accuracy and latency of the compressed multi-stage model and sorts the individuals by their fitness. Then, the chromosome pool is updated, generating the next generation of individuals. After the evolution of multiple generations, which repeats the above steps, we obtain several individuals with optimal performance.

Fig. 3. Overview of CoAxNN.
3.2. Staging-based approximate optimization

In general, executing a neural network model is a one-stage approach, which processes all inputs in the same manner, i.e., starting from the input operator and proceeding operator by operator until the final exit operator. Prior studies [19] found that classification difficulty varies widely across inputs in real-world scenarios, so different computational complexities need to be considered when predicting inputs. Most input samples can be correctly classified by employing only a part of a neural network, without the computation effort of the entire network. The early exiting strategy often comes into play here, allowing simple inputs to exit early with a good prediction through the addition of multiple exit points. By leveraging the early exiting strategy, CoAxNN achieves a staging-based approximation that gives an early exiting opportunity to simple inputs.

We denote a neural network model as N = {f_1, f_2, …, f_m}, which consists of m operators. In CoAxNN, a multi-stage model N* can be formalized as follows:

$N^{*} = \bigcup_{i=1}^{\tau} S_{i}$    (1)

where τ is the number of stages and S_i is an approximate model under the staging-based strategy, which can be formalized as:

$S_{i} = \begin{cases} B_{i} + C_{i} + E_{i}, & 1 \le i < \tau \\ N, & i = \tau \end{cases}$    (2)

where B_i = {f_1, f_2, …, f_{p_i}} represents a part of the original neural network with p_i operators, E_i = {f*_1, f*_2, …, f*_{b_i}} represents an additional exit branch with b_i operators, and C_i = {c_i, ε_i} represents an exit checker, consisting of a threshold ε_i and a conditional activation operator c_i that uses the threshold ε_i. In particular, S_τ = N denotes the original (main) neural network model.

It is non-trivial to design a staging-based approximate strategy for adaptive conditional inference of a multi-stage model, and the following factors need to be considered:

• Number of E_i. A multi-stage model with an arbitrary number of exits can be built through stage availability. However, too few exits cannot cover the diverse classification difficulty of input samples, whereas too many exits increase the latency of hard samples that do not exit early.
• Selection of Attached Position (p_i) for E_i. An exit at an earlier position cannot provide satisfactory accuracy, while redundant computation may be involved at a later exit. Besides, attached exit branches may interfere with a variety of computational graph optimization methods provided by deep learning frameworks, such as operator fusion and memory reuse, increasing operation counts, data movement, and other system overheads.
• Confidence Threshold (ε_i) of C_i. The confidence threshold is used to determine whether the prediction result of stage S_i is sufficiently confident. With a threshold that is too high, complex samples may finish prediction at earlier exits with lower accuracy; with a threshold that is too low, simple samples may require more complex computations to complete inference because they cannot exit from the earlier classifiers, incurring additional computational overhead.
• Structure Design for E_i. The structure of each exit branch E_i is not identical. Each E_i consists of several operators ({f*_1, f*_2, …, f*_{b_i−1}}) used for feature extraction and a linear classifier f*_{b_i}. The feature extraction operators receive the intermediate feature map from f_{p_i} and extract higher-level features in the form required by the subsequent linear classifier. The configuration and complexity of the intermediate feature maps at different depths of the main neural network vary, making the design of E_i arduous. The f*_{b_i} operator produces classification results based on the output of f*_{b_i−1}, and the number of input feature maps for f*_{b_i} differs at each E_i.
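To make the decomposition above concrete, the following PyTorch sketch builds a toy three-stage network in which each stage S_i reuses a prefix of the backbone (B_i) and adds a small exit branch E_i consisting of pooling and a linear classifier. The layer shapes, branch design, and class count are illustrative assumptions rather than the exact CoAxNN architecture.

import torch
import torch.nn as nn

class TinyMultiStageNet(nn.Module):
    """Toy backbone split into groups of blocks, with an exit branch
    (pooling + linear classifier) attached after each early group.
    The branch layout is illustrative, not the structure used in the paper."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.groups = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU()),
        ])
        # One exit branch after each group except the last (the original exit).
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)),
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)),
        ])
        self.final_exit = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        logits = []  # one prediction per stage S_1 ... S_tau
        for i, group in enumerate(self.groups):
            x = group(x)
            if i < len(self.exits):
                logits.append(self.exits[i](x))
        logits.append(self.final_exit(x))
        return logits

# Example: three stages, each returning class logits for the same input batch.
outputs = TinyMultiStageNet()(torch.randn(2, 3, 32, 32))
print([o.shape for o in outputs])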
๐’Š. We introduce a feature extractor and a linear classifier for each exit branch ๎ˆฎ ๐‘–. The structure of the feature extractor is designed with the building block as granularity. This design not only retains the original neural network structure but also provides more opportunities for system-level optimizations. Generally, operators for feature extraction also contain non-linear activation operators such as rectified linear units, and normalization operators such as batch normalization. Besides, prior studies [24] revealed that the output feature maps of operators at shallow depths of a neural network have a relatively large height and width, which results in a large number of input feature maps being passed to the linear classifier of former exits, thus leading to a long latency for easy samples that exit early. As such, in CoAxNN, we add an extra pooling operator after the last feature extraction operator of shallow ๎ˆฎ ๐‘–. โ€ข Confidence Measure in ๎‰‰ ๐’Š. The ๎ˆฏ ๐‘–. A reliable ๎ˆฏ
linear classifier of former exits, thus leading to a long latency for easy samples that exit early. As such, in CoAxNN, we add an extra pooling operator after the last feature extraction operator of shallow ๎ˆฎ ๐‘–. โ€ข Confidence Measure in ๎‰‰ ๐’Š. The ๎ˆฏ ๐‘–. A reliable ๎ˆฏ ๐‘– takes a threshold checking step, which determines whether an input returns from the current exit or continues to the next exit according to the prediction result of ๎ˆฟ ๐‘– should have the ability to identify whether the classification results are sufficiently confident. There are var- ious methods [31], including maximum probability, entropy, and margin, for the design of ๎ˆฏ ๐‘–. Prior work [24] has demonstrated the performance of the aforementioned three confidence types is almost identical. CoAxNN chooses to use the entropy of predicted probability as the entropy-aware activation operator (๐‘๐‘–) to evaluate the confidence of the prediction result for the input sample (๐‘ฅ) of the ๐‘–th stage classifier, as follows: entropy( ฬ‚๐‘ฆ๐‘–) = ๐ถ โˆ‘ ๐‘=1
almost identical. CoAxNN chooses to use the entropy of predicted probability as the entropy-aware activation operator (๐‘๐‘–) to evaluate the confidence of the prediction result for the input sample (๐‘ฅ) of the ๐‘–th stage classifier, as follows: entropy( ฬ‚๐‘ฆ๐‘–) = ๐ถ โˆ‘ ๐‘=1 ฬ‚๐‘ฆ๐‘–(๐‘) log ฬ‚๐‘ฆ๐‘–(๐‘) (3) where ฬ‚๐‘ฆ๐‘– is the probability distribution of the output of the linear classifier ๐‘“ โˆ— on different classification labels, calculated by the soft- ๐‘๐‘– max operator, and ๐ถ is the number of classes. An entropy threshold ๐œ€๐‘– is used to decide whether an input returns the prediction of the current exit or activates the latter operators. A higher confidence value implies that the input sample that arrived at the current exit is hard and needs to be processed by a more complex stage to complete accurate classification. 3.3. Pruning-based approximate optimization In addition to the staging-based approximate strategy that pro- vides adaptive computing based on conditional activation at runtime,
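As a concrete illustration of the exit checker C_i, the sketch below computes the entropy of the predicted distribution and compares it against a stage threshold; the threshold and logits are arbitrary example values. Eq. (3) writes the sum without a leading minus sign, so the code uses the magnitude of that sum (the Shannon entropy), which makes a small value correspond to a confident prediction and matches the e < ε_i check in Algorithm 2.

import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    """Entropy of the softmax distribution (magnitude of the sum in Eq. (3))."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def should_exit(logits, threshold):
    """Exit early when the entropy is below the stage threshold (e < epsilon_i)."""
    return bool(prediction_entropy(logits) < threshold)

# Example: a confident prediction exits early, an uncertain one does not.
confident = torch.tensor([8.0, 0.1, 0.2])
uncertain = torch.tensor([0.5, 0.4, 0.6])
print(should_exit(confident, 0.1), should_exit(uncertain, 0.1))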
3.3. Pruning-based approximate optimization

In addition to the staging-based approximate strategy, which provides adaptive computing based on conditional activation at runtime, CoAxNN also integrates a pruning-based approximate strategy to compress the model size. The neural network pruning technique has been widely studied and can be broadly categorized into structured and unstructured pruning. Structured pruning, such as filter pruning, has higher computational efficiency than unstructured pruning [32]. We therefore employ filter pruning, which not only deletes the redundant computations of unimportant filters but also removes the corresponding feature maps, providing realistic performance improvements. In CoAxNN, we utilize the filter pruning method to compress the multi-stage model and quantify the importance of each filter in a convolutional operator based on its ℓ2-norm:

$\|F_{r}\|_{2} = \sqrt{\sum_{t=1}^{k} \sum_{i=1}^{m} \sum_{j=1}^{n} w_{t,i,j}^{2}}$    (4)

where F_r indicates the r-th filter in a convolutional operator, w_{t,i,j} denotes the element of F_r residing in the i-th row and j-th column of the t-th channel, k denotes the number of input channels, m denotes the filter height, and n denotes the filter width. Filters with a smaller ℓ2-norm are given higher priority to be pruned than those with a larger ℓ2-norm. To keep the model capacity and minimize the loss of accuracy as much as possible, we utilize a dynamic pruning scheme [2] for staging-based approximate CNNs, which zeroizes the pruned filters and keeps updating them in the re-training process.
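As an illustration of Eq. (4) and the soft pruning scheme, the following PyTorch sketch zeroizes the filters of one convolutional weight tensor with the smallest ℓ2-norms; the tensor shape and pruning rate are arbitrary example values, and in the actual training loop the importance is re-evaluated every epoch so that zeroized filters can recover.

import torch

def soft_prune_conv(weight, pruning_rate):
    """Zeroize the filters with the smallest L2-norm (Eq. (4)).
    weight has shape (num_filters, in_channels, kH, kW)."""
    num_filters = weight.shape[0]
    num_pruned = int(num_filters * pruning_rate)   # floor, as in Algorithm 1
    if num_pruned == 0:
        return weight
    norms = weight.flatten(1).norm(p=2, dim=1)      # one L2-norm per filter
    pruned_idx = torch.argsort(norms)[:num_pruned]  # smallest norms first
    weight = weight.clone()
    weight[pruned_idx] = 0.0                        # soft pruning: zeroize, keep shape
    return weight

# Example: a conv operator with 16 filters of shape 8x3x3, pruning rate 0.25.
w = torch.randn(16, 8, 3, 3)
w_pruned = soft_prune_conv(w, pruning_rate=0.25)
print((w_pruned.flatten(1).norm(dim=1) == 0).sum().item(), "filters zeroized")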
3.4. Training of CoAxNN

Joint training trains all classifiers in a neural network model at the same time and is widely used for neural network models with exit branches [18,27]. It defines a loss function for each classifier and minimizes the weighted sum of the loss functions of all classifiers during training; therefore, each classifier provides regularization for the others, alleviating overfitting of the model. CoAxNN utilizes joint training to optimize the backbone neural network and the exit branches at the same time, minimizing the weighted sum of the cross-entropy loss functions of all stages:

$L_{\mathrm{joint}} = \sum_{i=1}^{\tau} \lambda_{i} \, L_{\mathrm{CE}}(y, \bar{y}_{i})$    (5)

where λ_i represents the weight of the loss function of the i-th stage, y is the ground-truth class of x, which is shared by all stages, ȳ_i is the output of the linear classifier f*_{b_i} of the i-th stage, and the cross-entropy loss function L_CE is calculated as follows:

$L_{\mathrm{CE}}(y, \bar{y}_{i}) = -\sum_{c=1}^{C} y(c) \log \frac{e^{\bar{y}_{i}(c)}}{\sum_{j=1}^{C} e^{\bar{y}_{i}(j)}}$    (6)
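To illustrate Eqs. (5) and (6), the following PyTorch sketch computes the joint loss from the per-stage logits of a multi-stage model; the stage weights, batch size, and class count are arbitrary example values, and the normalization by the sum of the weights follows Algorithm 1.

import torch
import torch.nn.functional as F

def joint_loss(stage_logits, targets, stage_weights):
    """Weighted sum of per-stage cross-entropy losses (Eq. (5)).
    F.cross_entropy applies the softmax of Eq. (6) internally."""
    total = sum(w * F.cross_entropy(logits, targets)
                for w, logits in zip(stage_weights, stage_logits))
    # Algorithm 1 normalizes the weighted loss by the sum of the stage weights.
    return total / sum(stage_weights)

# Example with three stages, a batch of 4 samples, and 10 classes.
logits = [torch.randn(4, 10, requires_grad=True) for _ in range(3)]
targets = torch.randint(0, 10, (4,))
loss = joint_loss(logits, targets, stage_weights=[1.0, 1.0, 1.0])
loss.backward()
print(loss.item())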
The training process of CoAxNN is summarized in Algorithm 1. It is given the training dataset D, the number of training epochs epoch_max, the batch size ρ, the original deep neural network model N, the number of stages τ, the weights λ for the loss functions of all stages, and the chromosome pool P. First, based on the genes of each chromosome, CoAxNN performs the staging-based optimization strategy, which approximates the original neural network model as a multi-stage conditional activation model by attaching exit branches (Lines 1–11). Then, the generated multi-stage model is initialized randomly (Lines 12–13). Next, the model is compressed and tuned according to the training data D and the pruning-rate gene (P[p].pruning_rate) of the chromosome over epoch_max epochs (Lines 14–34). In each epoch, CoAxNN calculates the loss function according to Eq. (5) and updates the weights by the traditional backpropagation algorithm (Lines 15–26). Besides, for each convolutional operator in the approximate multi-stage model, CoAxNN obtains the number of filters (t) and calculates the ℓ2-norm of each filter according to Eq. (4); the dynamic pruning scheme then zeroizes the ⌊t × P[p].pruning_rate⌋ filters with the lowest ℓ2-norm (Lines 27–33). A pruned filter can be updated again once it is found to be important, thus maintaining the learning ability of the model. In the pruning process of each epoch, CoAxNN reorders the filters of each convolutional operator by importance and selects the filters to be pruned. Finally, the trained model N′ is obtained (Line 36).

Algorithm 1: CoAxNN training
Input: training data: D, training epochs: epoch_max, batch size: ρ, original model backbone: N, the number of stages: τ, the weights for loss functions: λ, chromosomes: P
Output: trained models: N′
1  for p = 1 → P.size() do
     // Generate model structure
2    N′[p] = N;
3    for i = 1 → τ − 1 do
4      if P[p][i] is available then
5        Construct B_i from N;
6        Construct E_i according to B_i;
7        Construct C_i according to P[p][i].threshold;
8        S_i = B_i + C_i + E_i;
9        N′[p] = N′[p] ∪ S_i;
10     end
11   end
     // Tune and prune model parameters
12   N′[p] = LoadModel(N′[p], initial_weights);
13   train_batches = make_batch(D, ρ);
14   for epoch = 1 → epoch_max do
15     foreach (input, target) ∈ train_batches do
16       output = N′[p].forward(input);
17       weighted_loss ← 0;
18       for i = 1 → τ do
19         if P[p][i] is available then
20           loss = CrossEntropy(output[i], target);
21           weighted_loss += λ[i] × loss;
22         end
23       end
24       weighted_loss = weighted_loss / sum(λ);
25       N′[p].backward(weighted_loss);
26     end
27     foreach f ∈ N′[p] do
28       if f.type == CONV then
29         t ← the number of filters of f;
30         Calculate the ℓ2-norm of the filters;
31         Zeroize the ⌊t × P[p].pruning_rate⌋ filters with the lowest ℓ2-norm;
32       end
33     end
34   end
35 end
36 return N′;

3.5. GA-based design space exploration

To effectively combine the staging-based with the pruning-based approximate strategies, the design space of CoAxNN includes the number of stages, the position of each stage, the threshold of each stage, and the pruning rate, which forms a very large search space. When the number of stages is τ, the search space for determining which stages are available is 2^τ, the search space for the thresholds is Q^τ, where Q is the number of candidate thresholds, and the search space for the pruning rate is R, the number of candidate pruning rates. Since these parameters are configured independently, the overall search space is as large as 2^τ × Q^τ × R. It is laborious to explore such a large parameter space by brute-force search.
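For instance, with illustrative values of τ = 4 stages, Q = 10 candidate thresholds, and R = 6 candidate pruning rates (these counts are assumptions, not values given above), the space already contains

$2^{\tau} \times Q^{\tau} \times R = 2^{4} \times 10^{4} \times 6 = 960{,}000$

configurations, which is far too many to train and evaluate exhaustively.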
CoAxNN adopts a genetic algorithm for design space exploration. Genetic algorithms [33] are inspired by biological evolution based on Charles Darwin's theory of natural selection and are often used to find (near-)optimal solutions in a large search space. In CoAxNN, the number of genes on each chromosome is 2 × (τ − 1) + 1: for each of the first τ − 1 stages, CoAxNN uses two genes, one indicating whether the stage is available and the other giving the threshold of the stage; in addition, CoAxNN uses one gene to represent the pruning rate. The fitness of a single individual is represented by a 2-tuple (accuracy, latency). GA-based DSE aims to increase accuracy and reduce latency, finding the (near-)optimal solutions for model performance.
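A minimal sketch of this encoding is given below; the candidate threshold and pruning-rate sets, the number of stages, and the pool size are illustrative assumptions. Each chromosome carries an (available, threshold) gene pair for each of the first τ − 1 stages plus a single pruning-rate gene, i.e., 2 × (τ − 1) + 1 genes.

import random

TAU = 4                                  # number of stages (illustrative)
THRESHOLDS = [0.05, 0.08, 0.1, 0.2]      # candidate thresholds (illustrative)
PRUNING_RATES = [0.0, 0.1, 0.2, 0.3]     # candidate pruning rates (illustrative)

def random_chromosome():
    """(available, threshold) pair per early stage, plus one pruning-rate gene."""
    stages = [(random.random() < 0.5, random.choice(THRESHOLDS))
              for _ in range(TAU - 1)]
    return {"stages": stages, "pruning_rate": random.choice(PRUNING_RATES)}

def decode(chromosome):
    """Describe the multi-stage configuration encoded by a chromosome."""
    exits = [(i + 1, thr) for i, (avail, thr) in enumerate(chromosome["stages"]) if avail]
    return {"exit_positions_and_thresholds": exits,
            "pruning_rate": chromosome["pruning_rate"]}

pool = [random_chromosome() for _ in range(8)]   # initial chromosome pool
print(decode(pool[0]))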
Algorithm 2 shows how accuracy and latency are evaluated for the individuals. It is given the test dataset D, the number of stages τ, and the chromosome set P. For each individual, CoAxNN obtains the model net configured with the corresponding genes (Line 4). Then, the test dataset D is predicted by the model, and the prediction output is obtained (Line 6). For each input sample, CoAxNN traverses all available stages and calculates the confidence e of the corresponding output at each stage according to Eq. (3) (Lines 7–12). If the confidence (e) is less than the confidence threshold (ε_i) of the stage, the prediction ends, and the accuracy of the sample at this stage is added to the accuracy score (δ[p]) of the current individual p (Lines 13–16). The accuracy function returns 1 if the prediction is correct, and 0 otherwise. When a sample does not exit from the first τ − 1 stages, it must exit from the τ-th stage; therefore, in the τ-th stage, the accuracy is directly added to the accuracy score (δ[p]) (Lines 17–19). The evaluation of latency is similar: CoAxNN evaluates the latency score (μ[p]) in the same manner as the accuracy score (δ[p]), accumulating the latency of the backbone neural network and the exit branches until the end of the prediction (Lines 8–10). For the latency, we profile the original network with all possible exit branches attached on the target edge device and record the execution time of all operators. Finally, the average accuracy score (δ) and the average latency score (μ) of all individuals are obtained (Lines 23–26).

Algorithm 2: Performance Collection
Input: test data: D, the number of stages: τ, chromosomes: P
Output: accuracy for each configuration of neural network models: δ, latency for each configuration of neural network models: μ
1  for p = 1 → P.size() do
2    δ[p] ← 0;
3    μ[p] ← 0;
4    net = getModel(P[p]);
5    foreach (input, target) ∈ D do
6      output = net.forward(input);
7      for i = 1 → τ do
8        μ[p] += computeLatency(B_i);
9        if P[p][i] is available then
10         μ[p] += computeLatency(⋃_{j=1}^{i} E_j);
11         if i ≠ τ then
12           e ← Compute entropy of output[i];
13           if e < ε_i then
14             δ[p] += accuracy(output[i], target);
15             break;
16           end
17         else
18           δ[p] += accuracy(output[i], target);
19         end
20       end
21     end
22   end
23   δ[p] = δ[p] / D.size();
24   μ[p] = μ[p] / D.size();
25 end
26 return (δ, μ);

GA-based DSE obtains (near-)optimal solutions with respect to the goals of accuracy and latency. Users choose a (near-)optimal solution among them according to their requirements: if the accuracy requirement is high, the model with the least computation cost under a trivial accuracy loss is selected; if a certain accuracy loss can be tolerated, a model with a much lower computation cost is selected. Finally, the unavailable branches and unimportant filters are removed to obtain an optimized neural network model.
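The selection step can be pictured with the short sketch below, which filters the solutions returned by the GA by a user-specified accuracy requirement (with the ±0.001 tolerance mentioned in Section 2.4) and keeps the lowest-latency one; the listed solutions are made-up examples, not results from the paper.

# Each GA solution is a (normalized accuracy, normalized latency, config) tuple.
solutions = [
    (1.000, 0.95, "2 stages, thr 0.05, prune 0.0"),   # made-up examples
    (0.996, 0.70, "2 stages, thr 0.07, prune 0.1"),
    (0.988, 0.55, "3 stages, thr 0.10, prune 0.2"),
]

def select(solutions, accuracy_requirement, tolerance=0.001):
    """Pick the lowest-latency solution whose accuracy meets the requirement."""
    feasible = [s for s in solutions if s[0] + tolerance >= accuracy_requirement]
    return min(feasible, key=lambda s: s[1]) if feasible else None

print(select(solutions, accuracy_requirement=0.996))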
4. Evaluation

4.1. Experimental setting

Evaluation Platforms. We conduct the optimization of neural network models with PaddlePaddle,² an open-source deep learning framework, on a server with Intel Xeon CPUs and an Nvidia A100 GPU. We evaluate the realistic speedup and energy consumption of the optimized models on a representative intelligent edge platform, Jetson AGX Orin, which integrates Ampere GPUs and Arm Cortex CPUs. For the genetic algorithm, we adopt OpenGA [34] and NSGA-III [35].

² https://www.paddlepaddle.org.cn/en.

Benchmark Datasets and Models. We demonstrate the effectiveness of our proposed method on the CIFAR [36] and CINIC-10 [37] datasets. The CIFAR dataset, which consists of 50,000 images for training and 10,000 images for testing, contains two datasets, CIFAR-10 and CIFAR-100, categorized into 10 and 100 classes, respectively. CINIC-10, consisting of 270,000
images, is split into three equal-sized train, validation, and test subsets and is categorized into 10 classes. We adopt the state-of-the-art residual neural network (ResNet) [1], which has less redundancy and is more challenging to compress and accelerate than conventional model structures, as the model architecture. ResNet-20/32/56/110 models are evaluated on the CIFAR-10 dataset, ResNet-56/110 models are evaluated on the CIFAR-100 dataset, and ResNet-18/50 models are evaluated on the CINIC-10 dataset.
Hyper-parameters Setting. For staging-based approximation, we attach exit branches after each residual block by default to build a multi-stage model, and the weight of the loss function of each stage is
set to 1.0 by default. For pruning-based approximation, we follow the same data augmentation strategies and scheduling settings as [1].

4.2. GA-based design space exploration

The GA-based DSE, which takes increasing accuracy and reducing latency as its goals, evaluates and sorts the solutions in the design space; after the survival of the fittest over multiple generations of individuals, the (near-)optimal solutions with respect to accuracy and latency are obtained.
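The paper performs this search with OpenGA and NSGA-III; purely as an illustration, the sketch below shows a much-simplified genetic loop over CoAxNN-style chromosomes (pruning rate, exit positions, thresholds) that keeps a single non-dominated front of (accuracy, latency) trade-offs. All bounds, field names, and the selection scheme are assumptions, not the authors' implementation.

```python
import random

PRUNE_RATES = [0.0, 0.1, 0.2, 0.3]
NUM_BLOCKS, NUM_STAGES = 27, 3           # e.g., a ResNet-56-like backbone

def random_chromosome():
    positions = sorted(random.sample(range(1, NUM_BLOCKS), NUM_STAGES - 1))
    return {"rate": random.choice(PRUNE_RATES),
            "positions": positions,
            "thresholds": [round(random.uniform(0.01, 0.8), 3) for _ in positions]}

def mutate(c):
    # Jitter the confidence thresholds; rate/positions could be mutated similarly.
    return dict(c, thresholds=[min(0.99, max(0.001, t + random.gauss(0, 0.02)))
                               for t in c["thresholds"]])

def dominates(a, b):
    # a, b are (accuracy, latency): higher accuracy and lower latency are better.
    return a != b and a[0] >= b[0] and a[1] <= b[1]

def evolve(evaluate, pop_size=20, generations=10):
    # evaluate(chromosome) -> (accuracy, latency), e.g., via Algorithm 2.
    pop = [random_chromosome() for _ in range(pop_size)]
    front = pop
    for _ in range(generations):
        scored = [(c, evaluate(c)) for c in pop]
        front = [c for c, f in scored
                 if not any(dominates(g, f) for _, g in scored)]
        pop = front + [mutate(random.choice(front))
                       for _ in range(pop_size - len(front))]
    return front   # non-dominated (accuracy, latency) trade-offs
```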
Fig. 4 shows the solutions obtained by GA-based DSE for ResNet-20, ResNet-32, ResNet-56, and ResNet-110 on the CIFAR-10 dataset. The x-axis and y-axis represent the top-1 accuracy and latency, normalized to those of the corresponding baseline model, respectively. The points marked by green dots are the design points of the brute-force algorithm, and the points marked by red triangles are the (near-)optimal results found by CoAxNN. The optimal solutions found by brute force lie on the boundary between the green and red regions. It can be observed that the (near-)optimal solutions searched by CoAxNN are close to this boundary, which demonstrates the effectiveness of CoAxNN. Therefore, through GA-based DSE, CoAxNN can in most cases find the model with the least computational cost that meets the accuracy requirements.
4.3. Performance of optimized models

We compare CoAxNN with state-of-the-art optimization methods such as ASRFP [38]. For the sake of fairness, the accuracy numbers are cited directly from the original papers. Different hyper-parameters, such as the learning rate, are used by distinct optimization methods, so the accuracy of the baseline model may differ slightly. Therefore, both the accuracy of the baseline model and that of the optimized model are shown in our experimental results, and "Acc. Drop" denotes the accuracy drop of the model after optimization. A smaller "Acc. Drop" is better, and a negative value indicates that the optimized model has higher accuracy than the baseline model. This is because model optimization has a regularization effect,
which can reduce the overfitting of neural network models [2,18]. To avoid the interference of randomness, we run each experiment three times and report the mean and standard deviation (mean ± std) of accuracy. Besides, we employ FLOPs to quantify the computational costs of neural network models.

4.3.1. ResNets on CIFAR-10

Table 1 shows the accuracy and FLOPs of ResNet-20/32/56/110 on the CIFAR-10 dataset. CoAxNN reduces the computational complexity of the original neural network models while meeting the accuracy requirements. The ResNet-20, ResNet-32, ResNet-56, and ResNet-110 models optimized by CoAxNN achieve FLOPs reductions from 4.06E7, 6.89E7, 1.25E8, and 2.53E8 (refer to Table 2) to 3.00E7, 4.89E7, 8.06E7,
and 1.63E8, i.e., reductions of 25.94%, 28.93%, 35.76%, and 35.57% in computational complexity, with accuracy losses of 0.67%, 0.84%, 0.74%, and 0.63%, respectively. Moreover, CoAxNN can use less computation to achieve top-1 accuracy comparable to other state-of-the-art model optimization methods. For example, ResNet-20 optimized by SFP demands 2.43E7 FLOPs while reducing the top-1 accuracy by 1.37%; the ResNet-20 optimized by CoAxNN consumes less computation, i.e., 2.27E7 FLOPs, and drops by 1.39% in top-1 accuracy. CoAxNN reduces the computational cost of ResNet-32 to 3.44E7 FLOPs with a 1.58% accuracy drop, whereas MIL spends more computation (4.70E7 FLOPs), reducing the top-1 accuracy by
1.59%. The ResNet-56 compressed by SFP achieves a FLOPs reduction of 52.60% with an accuracy loss of 1.33%, while CoAxNN decreases the computational cost of ResNet-56 by 54.88% with a 1.22% accuracy drop. The ResNet-110 optimized by GAL reduces FLOPs by 48.50% with a 0.81% drop in top-1 accuracy; CoAxNN achieves a similar accuracy loss (0.88%) while reducing the computational complexity by 62.09%. For the original neural network models, CoAxNN automatically searches for a reasonable configuration that effectively optimizes the computational complexity while meeting the accuracy requirements. For the same accuracy requirement, CoAxNN reduces more computation than existing methods, achieving lower resource consumption.
Fig. 4. The solutions with GA-based DSE on the CIFAR-10 dataset. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 1
Performance of optimized neural network models on CIFAR-10 (see [39–43]).
Model | Method | Top-1 Acc. Baseline (%) | Top-1 Acc. Accelerated (%) | Top-1 Acc. Drop (%) | #FLOPs | FLOPs ↓ (%)
ResNet-20 | MIL [39] | 92.49 | 91.43 | 1.06 | 2.61E7 | 36.00
ResNet-20 | SFP [2] | 92.20 | 90.83 | 1.37 | 2.43E7 | 42.20
ResNet-20 | FPGM [40] | 92.20 | 91.09 | 1.11 | 2.43E7 | 42.20
ResNet-20 | TAS [41] | 92.88 | 90.97 | 1.91 | 2.19E7 | 46.20
ResNet-20 | CoAxNN (0.67%) | 92.68 | 92.01 (±0.43) | 0.67 | 3.00E7 | 25.94
ResNet-20 | CoAxNN (1.39%) | 92.68 | 91.29 (±0.26) | 1.39 | 2.27E7 | 44.02
ResNet-32 | MIL [39] | 92.33 | 90.74 | 1.59 | 4.70E7 | 31.20
ResNet-32 | SFP [2] | 92.63 | 90.08 | 2.55 | 4.03E7 | 41.50
ResNet-32 | TAS [41] | 93.89 | 91.48 | 2.41 | 4.08E7 | 41.00
ResNet-32 | CoAxNN (0.84%) | 93.56 | 92.72 (±0.13) | 0.84 | 4.89E7 | 28.93
ResNet-32 | CoAxNN (1.58%) | 93.56 | 91.98 (±0.41) | 1.58 | 3.44E7 | 49.98
ResNet-56 | SFP [2] | 93.59 | 92.26 | 1.33 | 5.94E7 | 52.60
ResNet-56 | ASFP [8] | 93.59 | 92.44 | 1.15 | 5.94E7 | 52.60
ResNet-56 | CP [42] | 92.8 | 90.9 | 1.90 | – | 50.00
ResNet-56 | AMC [29] | 92.8 | 91.9 | 0.90 | 6.29E7 | 50.00
ResNet-56 | CoAxNN (0.74%) | 94.15 | 93.41 (±0.05) | 0.74 | 8.06E7 | 35.76
ResNet-56 | CoAxNN (1.22%) | 94.15 | 92.93 (±0.25) | 1.22 | 5.66E7 | 54.88
ResNet-110 | SFP [2] | 93.68 | 92.90 | 0.78 | 1.21E8 | 52.30
ResNet-110 | ASRFP [38] | 94.33 | 93.69 | 0.67 | 1.21E8 | 52.30
ResNet-110 | TAS [41] | 94.97 | 94.33 | 0.64 | 1.19E8 | 53.00
ResNet-110 | GAL [43] | 93.26 | 92.74 | 0.81 | – | 48.50
ResNet-110 | CoAxNN (0.63%) | 94.42 | 93.79 (±0.36) | 0.63 | 1.63E8 | 35.57
ResNet-110 | CoAxNN (0.88%) | 94.42 | 93.54 (±0.17) | 0.88 | 9.59E7 | 62.09

We also analyze the FLOPs and the percentage of predicted images for different stages of the optimized ResNet-20, ResNet-32, ResNet-56, and ResNet-110, with an accuracy loss of 0.67%, 0.84%, 0.74%, and 0.63%, respectively, as shown in Table 2. Weighted average FLOPs ("Avg. #FLOPs") are computed from the exit percentage and exit FLOPs of each stage (e.g., 3.00E7 = 58.71% × 1.93E7 + 41.29% × 4.53E7), which indicates the average model performance on the entire dataset.
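A small sketch of this weighted-average computation, reproducing the ResNet-20 example from Table 2 (the function name is illustrative):

```python
# Expected (weighted-average) FLOPs over the dataset: sum over stages of
# (fraction of images exiting at the stage) x (FLOPs to reach that stage).
def average_flops(exit_fractions, stage_flops):
    assert abs(sum(exit_fractions) - 1.0) < 1e-6
    return sum(f * c for f, c in zip(exit_fractions, stage_flops))

# ResNet-20 on CIFAR-10 (Table 2): two stages.
print(average_flops([0.5871, 0.4129], [1.93e7, 4.53e7]))  # ~3.00e7
```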
CoAxNN employs distinct numbers of stages for different neural network models: two stages are used for ResNet-20, and three stages are used for the more complex ResNet-32, ResNet-56, and ResNet-110. Predictions finished at earlier stages cost less computational effort. Simple images, which make up most of the dataset, are predicted by the first few stages, which reduces the computational complexity while preserving accuracy. Besides, we show the configurations of the optimized models searched by GA-based DSE in Table 3, reporting the pruning rate, the number of stages, and the position and threshold of each stage. For the optimized ResNet-20, the pruning rate is 0, i.e., no pruning is performed, the number of stages is two, the position of the first stage is the end of the fourth residual block, the corresponding threshold is 0.3, and the second stage refers to the backbone neural
network with no confidence threshold, since images must exit from the last stage.

Table 2
Analysis of optimized models on CIFAR-10.
Model (Acc. Drop) | Stage | Percentage | #FLOPs | Avg. #FLOPs | Baseline #FLOPs | FLOPs ↓ (%)
ResNet-20 (0.67%) | 1 | 58.71% | 1.93E7 | 3.00E7 | 4.06E7 | 25.94
ResNet-20 (0.67%) | 2 | 41.29% | 4.53E7 | | |
ResNet-32 (0.84%) | 1 | 41.72% | 2.88E7 | 4.89E7 | 6.89E7 | 28.93
ResNet-32 (0.84%) | 2 | 38.78% | 5.59E7 | | |
ResNet-32 (0.84%) | 3 | 19.50% | 7.83E7 | | |
ResNet-56 (0.74%) | 1 | 44.81% | 4.76E7 | 8.06E7 | 1.25E8 | 35.76
ResNet-56 (0.74%) | 2 | 36.79% | 9.36E7 | | |
ResNet-56 (0.74%) | 3 | 18.40% | 1.35E8 | | |
ResNet-110 (0.63%) | 1 | 42.66% | 9.01E7 | 1.63E8 | 2.53E8 | 35.57
ResNet-110 (0.63%) | 2 | 30.93% | 1.79E8 | | |
ResNet-110 (0.63%) | 3 | 26.41% | 2.62E8 | | |

Table 3
Configurations optimized by GA-based DSE for CIFAR-10.
Model (Acc. Drop) | Pruning rate | Stage | Position | Threshold
ResNet-20 (0.67%) | 0 | 1 / 2 | 4 / – | 0.3 / –
ResNet-32 (0.84%) | 0 | 1 / 2 / 3 | 6 / 11 / – | 0.09 / 0.1 / –
ResNet-56 (0.74%) | 0 | 1 / 2 / 3 | 10 / 19 / – | 0.07 / 0.08 / –
ResNet-110 (0.63%) | 0 | 1 / 2 / 3 | 19 / 37 / – | 0.07 / 0.015 / –

Although ResNet-32, ResNet-56, and ResNet-110 are all optimized into three stages with a pruning rate of 0, the position and threshold of each stage are different. For the optimized ResNet-32, the thresholds are 0.09 and 0.1 for the first two stages, whose positions are the end of the 6th and 11th residual blocks of the backbone network, respectively. For the optimized ResNet-56, the first two stages are placed at the end of the 10th and 19th residual blocks with thresholds of 0.07 and 0.08. The optimized ResNet-110 uses three stages with thresholds of 0.07 and 0.015, where the positions are the end of the 19th and 37th residual blocks.
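For concreteness, a searched configuration can be written down as a small record; the field names below are illustrative, and the example instance reproduces the optimized ResNet-20 entry of Table 3 (no pruning, one exit after the fourth residual block with threshold 0.3, and the full backbone as the final stage).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StageConfig:
    position: Optional[int]      # residual block after which the exit branch sits
    threshold: Optional[float]   # confidence threshold; None for the final stage

@dataclass
class CoAxNNConfig:
    pruning_rate: float
    stages: List[StageConfig]

# Optimized ResNet-20 on CIFAR-10 (Table 3).
resnet20_cfg = CoAxNNConfig(
    pruning_rate=0.0,
    stages=[StageConfig(position=4, threshold=0.3),
            StageConfig(position=None, threshold=None)],
)
```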
4.3.2. ResNets on CIFAR-100

We evaluate CoAxNN on the CIFAR-100 dataset with ResNet-56 and ResNet-110, as shown in Table 4. Similarly, CoAxNN outperforms other state-of-the-art methods. For example, the computational complexity of ResNet-110 optimized by ASFP is 1.82E8 FLOPs, reduced by 28.20% compared to the original neural network model, at the cost of a 1.48% drop in top-1 accuracy. CoAxNN consumes 1.69E8 FLOPs, achieving a higher computation reduction of 33.34% and a lower accuracy loss of 1.30%. Although GHFP achieves a lower accuracy drop of 1.10%, it uses a higher computational complexity of 1.82E8 FLOPs. These results demonstrate the effectiveness of CoAxNN.
In addition, Table 6 shows the configurations of the optimized models with accuracy losses of 0.98% and 1.30%, searched by CoAxNN, on the CIFAR-100 dataset. Although the optimized ResNet-56 employs three stages and deactivates the pruning-based strategy, the same as on CIFAR-10, the thresholds are distinct: it uses thresholds of 0.7 and 0.65. Besides, the optimized ResNet-110 adopts three stages with a pruning rate of 0.1. We also study the FLOPs and the percentage of predicted images of the optimized models at each stage on CIFAR-100, as shown in Table 5.
CoAxNN uses three stages for ResNet-56 and ResNet-110, the same as on CIFAR-10. However, since CIFAR-100 is more complex, more complex models are required, leading to a smaller percentage of images being predicted at S1 and S2 than on CIFAR-10. For example, for ResNet-56 on CIFAR-10, the percentages of images predicted by S1, S2, and S3 are 44.81%, 36.79%, and 18.40%, respectively. For ResNet-56 on CIFAR-100, the percentages of images predicted by S1, S2, and S3 are 29.67%, 32.85%, and 37.48%, respectively. For both CIFAR-10 and CIFAR-100, most of the images in the whole dataset are predicted by the first few stages with less computation. On CIFAR-100, CoAxNN reduces the
FLOPs by 23.93% and 33.34%, with accuracy drops of 0.98% and 1.30%, for ResNet-56 and ResNet-110, respectively.

4.3.3. ResNets on CINIC-10

We utilize the CINIC-10 dataset, which consists of images from both CIFAR and ImageNet [46], to facilitate experiments on complicated image classification scenarios while avoiding the time-consuming process of model training on the entire ImageNet dataset. We evaluate CoAxNN on the CINIC-10 dataset with ResNet-18 and ResNet-50 models, which are in line with the model structures used on the ImageNet dataset. Table 7 shows the accuracy and computational cost of the optimized models. For ResNet-18, when the FLOPs are reduced from 5.49E8
(i.e., the computational cost of the original ResNet-18, refer to Table 8) to 2.21E8, a reduction of 59.80%, the top-1 accuracy drops by 1.01%. If the accuracy requirement is higher, CoAxNN can achieve a 0.50% accuracy loss while reducing the computational complexity by 43.71% for ResNet-18. ResNet-50, which requires a large amount of computation, is improved by 0.10% in top-1 accuracy, and the corresponding FLOPs are reduced from 1.18E9 (i.e., the computational cost of the original ResNet-50, refer to Table 8) to 4.63E8, a 60.75% reduction in computational complexity. We compare CoAxNN with state-of-the-art model optimization methods, FPC [47] and CCPrune [48]. FPC reduces the computational complexity by 40.48% (7.76E8 FLOPs) while
increasing the top-1 accuracy by 1.14% for the ResNet-50 model. CCPrune increases the top-1 accuracy of the ResNet-50 model by 0.23% with a computational complexity of 7.44E8 FLOPs. CoAxNN reduces the computational complexity by 49.73% (5.93E8 FLOPs) with a 0.38% improvement in top-1 accuracy. By effectively combining staging-based and pruning-based approximate strategies, CoAxNN achieves better performance than existing methods.
Moreover, we analyze the FLOPs and the percentage of predicted images at each stage for the optimized ResNet-18 and ResNet-50, with 1.01% and −0.10% accuracy drops, respectively, as shown in Table 8. For the CINIC-10 dataset, both ResNet-18 and ResNet-50 use four stages. More
Moreover, we analyze the FLOPs and predicted images at each stage for the optimized ResNet-18 and ResNet-50 with a 1.01% and โˆ’0.10% accuracy drop respectively, as shown in Table 8. For the CINIC-10 dataset, both the ResNet-18 and the ResNet-50 use four stages. More than 80% of the images are finished in the previous two stages, and less than 10% of images are predicted in the last stage. Table 9 shows the configurations of the ResNet-18 and the ResNet- 50. ResNet-18 uses four-stage with thresholds of 0.23, 0.2, and 0.4, whose position is the end of the 3, 5, and 7th residual block, and the pruning rate is 0.3. When the sample does not exit from the first few stages, it must be exited from the last stage. Therefore, the last stage JournalofSystemsArchitecture143(2023)1029789 G. Li et al. Table 4 Performance of optimized neural network models on CIFAR-100 (see [44,45]). Model Method Top-1 Acc. Baseline (%) Top-1 Acc. Accelerated (%) Top-1 Acc. Drop (%) ResNet-56 ResNet-110 MIL [39] CoAxNN (0.98%)
JournalofSystemsArchitecture143(2023)1029789 G. Li et al. Table 4 Performance of optimized neural network models on CIFAR-100 (see [44,45]). Model Method Top-1 Acc. Baseline (%) Top-1 Acc. Accelerated (%) Top-1 Acc. Drop (%) ResNet-56 ResNet-110 MIL [39] CoAxNN (0.98%) CoAxNN (2.36%) MIL [39] SFP [2] ASFP [8] ASRFP [38] GHFP [44] AHSG-HT [45] CoAxNN (1.30%) CoAxNN (3.42%) 71.33 72.75 72.75 72.79 74.14 74.39 74.39 74.39 74.46 74.17 74.17 68.37 71.77 (ยฑ0.28) 70.39 (ยฑ0.11) 70.78 71.28 72.91 73.02 73.29 72.74 72.87 (ยฑ0.19) 70.75 (ยฑ0.38) 2.96 0.98 2.36 2.01 2.86 1.48 1.37 1.10 1.72 1.30 3.42 #FLOPs 7.63E7 9.55E7 7.46E7 1.73E8 1.21E8 1.82E8 1.82E8 1.82E8 โ€“ 1.69E8 1.15E8 FLOPs โ†“ (%) 39.30 23.93 40.53 31.30 52.30 28.20 28.20 28.20 29.30 33.34 54.47 Table 5 Analysis of optimized models on CIFAR-100. Model (Acc.Drop) Stages CoAxNN Percentage #FLOPs Avg. #FLOPs Baseline #FLOPs FLOPs โ†“ (%) ResNet-56 (0.98%) ResNet-110 (1.30%) ๎ˆฟ 1 ๎ˆฟ 2 ๎ˆฟ 3 ๎ˆฟ 1 ๎ˆฟ 2 ๎ˆฟ 3 Table 6
(%) 39.30 23.93 40.53 31.30 52.30 28.20 28.20 28.20 29.30 33.34 54.47 Table 5 Analysis of optimized models on CIFAR-100. Model (Acc.Drop) Stages CoAxNN Percentage #FLOPs Avg. #FLOPs Baseline #FLOPs FLOPs โ†“ (%) ResNet-56 (0.98%) ResNet-110 (1.30%) ๎ˆฟ 1 ๎ˆฟ 2 ๎ˆฟ 3 ๎ˆฟ 1 ๎ˆฟ 2 ๎ˆฟ 3 Table 6 Configurations optimized by GA-based DSE for CIFAR-100. Model (Acc.Drop) Configurations ResNet-56 (0.98%) ResNet-110 (1.30%) Rate Stage Position Threshold Rate Stage Position Threshold 0 1 10 0.7 0.1 1 19 0.73 29.67% 32.85% 37.48% 27.60% 30.18% 42.22% 2 19 0.65 2 37 0.62 3 โ€“ โ€“ 3 โ€“ โ€“ has no threshold value. The ResNet-34 uses four-stage with thresholds of 0.08, 0.09, and 0.09, whose position is the end of the 4, 8, and 14th residual block, and the pruning rate is 0.2. Summary. As shown in Tables 1, 4, and 7, CoAxNN, which auto- matically finds (near)-optimal configurations for effectively combining staging-based and pruning-based approximate strategies, is comparable
to the state-of-the-art methods. The staging-based approximate strategies perform adaptive inference for inputs according to run-time conditions. The inference of a simple input can be terminated with good prediction confidence at an earlier stage, thereby avoiding the remaining layer-wise computations, so that the overall computation cost can be significantly reduced. However, the number of model parameters is still too large for deployment on mobile devices. The pruning-based approximate strategies remove unimportant weights or filters to obtain a thinner model. However, pruning lacks the ability to configure the neural network dynamically, which misses opportunities to optimize model inference. Based on these
previously mentioned optimization principles, CoAxNN automatically finds (near-)optimal configurations by GA-based DSE, making full use of the advantages of both, thus achieving efficient model optimization.

4.4. Realistic performance of on-device inference

To demonstrate the realistic speedup and energy savings of our approximate, compressed multi-stage models, we evaluate the performance of the models on a representative intelligent edge device, Jetson AGX Orin. For the measurement of inference latency, on the one hand, we pre-execute each neural network model 10 times to warm up the machine, and then repeat the single-batch inference 100 times to record the
average execution time, which reduces interference from factors such as system initialization. On the other hand, after executing all the operators on the device, we insert synchronization instructions to obtain timestamps, thus avoiding inaccurate measurement of the inference time.
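A minimal sketch of this measurement protocol is shown below; the synchronization hook is framework-dependent (e.g., a CUDA synchronize call) and is passed in as a parameter rather than assumed, and all names are illustrative.

```python
import time

def measure_latency(run_inference, synchronize=lambda: None,
                    warmup=10, repeats=100):
    """Average single-batch latency in milliseconds.

    run_inference: callable performing one single-batch forward pass
    synchronize:   framework-dependent device sync, so timestamps are taken
                   only after all operators have finished on the device
    """
    for _ in range(warmup):          # warm up caches, clocks, and runtime
        run_inference()
    synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        run_inference()
        synchronize()
    return (time.perf_counter() - start) / repeats * 1e3
```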
Table 10 reports the inference latency of the ResNet-20, ResNet-32, ResNet-56, and ResNet-110 models optimized by CoAxNN, which drop by 0.67%, 0.84%, 0.74%, and 0.63% in top-1 accuracy on the CIFAR-10 dataset, respectively. The results show that CoAxNN can accelerate the ResNet-20, ResNet-32, ResNet-56, and ResNet-110 models by 1.33×, 1.34×, 1.53×, and 1.51×, respectively. In general, larger models obtain a more significant speedup.
To analyze the energy consumption of the optimized models, we use jetson-stats (https://pypi.org/project/jetson-stats/) to monitor the power of the system. We perform single-batch inference 10,000 times for ResNet-20, ResNet-32, ResNet-56, and ResNet-110 on Jetson AGX Orin, and the instantaneous power readings are multiplied by the average inference time per image to compute the energy consumption of the models.
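A minimal sketch of this energy estimate follows; the power samples and the helper name are illustrative, and the printed example only back-derives a plausible average power from the baseline ResNet-56 numbers in Tables 10 and 11.

```python
def energy_per_image_mj(power_readings_w, avg_latency_ms):
    # Energy per image (mJ) = mean board power (W) x average latency (ms),
    # since 1 W x 1 ms = 1 mJ. power_readings_w: samples collected while
    # running repeated single-batch inference (e.g., via a power monitor).
    avg_power_w = sum(power_readings_w) / len(power_readings_w)
    return avg_power_w * avg_latency_ms

# Hypothetical ~4.5 W average power with the 16.89 ms baseline latency of
# ResNet-56 (Table 10) gives roughly the 76.57 mJ listed in Table 11.
print(energy_per_image_mj([4.5, 4.56], 16.89))   # ~76.5 mJ
```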
Table 11 shows the energy consumption of ResNet-20, ResNet-32, ResNet-56, and ResNet-110 with accuracy losses of 0.67%, 0.84%, 0.74%, and 0.63% on the CIFAR-10 dataset. CoAxNN reduces the energy consumption of ResNet-20, ResNet-32, ResNet-56, and ResNet-110 by 25.17%, 25.68%, 34.61%, and 33.81%, respectively. The experimental results show that the models optimized by CoAxNN improve in terms of energy consumption, and that the more complex neural network models save more energy.
We also evaluate the realistic speedup and energy reduction of models optimized by existing filter pruning approaches [2,8,11]. Tables 12 and 13 show the execution latency and energy consumption of single-batch inference for ResNet-20, ResNet-32, ResNet-56, and ResNet-110 optimized by filter pruning, with accuracy losses of 2.32%, 1.12%, 0.23%, and 0.10% on the CIFAR-10 dataset, respectively. Compared with the baseline models, the optimized models have higher execution latency and higher energy consumption. Although filter pruning can reduce theoretical computation costs and memory footprint, the
optimized models cannot obtain actual acceleration and energy reduction on Jetson AGX Orin. Therefore, the critical motivation of CoAxNN is to find a satisfying optimization configuration for practical scenarios.

Table 7
Performance of optimized neural network models on CINIC-10.
Model | Method | Top-1 Acc. Baseline (%) | Top-1 Acc. Accelerated (%) | Top-1 Acc. Drop (%) | #FLOPs | FLOPs ↓ (%)
ResNet-18 | CoAxNN (0.50%) | 87.57 | 87.07 (±0.29) | 0.50 | 3.09E8 | 43.71
ResNet-18 | CoAxNN (1.01%) | 87.57 | 86.56 (±0.43) | 1.01 | 2.21E8 | 59.80
ResNet-50 | FPC [47] | 86.63 | 87.77 | −1.14 | 7.76E8 | 40.48
ResNet-50 | CCPrune [48] | 88.30 | 88.53 | −0.23 | 7.44E8 | –
ResNet-50 | CoAxNN (−0.38%) | 88.52 | 88.14 (±0.15) | −0.38 | 5.93E8 | 49.73
ResNet-50 | CoAxNN (−0.10%) | 88.52 | 88.62 (±0.34) | −0.10 | 4.63E8 | 60.75

Table 8
Analysis of optimized models on CINIC-10.
Model (Acc. Drop) | Stage | Percentage | #FLOPs | Avg. #FLOPs | Baseline #FLOPs | FLOPs ↓ (%)
ResNet-18 (1.01%) | 1 | 50.86% | 1.35E8 | 2.21E8 | 5.49E8 | 59.80
ResNet-18 (1.01%) | 2 | 30.96% | 2.57E8 | | |
ResNet-18 (1.01%) | 3 | 13.13% | 3.79E8 | | |
ResNet-18 (1.01%) | 4 | 5.05% | 4.56E8 | | |
ResNet-50 (−0.10%) | 1 | 39.90% | 2.11E8 | 4.63E8 | 1.18E9 | 60.75
ResNet-50 (−0.10%) | 2 | 41.58% | 4.91E8 | | |
ResNet-50 (−0.10%) | 3 | 9.27% | 8.66E8 | | |
ResNet-50 (−0.10%) | 4 | 9.26% | 1.02E9 | | |

Table 9
Configurations optimized by GA-based DSE for CINIC-10.
Model (Acc. Drop) | Pruning rate | Stage | Position | Threshold
ResNet-18 (1.01%) | 0.3 | 1 / 2 / 3 / 4 | 3 / 5 / 7 / – | 0.23 / 0.2 / 0.4 / –
ResNet-50 (−0.10%) | 0.2 | 1 / 2 / 3 / 4 | 4 / 8 / 14 / – | 0.08 / 0.09 / 0.09 / –

Table 10
Speedups of optimized models by CoAxNN on Jetson AGX Orin.
Model (Acc. Drop) | Latency Baseline (ms) | Latency CoAxNN (ms) | Speedup
ResNet-20 (0.67%) | 6.26 | 4.69 | 1.33
ResNet-32 (0.84%) | 9.55 | 7.11 | 1.34
ResNet-56 (0.74%) | 16.89 | 11.05 | 1.53
ResNet-110 (0.63%) | 32.33 | 21.4 | 1.51

Table 11
Energy reductions of optimized models by CoAxNN on Jetson AGX Orin.
Model (Acc. Drop) | Energy Baseline (mJ) | Energy CoAxNN (mJ) | Reduction
ResNet-20 (0.67%) | 27.89 | 20.87 | 25.17%
ResNet-32 (0.84%) | 42.79 | 31.8 | 25.68%
ResNet-56 (0.74%) | 76.57 | 50.07 | 34.61%
ResNet-110 (0.63%) | 146.69 | 97.10 | 33.81%

Table 12
Speedups of optimized models by existing pruning approaches [2,8,11] on Jetson AGX Orin.
Model (Acc. Drop) | Latency Baseline (ms) | Latency Filter pruning (ms) | Speedup
ResNet-20 (2.32%) | 6.26 | 8.70 | 0.72
ResNet-32 (1.12%) | 9.55 | 13.73 | 0.70
ResNet-56 (0.23%) | 16.89 | 22.51 | 0.75
ResNet-110 (0.10%) | 32.33 | 42.59 | 0.76

Table 13
Energy reductions of optimized models by existing pruning approaches [2,8,11] on Jetson AGX Orin.
Model (Acc. Drop) | Energy Baseline (mJ) | Energy Filter pruning (mJ) | Reduction
ResNet-20 (2.32%) | 27.89 | 45.74 | −63.99%
ResNet-32 (1.12%) | 42.79 | 72.47 | −69.36%
ResNet-56 (0.23%) | 76.57 | 119.24 | −55.73%
ResNet-110 (0.10%) | 146.69 | 225.34 | −53.61%

Fig. 5. Accuracy of the optimized model at different stages. "CoAxNN-ALL" and "CoAxNN-ACT" denote the accuracy of the model at each stage on the whole dataset and on the images that satisfy the activation condition of the corresponding stage, respectively.
Fig. 6. Example images predicted correctly at different stages.

Table 14
Overheads of GA-based DSE.
Model | GA time (s) | Training time (s)
ResNet-20 | 1.15 | 5472
ResNet-32 | 1.60 | 1813
ResNet-56 | 1.69 | 2720
ResNet-110 | 1.46 | 7712

4.5. Ablation study

Accuracy of CoAxNN models at different stages. We study the accuracy of ResNet-56 optimized by CoAxNN at different stages, as shown in Fig. 5. In "CoAxNN-ALL", the accuracy of the model in the first few stages is lower than that of the baseline model. As the computational complexity of the model increases, the accuracy in the later stages
gradually converges to that of the baseline model. CoAxNN separates the prediction of simple and complex images by conditional activation, allowing simple images to exit from the first few stages and complex images to exit from the later stages. In "CoAxNN-ACT", the accuracy of the first few stages becomes higher and even exceeds that of the baseline model, which indicates that the first few stages have sufficient ability to classify simple images. Besides, since complex images are predicted by the later stages, the accuracy of the last stage of the optimized model is lower than that of the baseline model.
Visualization results at different stages. Fig. 6 depicts the predicted
sample images for each stage of the optimized ResNet-56 on CIFAR-10. The samples predicted at stage S1 are relatively "easy", having a small number of objects and a clear background, whereas the samples predicted at stages S2 and S3 are relatively "hard", having various objects and complex backgrounds. CoAxNN can separate "easy" images, which consume less effort, from "hard" ones, which consume more computation, significantly reducing computation costs for neural network models.
Overheads of GA-based DSE. We collect the latency of each operator of the neural network model on the edge device in a profiling phase beforehand, to be used in the GA-based search. We perform the model
optimization processes, including model training and GA-based search, on a server with Intel Xeon CPUs and an Nvidia A100 GPU. The inference of the optimized models is performed on edge devices such as Jetson AGX Orin. Table 14 shows the time for the GA-based search and the time to train the model once during model optimization. The GA-based DSE takes 1–2 s on the CPU platform, which is far less than model training (e.g., ResNet-20 takes 5472 s to train once). Therefore, the runtime overhead of the GA is negligible.

5. Discussion

Generality. CoAxNN is a generic framework for optimizing on-device deep learning via model approximation, which can be generalized to other intelligent tasks such as object detection [49]. In addition,
more approximate strategies, such as knowledge distillation [50], can be integrated into CoAxNN to further optimize neural network models.
Applicability. CoAxNN is system-independent and requires no specific software implementation or hardware design support. The models optimized by CoAxNN can be directly deployed on the target platform, especially intelligent edge accelerators. Users can choose the (near-)optimal model according to the accuracy and performance requirements of their intelligent tasks. Moreover, the time-consuming optimization process can be performed offline on high-performance servers, enabling efficient fine-tuning.
Limitations. Although CoAxNN shows the advantages of combining
staging-based with pruning-based approximate strategies for model optimization, there is still room for further improvement. On the one hand, the NSGA-III used in GA-based DSE cannot always find the optimal solutions for the goals of increasing accuracy and decreasing latency; we will explore other genetic algorithms, such as NPGA [51], for multi-objective optimization. On the other hand, a fixed-rate filter pruning strategy is used in CoAxNN. Prior work [11] demonstrated that different layers have different sensitivities with respect to model accuracy; setting different pruning ratios for different layers can potentially further improve performance, which will be explored in future studies.

6. Conclusion

In this paper, we proposed an efficient optimization framework,
CoAxNN, which effectively combines staging-based with pruning-based approximate strategies for efficient model inference on resource-constrained edge devices. Evaluation with state-of-the-art CNN models demonstrates the effectiveness of CoAxNN, which can significantly improve performance with trivial accuracy loss. We plan to integrate more model approximation strategies into CoAxNN in future work.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.
Acknowledgments

This work is supported by the National Key R&D Program of China (2021ZD0110101), the National Natural Science Foundation of China (62232015, 62302479), the China Postdoctoral Science Foundation (2023M733566), and the CCF-Baidu Open Fund, China.
Reference [1]: K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770โ€“778.
Reference [2]: Y. He, G. Kang, X. Dong, Y. Fu, Y. Yang, Soft filter pruning for accelerating deep convolutional neural networks, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), 2018, pp. 2234–2240.
Reference [3]: S.K. Esser, J.L. McKinstry, D. Bablani, R. Appuswamy, D.S. Modha, Learned step size quantization, in: International Conference on Learning Representations, 2020.
Reference [4]: Y. Guo, A. Yao, Y. Chen, Dynamic network surgery for efficient dnns, in: Advances in Neural Information Processing Systems, Vol. 29, 2016.
Reference [5]: S. Han, J. Pool, J. Tran, W. Dally, Learning both weights and connections for efficient neural network, in: Advances in Neural Information Processing Systems, Vol. 28, 2015.