Journal of Systems Architecture
journal homepage: www.elsevier.com/locate/sysarc
CoAxNN: Optimizing on-device deep learning with conditional approximate
neural networks
Guangli Li a,b,1, Xiu Ma c,d,1, Qiuchu Yu a,b, Lei Liu c,d, Huaxiao Liu c,d, Xueying Wang a,b,∗
a State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
b University of Chinese Academy of Sciences, Beijing, China
c College of Computer Science and Technology, Jilin University, Changchun, China
d MOE Key Laboratory of Symbolic Computation and Knowledge Engineering, Jilin University, Changchun, China
ARTICLE INFO
Keywords: On-device deep learning; Efficient neural networks; Model approximation and optimization

ABSTRACT
While deep neural networks have achieved superior performance in a variety of intelligent applications, the
increasing computational complexity makes them difficult to deploy on resource-constrained devices. To
improve the performance of on-device inference, prior studies have explored various approximate strategies,
such as neural network pruning, to optimize models based on different principles. However, when combining
these approximate strategies, a large parameter space needs to be explored. Meanwhile, different configuration
parameters may interfere with each other, damaging the performance optimization effect. In this paper, we
propose a novel model optimization framework, CoAxNN, which effectively combines different approximate
strategies, to facilitate on-device deep learning via model approximation. Based on the principles of different
approximate optimizations, our approach constructs the design space and automatically finds reasonable
configurations through genetic algorithm-based design space exploration. By combining the strengths of
different approximation methods, CoAxNN enables efficient conditional inference for models at runtime. We
evaluate our approach by leveraging state-of-the-art neural networks on a representative intelligent edge
platform, Jetson AGX Orin. The experimental results demonstrate the effectiveness of CoAxNN, which achieves
up to 1.53× speedup while reducing energy by up to 34.61%, with trivial accuracy loss on CIFAR-10/100 and
CINIC-10 datasets.
1. Introduction
Convolutional neural networks (CNNs) have achieved remarkable
success in various intelligent tasks such as image classification [1].
To pursue superior performance on complex intelligent tasks, CNNs
are becoming wider and deeper, leading to tremendous computational
costs and expensive energy consumption for model execution. Recently,
on-device deep learning has been a mainstay due to its immeasurable
potential for privacy protection and real-time response. However, it is
hard to deploy complicated neural network models on edge devices due
to the limited resources.
Many efforts have been made to enable efficient on-device deep
learning via model approximation. For instance, pruning-based strate-
gies [2] compress a neural network model by reducing redundant
neurons and connections, and quantization-based methods [3] improve
the efficiency of model execution by leveraging low-precision compu-
tations. In addition to these model compression techniques, emerging
staging-based approximate strategies, such as early exiting, improve
model performance by conditional execution at runtime.
While these methods optimize the deep neural network models from
different directions, we found that it is still a challenging problem
to effectively combine them (as described in Section 2.4). To achieve
efficient on-device inference, it is necessary to take full advantage of the
strengths of different optimization strategies. Different approximate
strategies, based on distinct principles, have their own configuration
parameters. When combining different strategies, the configuration
parameters of different strategies may affect each other, influencing the
optimization effect of the model, and even leading to poor optimization
results. As such, this paper aims to address the following challenging
problem: How to design an efficient model optimization framework to make
full use of various model approximate strategies, so as to optimize on-device
deep learning while meeting accuracy requirements?
∗ Corresponding author at: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.
E-mail addresses: [email protected] (G. Li), [email protected] (X. Ma), [email protected] (Q. Yu), [email protected] (L. Liu), [email protected] (H. Liu), [email protected] (X. Wang).
1 Guangli Li and Xiu Ma contributed equally to this work.
https://doi.org/10.1016/j.sysarc.2023.102978
Received 24 April 2023; Received in revised form 18 July 2023; Accepted 23 August 2023
In this paper, we present a novel neural network optimization
framework, CoAxNN (Conditional Approximate Neural Networks),
which effectively combines staging-based and pruning-based approx-
imate strategies, for efficient on-device deep learning. The staging-
based approximate strategy optimizes the model structure as multiple
stages with different complexities by attaching multiple exit branches,
whereas the pruning-based approximate strategy compresses the model
parameters according to the importance of filters. CoAxNN takes ac-
count of both their optimization principles and automatically searches
for reasonable configuration parameters to construct a compressed
multi-stage neural network model, thus taking full advantage of the
superiority of different approximate strategies to achieve efficient
model optimization. The optimization techniques, including pruning
and staging, have been studied individually in the past; the key novelty
of our work is to provide an effective and efficient mechanism to
combine them, so as to optimize the neural network performance with
a reasonable configuration, for a given task and a platform.
The main contributions of this paper are as follows:
• We present a novel neural network optimization framework,
namely CoAxNN, which effectively combines staging-based and
pruning-based approximate strategies, thereby improving actual
performance while meeting accuracy requirements, for efficient
on-device model inference.
• According to the principles of staging-based and pruning-based
approximate strategies, our framework constructs the design
space, and automatically searches for reasonable configuration
parameters, including the number of stages, the position of stages,
the threshold of stages, and the pruning rate, so as to make
full use of the advantages of both to achieve efficient model
optimization.
• We validate the effectiveness of CoAxNN by optimizing state-of-
the-art deep neural networks on a commercial edge device, Jetson
AGX Orin, in terms of prediction accuracy, execution latency, and
energy consumption, and experimental results show that CoAxNN
can significantly improve the performance of model inference
with trivial accuracy loss.
The rest of the paper is organized as follows. The background and
motivation are introduced in Section 2. The details of our optimization
framework are described in Section 3. The experimental evaluation
is conducted in Section 4. A discussion is given in Section 5. The
conclusion is presented in Section 6.
2. Background and motivation
2.1. Pruning-based approximation
Neural network pruning, one of the most representative model com-
pression techniques, approximates the original neural network model
by reducing redundant neurons or connections that contribute less
to model performance. Most previous works on pruning-based
approximation can be roughly divided into two categories: unstructured
pruning and structured pruning.
Prior works on weight pruning [4,5] achieve high non-structured
sparsity of pruned models by removing single parameters in a fil-
ter. Guo et al. [4] and Han et al. [5] used magnitude-based pruning
methods, which eliminate weights with the smallest magnitude. Guo
et al. [4] proposed dynamic network surgery to reduce the network
complexity by making on-the-fly connection pruning. Han et al. [5]
pruned low-weight connections to reduce the storage and computation
demands by an order of magnitude. Some pruning research groups
utilize first-order or second-order derivatives of the loss function with
respect to the weights [6,7]. Hassibi et al. [6] proposed Optimal Brain
Damage (OBD), which uses all second-order derivatives of the loss
function to prune single non-essential weights. Optimal Brain Surgeon
(OBS) [7] refined the OBD method by considering the case where the
Hessian matrix is non-diagonal. These approaches show
attractive theoretical performance improvements but are difficult to
support with existing software and hardware. Unstructured sparse
models require specific matrix multiplication calculations and stor-
age formats, which can hardly leverage existing high-efficiency BLAS
libraries.
Unlike the early efforts on unstructured pruning that may cause
irregular calculation patterns, structured pruning reduces redundant
computations on unimportant filters or channels to produce a struc-
tured sparse model. The corresponding feature maps can be deleted as
the filters are pruned. Therefore, much recent work has focused on filter
pruning methods. SFP [2] and ASFP [8] dynamically pruned the filters
in a soft manner, which zeroizes the unimportant filters and keeps
updating them in the training stage. Li et al. [9] presented a fusion-
catalyzed filter pruning approach, which simultaneously optimizes the
parametric and non-parametric operators. Luo et al. [10] pruned filters
based on statistics information computed from its next layer. The filters
of different layers may have different influences on model prediction. Li
et al. [11] proposed a flexible-rate filter pruning approach, FlexPruner,
which automatically selects the number of filters to be pruned for
each layer. Plochaet et al. [12] introduced a hardware-aware pruning
method with the goal of decreasing the inference time for FPGA deep
learning accelerators, adaptively pruning the neural network based on
the size of the systolic array used to calculate the convolutions. To
preserve the robustness at a high sparsity ratio in structured pruning,
Zhuang et al. [13] proposed an effective filter importance criterion to
evaluate the importance of filters by estimating their contribution to
the adversarial training loss. Besides, some researchers have found
the value of network pruning in discovering the network architecture
[14,15]. Liu et al. [14] demonstrated that in some cases pruning can
be useful as an architecture search paradigm. Li et al. [15] proposed
a random architecture search to find a good architecture given a
pre-defined model by channel pruning. Li et al. [16] proposed an end-
to-end channel pruning method to search out the desired sub-network
automatically and efficiently, which learns per-layer sparsity through
depth-wise binary convolution. Ding et al. [17] presented a neural
architecture search with pruning method, which derives the most po-
tent model by removing trivial and redundant edges from the whole
neural network topology. The structured sparse model can be perfectly
supported by existing libraries to achieve a realistic acceleration. In
this paper, we adopt filter pruning to realize practical performance
improvement for neural network models.
2.2. Staging-based approximation
Prior studies [18,19] found that the difficulty of classifying an image
in real-world scenarios is diverse. The easy samples can be classified
with low effort, and difficult samples consume more computation ef-
forts for prediction. Staging-based approximate strategies, such as early
exiting [18] and layer skipping [20], emerge as a prominent
technique for separating the classification of easy and hard inputs.
The original neural network uses a fixed computation process for the
prediction of all samples. Staging-based approximate strategies perform
adaptive computing for samples according to conditions at run-time.
Teerapittayanon et al. [18] demonstrated that the deep neural network
with additional side branch classifiers can both improve accuracy and
significantly reduce the inference time of the network. Panda et al. [19]
proposed Conditional Deep Learning cascading a linear network for
each convolutional layer and monitoring the output of the linear net-
work to decide whether classification can be terminated at the current
stage or not. Fang et al. [21] presented an input-adaptive framework
for video analytics, which adopts an architecture search-based scheme
to find the optimal architecture for each early exit branch. Wang
et al. [22] designed dynamic layer-skipping mechanisms, which sup-
press unnecessary costs for easy samples and halt inference for all
samples to meet resource constraints for the inference of more compli-
cated CNN backbones. Figurnov et al. [23] studied early termination in
each residual unit of ResNets. Farhadi et al. [23] implemented an early-
exiting method on the FPGA platform using partial reconfiguration to
reduce the amount of needed computation. Jayakodi et al. [24] used
Bayesian Optimization to configure the early exit neural networks to
trade off accuracy and energy. To reduce unnecessary intermediate
calculations in the inference process of Branchynet, Liang et al. [25]
directly determined the exit position of the sample in the multi-branch
network according to the difficulty of the sample without intermediate
trial errors. Jo et al. [26] proposed a low-cost early exit network, which
significantly improves energy efficiencies by reducing the parameters
used in inference with efficient branch structures.
In this paper, we
achieve a multi-stage approximate model by early exiting to accelerate
model inference for input samples in real-world scenarios.
2.3. Design space exploration
Design space exploration (DSE) is a systematic analysis method,
which searches for the optimal solutions in a large design space accord-
ing to the requirements. For example, in the staging-based approximate
strategy, deciding whether or not an exit branch should be inserted at
some position in the middle of the neural network model, and how
the thresholds for each exit point should be set can be seen as a DSE
problem. Panda et al. [19] and Teerapittayanon et al. [18] empirically
set the location and threshold for each exit in the conditional neural
network model. Jayakodi et al. [24] found the best thresholds via
Bayesian Optimization for the specified trade-off between accuracy
and energy consumption of inference. Park et al. [27] systematically
determined the locations and thresholds of exit branches by genetic
algorithm. Park et al. [28] integrated the once-for-all technique and
BPNet, which consider architectures of base network and exit branches
simultaneously in the same search process. Besides, the fine-grained fil-
ter pruning, that is, assigning reasonable pruning rates to different layers,
can also be considered as a classic DSE problem. Li et al. [11] proposed
a flexible-rate filter pruning method, which selects the filters to be
pruned with a greedy-based strategy. He et al. [29] sampled design
space using reinforcement learning, which performs customizing prun-
ing for each layer, thus improving model compression. Qian et al. [30]
proposed a hierarchical threshold pruning method, which considers
the filter importance within relatively redundant layers instead of all
layers, achieving layerwise pruning for a better network structure. In
this paper, we regard the configuration parameters of staging-based
and pruning-based approximate strategies as the whole design space
and employ a genetic algorithm (GA)-based DSE to automatically find
the (near-)optimal configuration to effectively combine them, achieving
efficient on-device inference. In the future, we will consider setting
reasonable pruning rates for different layers.
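To make the size of this joint design space concrete, the following sketch (not from the paper; the number of candidate exit positions and the threshold and pruning-rate grids are assumed for illustration) enumerates a coarse configuration space combining stage availability, a shared confidence threshold, and a pruning rate. Even this coarse grid already contains hundreds of design points, and per-stage thresholds enlarge it exponentially, which is why an automated DSE such as a genetic algorithm is preferable to exhaustive search.

from itertools import product

# Assumed, illustrative granularities; the real design space couples stage
# availability, per-stage thresholds, and the pruning rate.
exit_positions = 5
stage_availability = list(product([0, 1], repeat=exit_positions))  # exits on/off
thresholds = [0.05, 0.08, 0.1, 0.2, 0.3]        # shared confidence-threshold grid
pruning_rates = [0.0, 0.1, 0.2, 0.3, 0.4]       # global pruning-rate grid

coarse_points = len(stage_availability) * len(thresholds) * len(pruning_rates)
print(coarse_points)  # 32 * 5 * 5 = 800 points before per-stage thresholds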
2.4. Motivation
The pruning-based approximate strategy focuses on compressing the
model, which reduces the computation costs by deleting unimportant
parameters in the model, so how to set the pruning rate needs to be
considered. The staging-based approximate strategy concentrates on
improving the execution speed of the model, which allows the inference
of most simple samples to terminate with a good prediction in the
earlier stage by attaching multiple exits in the original model. How
to place the exits and how to set a threshold for each exit should
be considered for the design of a staging-based approximate strategy.
Combining different approximate strategies will involve more configu-
ration parameters and the approximate strategies may affect each other,
which potentially influences the effect of the model optimization.
Fig. 1. The optimization effect for ResNet-56 using different configuration parameters
under the specified accuracy requirement.
Fig. 1 shows the optimization effect of the ResNet-56 using different
configuration parameters under the specified requirements of accuracy
on the CIFAR-10 dataset, where the triples (𝑥, 𝑦, 𝑧) represent the number
of stages, stage threshold, and pruning rate, respectively. Fig. 1(a), (b),
and (c) show the computational costs (normalized to the computational
cost of the baseline model) of various optimization configurations
when the accuracy is 98.1%, 98.7%, and 98.8% (normalized to the
accuracy of the baseline model). In practice, a certain error can be
allowed in model accuracy (±0.001), for example, 98.09%, and 98.12%
both meet the requirement of 98.1%. The relationship between the
number of stages and the computational cost is not regular; it is
affected by the stage threshold and pruning rate. For example,
in Fig. 1(c), the computational cost of (3, 0.08, 0), with more stages, is
larger than that of (2, 0.1, 0.1), while the computational cost of (2, 0.1, 0.1), with
fewer stages, is larger than that of (3, 0.2, 0.1). Besides, affected by the staging-based
optimization, the computational costs of the optimized model at a high
pruning rate may be larger than that at low pruning rates, for example,
in Fig. 1(b), the configuration of (2,0.08,0.3) with a pruning rate of 0.3
has more computational costs than (3,0.2,0.2) with a pruning rate of
0.2. In Fig. 1(a), we can observe from the partial experimental results
that at the accuracy requirement of 98.1%, the computation of the
optimized models using three stages is less than that of the model using
two stages. But this law does not apply to the optimization effect of
other accuracy requirements such as 98.7% and 98.8%. It is observed
that the optimization effects of different configuration parameters are
distinct and irregular under the specified accuracy requirement, and
thus it is difficult to find an optimal model. This example shows that
it is challenging to combine different approximate strategies to achieve
efficient optimization for neural network models.
Fig. 2. The optimization effect of staging-based strategy, pruning-based strategy, and
CoAxNN for ResNet-56 on the CIFAR-10.
In this paper, for a specified accuracy requirement, we focus on
combining the principles of different approximate strategies to con-
struct a design space and automatically search for reasonable con-
figuration parameters, giving full play to the advantages of different
approximate strategies to achieve efficient optimization for neural net-
work models. As shown in Fig. 2, at the accuracy requirement of
99.6%, the staging-based optimization strategy uses two stages with the
threshold set to 0.07 for each stage, and the normalized
computational cost is 0.89. For the pruning-based optimization, the
pruning rate is set to 0.1, and the normalized computational cost is
0.89. CoAxNN effectively combines pruning-based and staging-based
strategies, whose computational cost is 0.64, greatly improving the
computational performance.
3. Methodology
3.1. Overview
In this paper, we propose an efficient optimization framework for
neural network models, CoAxNN, which automatically searches for
reasonable configuration parameters through a GA-based DSE. CoAxNN
effectively combines staging-based with pruning-based approximate
strategies to make full use of the superiority of both, thereby improving
the actual performance while meeting the accuracy requirements for
neural network models.
The overview of the CoAxNN is shown in Fig. 3. First, for the
original deep neural network model, CoAxNN performs staging-based
and pruning-based approximate strategies according to the genes of
the chromosome for each individual, which generates a compressed
multi-stage model. According to the availability of stages in the gene,
CoAxNN attaches exit branches to the original model to build a multi-
stage conditional activation model. According to the threshold of each
stage, CoAxNN predicts input samples of distinct difficulties with
multiple stages of different computational complexities, in an
entropy-aware activation manner. The obtained multi-stage model is
compressed by removing unimportant filters, thereby further reducing
computational costs. Next, CoAxNN evaluates the fitness of the cor-
responding individual according to the accuracy and latency of the
compressed multi-stage model and sorts the individuals according to
their fitness. Then, the chromosome pool is updated, generating the
next generation of individuals. After the evolution of multiple gener-
ations, which repeat the above steps, we can obtain several individuals
that have optimal performance.
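The following minimal sketch summarizes this GA-based search loop. The chromosome layout, population size, fitness rule, and the build_candidate/measure helpers are illustrative assumptions standing in for CoAxNN's actual model construction and on-device profiling, not the framework's implementation.

import random

NUM_EXITS = 3  # assumed number of candidate exit positions

def random_chromosome():
    # Genes: stage availability, per-stage thresholds, and a pruning rate.
    return {
        "stage_on": [random.randint(0, 1) for _ in range(NUM_EXITS)],
        "thresholds": [round(random.uniform(0.05, 0.3), 2) for _ in range(NUM_EXITS)],
        "pruning_rate": random.choice([0.0, 0.1, 0.2, 0.3]),
    }

def build_candidate(genes):
    # Hypothetical stand-in: would attach the enabled exit branches and prune filters.
    return genes

def measure(candidate):
    # Hypothetical stand-in: would report validation accuracy and on-device latency.
    active = sum(candidate["stage_on"])
    latency = max(1.0 - 0.08 * active - 0.3 * candidate["pruning_rate"], 0.1)
    accuracy = 0.99 - 0.02 * candidate["pruning_rate"]
    return accuracy, latency

def fitness(genes, accuracy_target=0.97):
    accuracy, latency = measure(build_candidate(genes))
    return -1.0 if accuracy < accuracy_target else 1.0 / latency

def evolve(pop_size=20, generations=10, mutation_rate=0.2):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]        # keep the fittest individuals
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in a}  # uniform crossover
            if random.random() < mutation_rate:                  # mutate one gene
                child["pruning_rate"] = random.choice([0.0, 0.1, 0.2, 0.3])
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best_configuration = evolve()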
3.2. Staging-based approximate optimization
In general, executing a neural network model is a one-staged ap-
proach, which processes all the inputs in the same manner, i.e., starting
from the input operator and performing it operator by operator until
the final exit operator. Prior studies [19] found that classification
difficulty varies widely across inputs in real-world scenarios. Different
computational complexities need to be considered when predicting in-
puts. Most of the input samples can be correctly classified by employing
a part of a neural network, without the computation effort of the
entire neural network. Early exiting strategy often comes into play,
which allows simple inputs to exit early with a good prediction by the
addition of multiple exit points. By leveraging the early exiting strategy,
CoAxNN achieves a staging-based approximation to give an early exit-
ing opportunity for simple inputs. We denote a neural network model
as $\mathcal{M} = \{f_1, f_2, \ldots, f_m\}$ which consists of $m$ operators. In CoAxNN, a
multi-stage model, $\mathcal{M}^*$, can be formalized as follows:
$$\mathcal{M}^* = \bigcup_{i=1}^{\tau} \mathcal{S}_i \tag{1}$$
where $\tau$ is the number of stages.
The $\mathcal{S}_i$ is an approximate model with the staging-based strategy, which can be formalized as:
$$\mathcal{S}_i = \begin{cases} \mathcal{P}_i + \mathcal{B}_i + \mathcal{C}_i, & 1 \leq i < \tau \\ \mathcal{M}, & i = \tau \end{cases} \tag{2}$$
where $\mathcal{P}_i = \{f_1, f_2, \ldots, f_{p_i}\}$ represents a part of the original neural network with $p_i$ operators, $\mathcal{B}_i = \{f^*_1, f^*_2, \ldots, f^*_{b_i}\}$ represents an additional exit branch with $b_i$ operators, and $\mathcal{C}_i = \{c_i, \varepsilon_i\}$ represents an exit checker, containing a threshold $\varepsilon_i$ and a conditional activation operator $c_i$ using threshold $\varepsilon_i$. Especially, $\mathcal{S}_\tau = \mathcal{M}$ denotes the original (main) neural network model.
It is non-trivial to design a staging-based approximate strategy for adaptive conditional inference of a multi-stage model, and the following factors need to be considered:
• Number of $\mathcal{S}_i$. A multi-stage model with arbitrary exits can be built by stage availability. However, fewer exits cannot cover the diverse difficulty of classification of input samples, whereas too many exits increase the latency of hard samples that do not exit early.
• Selection of Attached Position ($p_i$) for $\mathcal{B}_i$. The exits at an earlier position cannot provide satisfactory accuracy, while redundant computation may be involved at a later exit. Besides, attached exit branches may also interfere with a variety of computational graph optimization methods, such as operator fusion and memory reuse, provided by the deep learning frameworks, increasing operation counts, data movement, and other system overheads.
• Confidence Threshold ($\varepsilon_i$) of $\mathcal{S}_i$. The confidence threshold is used to determine whether the prediction result of stage $\mathcal{S}_i$ has sufficient confidence. With a higher threshold, complex samples may finish predictions at the previous exits with lower accuracy, while with a lower threshold, simple samples may use more complex computations to complete inference because they cannot exit from the previous classifiers, incurring additional computational overheads.
• Structure Design for $\mathcal{B}_i$. The structure of each exit branch ($\mathcal{B}_i$) is not identical. Each $\mathcal{B}_i$ consists of several operators ($\{f^*_1, f^*_2, \ldots, f^*_{b_i-1}\}$) used for feature extraction and a linear classifier $f^*_{b_i}$. Feature extraction operators receive the intermediate feature map from $f_{p_i}$ and extract more high-level features in the form required by a subsequent linear classifier. The configuration and complexity of the intermediate feature maps for different depths of the main neural networks are varying, making the design of $\mathcal{B}_i$ arduous. The $f^*_{b_i}$ operator is used to produce classification results based on the output of $f^*_{b_i-1}$, and the number of input feature maps for $f^*_{b_i}$ is different at each $\mathcal{B}_i$.
To effectively utilize the early-exiting method to build an approxi-
mate multi-stage model, our approach carefully designs principles for
each module.
• Setting of Number ($\tau$) and Attached Position ($p_i$) of $\mathcal{B}_i$. The number ($\tau$) and the position ($p_i$) of the exit branches ($\mathcal{B}_i$) are two
factors that will affect each other. Some unnecessary exits may
be inserted, having little improvement in accuracy but leading to
non-negligible computational overheads, when the number of exit
branches is large. When the position of the exit branch is not
reasonable, and cannot distinguish the difficulty of the sample, it
is difficult to increase the number of exit branches to reduce the
computational cost while meeting the model accuracy requirement.
To address this problem, CoAxNN puts the availability of each stage
into the design space of the GA, and each available stage corresponds
to a new exit branch. The availability of the stage can control the
number and position of exit branches at the same time. Besides,
to introduce fewer new model structures and preserve the existing
graph optimizations, CoAxNN chooses to attach exit branches $\mathcal{B}_i$ at
the end of the group of building blocks. It is noted that CoAxNN does
not attach the exit branch after the last group of building blocks, as
there is already an existing original exit for the original backbone.
Fig. 3. Overview of CoAxNN.
• Structure Design for $\mathcal{B}_i$. We introduce a feature extractor and
a linear classifier for each exit branch $\mathcal{B}_i$. The structure of the
feature extractor is designed with the building block as granularity.
This design not only retains the original neural network structure
but also provides more opportunities for system-level optimizations.
Generally, operators for feature extraction also contain non-linear
activation operators such as rectified linear units, and normalization
operators such as batch normalization. Besides, prior studies [24]
revealed that the output feature maps of operators at shallow depths
of a neural network have a relatively large height and width, which
results in a large number of input feature maps being passed to the
linear classifier of former exits, thus leading to a long latency for
easy samples that exit early. As such, in CoAxNN, we add an extra
pooling operator after the last feature extraction operator of shallow
$\mathcal{B}_i$.
• Confidence Measure in $\mathcal{C}_i$. The $\mathcal{C}_i$ takes a threshold checking step, which determines whether an input returns from the current exit or continues to the next exit according to the prediction result of $\mathcal{S}_i$. A reliable $\mathcal{C}_i$ should have the ability to identify whether the classification results are sufficiently confident. There are various methods [31], including maximum probability, entropy, and margin, for the design of $\mathcal{C}_i$. Prior work [24] has demonstrated the performance of the aforementioned three confidence types is almost identical. CoAxNN chooses to use the entropy of predicted probability as the entropy-aware activation operator ($c_i$) to evaluate the confidence of the prediction result for the input sample ($x$) of the $i$th stage classifier, as follows:
$$\text{entropy}(\hat{y}_i) = \sum_{c=1}^{C} \hat{y}_i(c) \log \hat{y}_i(c) \tag{3}$$
where $\hat{y}_i$ is the probability distribution of the output of the linear classifier $f^*_{b_i}$ on different classification labels, calculated by the soft-
max operator, and 𝐶 is the number of classes. An entropy threshold
𝜀𝑖 is used to decide whether an input returns the prediction of the
current exit or activates the latter operators. A higher confidence
value implies that the input sample that arrived at the current exit is
hard and needs to be processed by a more complex stage to complete
accurate classification.
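To make the exit rule concrete, the following NumPy sketch (our own illustration; the function names and the example threshold are not from the CoAxNN implementation) computes the entropy of Eq. (3) from a stage's logits and compares it against the stage threshold 𝜀𝑖:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def entropy_score(probs, eps=1e-12):
    # Entropy of the stage's predicted distribution (Eq. (3));
    # a small eps avoids log(0). Low entropy means high confidence.
    return float(-np.sum(probs * np.log(probs + eps)))

def should_exit(logits, threshold):
    # A sample returns from the current exit when its entropy
    # falls below the stage threshold (epsilon_i).
    return entropy_score(softmax(logits)) < threshold

# A confident prediction exits early; a near-uniform one does not.
print(should_exit(np.array([8.0, 0.5, 0.3, 0.1]), threshold=0.3))  # True
print(should_exit(np.array([1.0, 0.9, 1.1, 1.0]), threshold=0.3))  # False
```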
3.3. Pruning-based approximate optimization
In addition to the staging-based approximate strategy that pro-
vides adaptive computing based on conditional activation at runtime,
CoAxNN also integrates a pruning-based approximate strategy to com-
press model size. Neural network pruning techniques have been widely studied and can be broadly categorized into structured and unstructured pruning. Structured pruning such as filter pruning has higher computational efficiency than unstructured pruning [32]. In particular, CoAxNN employs filter pruning, which not only deletes
redundant computations of unimportant filters but also leads to the
removal of corresponding feature maps, providing realistic performance
improvements. In CoAxNN, we utilize the filter pruning method to
compress the multi-stage model and quantify the importance of each
filter in a convolutional operator based on the 𝓁2-norm:
‖𝐹𝑟‖₂ = √( ∑_{𝑡=1}^{𝑘} ∑_{𝑖=1}^{𝑚} ∑_{𝑗=1}^{𝑛} 𝑤²_{𝑡,𝑖,𝑗} )    (4)
where 𝐹𝑟 indicates the 𝑟th filter in a convolutional operator, 𝑤𝑡,𝑖,𝑗 denotes an element of 𝐹𝑟 that resides in the 𝑖th row and 𝑗th column
in the 𝑡th channel, 𝑘 denotes the input channels, 𝑚 denotes the height
of filters, and 𝑛 denotes the width of filters. The filters with smaller 𝓁2-
norm will be given higher priority to be pruned than those of higher
𝓁2-norm. To keep the model capacity and minimize the loss of accuracy
as much as possible, we utilize a dynamic pruning scheme [2] for
staging-based approximate CNNs, which zeroes the pruned filters and
keeps updating them in the re-training process.
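The following NumPy sketch (our own naming, not the CoAxNN code) shows how Eq. (4) reduces each filter to an 𝓁2-norm score and how the dynamic (soft) pruning step zeroizes the lowest-ranked filters while leaving them trainable in later epochs:

```python
import numpy as np

def filter_l2_norms(conv_weight):
    # conv_weight has shape (num_filters, k, m, n); Eq. (4) collapses each
    # filter's k x m x n elements into a single l2-norm score.
    flat = conv_weight.reshape(conv_weight.shape[0], -1)
    return np.sqrt((flat ** 2).sum(axis=1))

def soft_prune(conv_weight, pruning_rate):
    # Zeroize the floor(t * pruning_rate) filters with the smallest l2-norm.
    # The zeroed filters keep receiving updates during re-training,
    # which is the dynamic pruning scheme adopted from [2].
    t = conv_weight.shape[0]
    num_pruned = int(t * pruning_rate)
    pruned = conv_weight.copy()
    if num_pruned > 0:
        order = np.argsort(filter_l2_norms(conv_weight))
        pruned[order[:num_pruned]] = 0.0
    return pruned

# Toy example: a 16-filter 3x3 convolution with 8 input channels, 30% pruned.
w = soft_prune(np.random.randn(16, 8, 3, 3), pruning_rate=0.3)
print(int((filter_l2_norms(w) == 0).sum()))  # 4 filters are zeroed
```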
3.4. Training of CoAxNN
Joint training trains all classifiers in a neural network model at
the same time, which is widely used in the training process of neural
network models with exit branches [18,27]. It defines a loss function
for each classifier and minimizes the weighted sum of loss functions
for all classifiers during training. Therefore, each classifier provides
regularization for others to alleviate the overfitting of the model.
CoAxNN utilizes joint training optimization to train the backbone
neural network and exit branches at the same time and minimize the
weighted sum of the cross-entropy loss functions of all stages, denoted
as follows:
ℒ_joint = ∑_{𝑖=1}^{𝜏} 𝜆𝑖 · ℒ_CE(𝑦, ȳ𝑖)    (5)
where 𝜆𝑖 represents the weight of the loss function of the 𝑖th stage,
𝑦 is the real classification of 𝑥 which is shared by all stages, ̄𝑦𝑖 is the
output of the linear classifier 𝑓*_{𝑏𝑖} of the 𝑖th stage, and the cross-entropy loss function ℒ_CE is calculated as follows:
ℒ_CE(𝑦, ȳ𝑖) = −∑_{𝑐=1}^{𝐶} 𝑦(𝑐) log( e^{ȳ𝑖(𝑐)} / ∑_{𝑗=1}^{𝐶} e^{ȳ𝑖(𝑗)} )    (6)
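A minimal NumPy sketch of Eqs. (5) and (6) for a single sample is shown below (the names and toy logits are ours; the training algorithm below additionally normalizes the weighted sum by sum(𝜆)):

```python
import numpy as np

def cross_entropy(y_onehot, logits, eps=1e-12):
    # Eq. (6): softmax over the stage's raw outputs, then cross-entropy
    # against the ground-truth label shared by all stages.
    z = logits - np.max(logits)
    probs = np.exp(z) / np.exp(z).sum()
    return float(-np.sum(y_onehot * np.log(probs + eps)))

def joint_loss(y_onehot, stage_logits, stage_weights):
    # Eq. (5): weighted sum of the per-stage cross-entropy losses.
    return sum(w * cross_entropy(y_onehot, logits)
               for w, logits in zip(stage_weights, stage_logits))

# Three stages (tau = 3) with the default weight of 1.0 per stage.
y = np.array([0.0, 1.0, 0.0])
stage_logits = [np.array([0.2, 1.5, 0.1]),
                np.array([0.1, 2.8, 0.0]),
                np.array([0.0, 4.0, -0.5])]
print(joint_loss(y, stage_logits, stage_weights=[1.0, 1.0, 1.0]))
```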
The training process of CoAxNN is summarized in Algorithm 1. It is given the training dataset 𝒟, the number of training epochs epoch_max, the batch size 𝜌, the original deep neural network model 𝒩, the number of stages 𝜏, the weights 𝜆 for the loss functions of all stages, and the chromosome pool 𝑃. First, based on the genes on the chromosome, CoAxNN performs the staging-based optimization strategy, which approximates the original neural network model as a multi-stage conditional activation model by attaching exit branches (Lines 1–11). Then, the generated multi-stage model is initialized randomly (Lines 12–13). Next, the model is compressed and tuned according to the training data and the pruning-rate gene (𝑃[𝑝].pruning_rate) on the chromosome for epoch_max epochs (Lines 14–34). In each epoch, CoAxNN calculates the loss function according to Eq. (5) and updates the weights by the traditional backpropagation algorithm (Lines 15–26). Besides, for each convolutional operator in the approximate multi-stage model, CoAxNN obtains the number of filters (𝑡) and calculates the 𝓁2-norm of each filter according to Eq. (4); the dynamic pruning scheme then zeroizes the ⌊𝑡 × 𝑃[𝑝].pruning_rate⌋ filters with the lowest 𝓁2-norm (Lines 27–33). A pruned filter can be updated again whenever it is found to be important, thus maintaining the learning ability of the model. In the pruning step of each epoch, CoAxNN re-ranks the importance of the filters of each convolutional operator and re-selects the filters to be pruned. Finally, the trained models 𝒩′ are obtained (Line 36).
Algorithm 1: CoAxNN training
Input: training data 𝒟; training epochs epoch_max; batch size 𝜌; original model backbone 𝒩; the number of stages 𝜏; the weights for loss functions 𝜆; chromosomes 𝑃
Output: trained models 𝒩′
1  for p = 1 → P.size() do
       // Generate model structure
2      𝒩′[p] = 𝒩;
3      for i = 1 → 𝜏 − 1 do
4          if P[p][i] is available then
5              Construct the feature extractor of stage i from 𝒩;
6              Construct the linear classifier 𝑓*_{𝑏𝑖} of stage i;
7              Construct the entropy-aware activation operator 𝑐𝑖 according to P[p][i].threshold;
8              Exit branch i = feature extractor + 𝑓*_{𝑏𝑖} + 𝑐𝑖;
9              𝒩′[p] = 𝒩′[p] ∪ exit branch i;
10         end
11     end
       // Tune and prune model parameters
12     𝒩′[p] = LoadModel(𝒩′[p], initial_weights);
13     train_batches = make_batch(𝒟, 𝜌);
14     for epoch = 1 → epoch_max do
15         foreach (input, target) ∈ train_batches do
16             output = 𝒩′[p].forward(input);
17             weighted_loss ← 0;
18             for i = 1 → 𝜏 do
19                 if P[p][i] is available then
20                     loss = CrossEntropy(output[i], target);
21                     weighted_loss += 𝜆[i] × loss;
22                 end
23             end
24             weighted_loss = weighted_loss / sum(𝜆);
25             𝒩′[p].backward(weighted_loss);
26         end
27         foreach f ∈ 𝒩′[p] do
28             if f.type == CONV then
29                 t ← the number of filters of f;
30                 Calculate the 𝓁2-norm of each filter of f;
31                 Zeroize the ⌊t × P[p].pruning_rate⌋ filters with the lowest 𝓁2-norm;
32             end
33         end
34     end
35 end
36 return 𝒩′;
3.5. GA-based design space exploration
To effectively combine the staging-based with the pruning-based ap-
proximate strategies, the design space of CoAxNN includes the number
of stages, the position of the stage, the threshold of the stage, and the
pruning rate, which is a very large search space. When the number of
stages is 𝜏, the search space for determining which stages are available is 2^𝜏, the search space for thresholds is 𝑄^𝜏, where 𝑄 is the number
of candidate thresholds, and the search space for pruning rate is 𝑅,
which indicates the number of candidate pruning rates. The parameter
configurations are searched independently, making the search space as
large as 2^𝜏 × 𝑄^𝜏 × 𝑅. It is laborious to explore the large parameter
space by brute force search. CoAxNN adopts the genetic algorithm
for the design space exploration. The genetic algorithm [33] is inspired by
biological evolution based on Charles Darwin’s theory of natural selec-
tion, which is often used to find the (near-)optimal solution in a large
search space. In CoAxNN, the number of genes on each chromosome is
2 × (𝜏 − 1) + 1. For the first 𝜏 − 1 stages, CoAxNN uses two genes, one for
whether the stage is available, and the other for the threshold of the
stage. In addition, CoAxNN also uses a gene to represent the pruning
ratio. The fitness of a single individual is represented by a 2-tuple
(𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦, 𝑙𝑎𝑡𝑒𝑛𝑐𝑦). GA-based DSE aims to increase accuracy and reduce
latency, finding the (near-)optimal solutions for model performance.
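The chromosome layout described above can be sketched as follows (Python; the candidate threshold and pruning-rate grids are placeholders, not the values used in the paper):

```python
import random

THRESHOLDS = [0.05, 0.1, 0.2, 0.3]      # Q candidate entropy thresholds (illustrative)
PRUNING_RATES = [0.0, 0.1, 0.2, 0.3]    # R candidate pruning rates (illustrative)

def random_chromosome(tau):
    # Two genes per stage for the first tau-1 stages (availability, threshold)
    # plus one gene for the pruning rate: 2 x (tau - 1) + 1 genes in total.
    genes = []
    for _ in range(tau - 1):
        genes.append(random.choice([0, 1]))       # stage availability
        genes.append(random.choice(THRESHOLDS))   # stage threshold
    genes.append(random.choice(PRUNING_RATES))    # pruning rate
    return genes

def search_space_size(tau, q, r):
    # 2^tau availability patterns x Q^tau threshold choices x R pruning rates.
    return (2 ** tau) * (q ** tau) * r

print(random_chromosome(tau=4))
print(search_space_size(tau=4, q=len(THRESHOLDS), r=len(PRUNING_RATES)))
```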
Algorithm 2 shows how accuracy and latency are evaluated for individuals. It is given the test dataset 𝒟, the number of stages 𝜏, and the chromosome set 𝑃. For each individual, CoAxNN obtains the model 𝑛𝑒𝑡 configured with the corresponding genes (Line 4). Then, the test dataset is predicted by the model, and the prediction result 𝑜𝑢𝑡𝑝𝑢𝑡 is obtained (Line 6). For each input sample, CoAxNN traverses all available stages and calculates the confidence 𝑒 of the corresponding output
at this stage according to Eq. (3) (Lines 7–12). If the confidence (𝑒)
is less than the confidence threshold (𝜀𝑖) of this stage, the prediction
is ended, and the accuracy of the sample at this stage is added to
the accuracy score (𝛿[𝑝]) of the current individual (𝑝) (Lines 13–16).
The accuracy function returns 1 if the prediction is correct, and 0
otherwise. When the sample does not exit from the first 𝜏 − 1 stages,
it must exit from the 𝜏th stage. Therefore, in the 𝜏th stage, the accuracy is directly added to the accuracy score (𝛿[𝑝]) (Lines 17–19).
The evaluation of latency is similar to that of accuracy. CoAxNN evaluates the latency score (𝜇[𝑝]) in a similar manner to the accuracy score (𝛿[𝑝]), accumulating the latency of the backbone neural network and exit branches until the end of the prediction (Lines 8–10). For the latency,
we test the original network with all possible exit branches attached on
the target edge devices. The execution time of all operators is recorded.
Finally, the average accuracy score (𝛿) and average latency score (𝜇) for
all individuals are obtained (Lines 23–26).
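A simplified Python sketch of this per-configuration scoring is shown below (our own illustration: per-stage entropies, correctness flags, and cumulative latencies are assumed to be precomputed from the profiled operators):

```python
def score_configuration(samples, thresholds, stage_latency):
    # Mirrors the spirit of Algorithm 2: each sample walks through the stages,
    # exits at the first stage whose entropy is below the stage threshold
    # (the last stage always exits), and the running accuracy and latency
    # sums are averaged over the test set.
    acc_score, lat_score = 0.0, 0.0
    tau = len(stage_latency)
    for entropies, correct in samples:
        for i in range(tau):
            last_stage = (i == tau - 1)
            if last_stage or entropies[i] < thresholds[i]:
                acc_score += 1.0 if correct[i] else 0.0
                lat_score += stage_latency[i]  # latency accumulated up to stage i
                break
    n = len(samples)
    return acc_score / n, lat_score / n

# Two toy samples on a three-stage model (all values are illustrative only).
samples = [([0.02, 0.01, 0.00], [True, True, True]),    # easy: exits at stage 1
           ([0.90, 0.50, 0.00], [False, False, True])]  # hard: reaches stage 3
print(score_configuration(samples, thresholds=[0.1, 0.1, None],
                          stage_latency=[2.0, 5.0, 9.0]))  # -> (1.0, 5.5)
```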
GA-based DSE obtains the (near-)optimal solutions with respect to the goals of accuracy and latency. Users choose the (near-)optimal solution among
"Finally, the average accuracy score (𝛿) and average latency score (𝜇) for
all individuals are obtained (Lines 23–26).
GA-based DSE gets the (near-)optimal solutions about the goal of
accuracy and latency. Users choose the (near-)optimal solution among
them according to their requirements. If the accuracy requirement is
high, the model with the least computation cost is selected under a triv-
ial accuracy loss. If a certain accuracy loss can be tolerated, the model with a much lower computation cost is selected. Finally, unavailable
branches and unimportant filters are removed to obtain an optimized
neural network model.
4. Evaluation
4.1. Experimental setting
Evaluation Platforms. We conduct optimization with PaddlePaddle,2 an open-sourced deep learning framework, for neural network models on a server with Intel Xeon CPUs and an Nvidia A100 GPU.
2 https://www.paddlepaddle.org.cn/en.
Algorithm 2: Performance Collection
Input: test data 𝒟; the number of stages 𝜏; chromosomes 𝑃
Output: accuracy for each configuration of neural network models 𝛿; latency for each configuration of neural network models 𝜇
1  for p = 1 → P.size() do
2      𝛿[p] ← 0;
3      𝜇[p] ← 0;
4      net = getModel(P[p]);
5      foreach (input, target) ∈ 𝒟 do
6          output = net.forward(input);
7          for i = 1 → 𝜏 do
8              if P[p][i] is available then
9                  𝜇[p] += computeLatency(exit branch i);
10                 𝜇[p] += computeLatency(⋃_{j=1}^{i} building-block group j);
11                 if i ≠ 𝜏 then
12                     e ← Compute entropy of output[i];
13                     if e < 𝜀𝑖 then
14                         𝛿[p] += accuracy(output[i], target);
15                         break;
16                     end
17                 else
18                     𝛿[p] += accuracy(output[i], target);
19                 end
20             end
21         end
22     end
23     𝛿[p] = 𝛿[p] / 𝒟.size();
24     𝜇[p] = 𝜇[p] / 𝒟.size();
25 end
26 return (𝛿, 𝜇);
We evaluate the realistic speedup and energy consumption of optimized models on
a representative intelligent edge platform, Jetson AGX Orin, integrated
with Ampere GPUs and Arm Cortex CPUs. For the genetic algorithm,
we adopted the OpenGA [34] and the NSGA-III [35].
Benchmark Datasets and Models. We demonstrate the effectiveness
of our proposed method on the CIFAR [36] dataset and the CINIC-
10 [37] dataset. The CIFAR dataset, which consists of 50,000 images for
training and 10,000 images for testing, contains two datasets: CIFAR-10
and CIFAR-100. The CIFAR-10 and CIFAR-100 datasets are categorized
into 10 and 100 classes, respectively. CINIC-10, consisting of 270,000 images, is split into three equal-sized train, validation, and test subsets
and is categorized into 10 classes. We adopt the state-of-the-art residual
neural network (ResNet) [1], which has less redundancy and is more
challenging to be compressed and accelerated than conventional model
structures, as model architectures. ResNet-20/32/56/110 models are
evaluated for the CIFAR-10 dataset, ResNet-56/110 models are evalu-
ated for the CIFAR-100 dataset and ResNet-18/50 models are evaluated
for the CINIC-10 dataset.
Hyper-parameters Setting. For staging-based approximation, we at-
tach exit branches after each residual block by default for building a
multi-stage model, and the weight of the loss function of each stage is
set to 1.0 by default. For pruning-based approximation, we follow the
same data augmentation strategies and scheduling settings as [1].
4.2. GA-based design space exploration
The GA-based DSE, taking increased accuracy and reduced latency as its goals, evaluates and sorts the solutions in the design space; after the survival-of-the-fittest process over multiple generations of individuals, the (near-)optimal solutions with respect to accuracy and latency are obtained.
Fig. 4 shows solutions, obtained by GA-based DSE, for ResNet-20,
ResNet-32, ResNet-56, and ResNet-110 on the CIFAR-10 dataset. The
𝑥-axis and 𝑦-axis represent the normalized top-1 accuracy and latency,
normalized to the top-1 accuracy and latency of the corresponding
baseline model, respectively. The data, marked by the green dot, are
the design points of the brute-force algorithm, and the data, marked
by the red triangle, are the (near-)optimal results found by CoAxNN.
The optimal solutions found by brute force are plotted by the boundary
of the green and red regions. It can be observed that the (near-
)optimal solutions searched by CoAxNN are close to this boundary,
which demonstrates the effectiveness of CoAxNN. Therefore, CoAxNN
can search for the model having the least computational cost in most
cases and meeting the accuracy requirements by GA-based DSE.
4.3. Performance of optimized models
We compare CoAxNN with state-of-the-art optimization methods
such as ASRFP [38]. For the sake of fairness, the accuracy numbers
are directly cited from their original papers. Different hyperparameters,
such as learning rate, are used by distinct optimization methods, so the
accuracy of the baseline model may be slightly different. Therefore,
both the accuracy of the baseline model and the optimized model
are shown in our experimental results, and ‘‘ACC. Drop’’ is used to represent the accuracy drop of the model after optimization. A smaller ‘‘ACC. Drop’’ value is better, and a negative number indicates that the optimized model has higher accuracy than the baseline model. This is because model optimization has a regularization effect,
which can reduce the overfitting of neural network models [2,18]. To
avoid interference, we run each experiment three times and report the
mean and standard deviation (mean ±std) of accuracy. Besides, we
employ FLOPs to quantify the computational costs of neural network
models.
4.3.1. ResNets on CIFAR-10
Table 1 shows the accuracy and FLOPs of ResNet-20/32/56/110 on
the CIFAR-10 dataset. CoAxNN reduces the computational complexity
of the original neural network model while meeting the accuracy
requirements. The optimized ResNet-20, ResNet-32, ResNet-56, and
ResNet-110 by CoAxNN achieve FLOPs reductions from 4.06E7, 6.89E7, 1.25E8, and 2.53E8 (refer to Table 2) to 3.00E7, 4.89E7, 8.06E7,
1.63E8, reduced by 25.94%, 28.93%, 35.76%, 35.57% in computa-
tional complexity, with the accuracy loss of 0.67%, 0.84%, 0.74%, and
0.63%, respectively. Moreover, CoAxNN can exploit less computation
to achieve top-1 accuracy that is comparable to other state-of-the-art
model optimization methods. For example, ResNet-20 optimized by
SFP demands the computational complexity of 2.43E7 FLOPs while
reducing the top-1 accuracy by 1.37%. The optimized ResNet-20 by
CoAxNN consumes a lower computation cost, i.e., 2.27E7 FLOPs, and drops by 1.39% in top-1 accuracy. CoAxNN reduces the computational cost of ResNet-32 to 3.44E7 FLOPs with a 1.58% accuracy drop. MIL spends more computations (4.70E7 FLOPs), reducing the top-1 accuracy by
1.59%. The compressed ResNet-56 by SFP achieves the FLOPs reduction
of 52.60% and the accuracy loss of 1.33%. CoAxNN decreases the
computational cost of ResNet-56 by 54.88% with a 1.22% accuracy
drop. The optimized ResNet-110 by GAL reduces FLOPs by 48.50% with
a 0.81% drop in top-1 accuracy. CoAxNN achieves a similar accuracy
loss (0.88%) while reducing the computational complexity by 62.09%.
For original neural network models, CoAxNN automatically searches
for a reasonable configuration to effectively optimize the computational
complexity while meeting the accuracy requirements. For the same ac-
curacy requirement, CoAxNN reduces more computations than existing
methods, achieving less resource consumption.
We also analyze the FLOPs and the percentage of predicted images
for different stages of optimized ResNet-20, ResNet-32, ResNet-56,
Fig. 4. The solutions with GA-based DSE on the CIFAR-10 dataset. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of
this article.)
Table 1
Performance of optimized neural network models on CIFAR-10 (see [39–43]).
Model       Method           Top-1 Acc. Baseline (%)   Top-1 Acc. Accelerated (%)   Top-1 Acc. Drop (%)   #FLOPs   FLOPs ↓ (%)
ResNet-20   MIL [39]         92.49   91.43           1.06   2.61E7   36.00
ResNet-20   SFP [2]          92.20   90.83           1.37   2.43E7   42.20
ResNet-20   FPGM [40]        92.20   91.09           1.11   2.43E7   42.20
ResNet-20   TAS [41]         92.88   90.97           1.91   2.19E7   46.20
ResNet-20   CoAxNN (0.67%)   92.68   92.01 (±0.43)   0.67   3.00E7   25.94
ResNet-20   CoAxNN (1.39%)   92.68   91.29 (±0.26)   1.39   2.27E7   44.02
ResNet-32   MIL [39]         92.33   90.74           1.59   4.70E7   31.20
ResNet-32   SFP [2]          92.63   90.08           2.55   4.03E7   41.50
ResNet-32   TAS [41]         93.89   91.48           2.41   4.08E7   41.00
ResNet-32   CoAxNN (0.84%)   93.56   92.72 (±0.13)   0.84   4.89E7   28.93
ResNet-32   CoAxNN (1.58%)   93.56   91.98 (±0.41)   1.58   3.44E7   49.98
ResNet-56   SFP [2]          93.59   92.26           1.33   5.94E7   52.60
ResNet-56   ASFP [8]         93.59   92.44           1.15   5.94E7   52.60
ResNet-56   CP [42]          92.8    90.9            1.90   –        50.00
ResNet-56   AMC [29]         92.8    91.9            0.90   6.29E7   50.00
ResNet-56   CoAxNN (0.74%)   94.15   93.41 (±0.05)   0.74   8.06E7   35.76
ResNet-56   CoAxNN (1.22%)   94.15   92.93 (±0.25)   1.22   5.66E7   54.88
ResNet-110  SFP [2]          93.68   92.90           0.78   1.21E8   52.30
ResNet-110  ASRFP [38]       94.33   93.69           0.67   1.21E8   52.30
ResNet-110  TAS [41]         94.97   94.33           0.64   1.19E8   53.00
ResNet-110  GAL [43]         93.26   92.74           0.81   –        48.50
ResNet-110  CoAxNN (0.63%)   94.42   93.79 (±0.36)   0.63   1.63E8   35.57
ResNet-110  CoAxNN (0.88%)   94.42   93.54 (±0.17)   0.88   9.59E7   62.09
and ResNet-110, with an accuracy loss of 0.67%, 0.84%, 0.74%, and
0.63%, respectively, as shown in Table 2. Weighted average FLOPs (‘‘Avg. #FLOPs’’) are computed from the exit percentage and exit FLOPs of each stage (e.g., 3.00E7 = 58.71% × 1.93E7 + 41.29% × 4.53E7), which indicates the average model performance on the entire dataset. CoAxNN employs distinct stage counts for different neural network models. Two stages are used for ResNet-20, and three stages are used for more complex ResNet-32, ResNet-56, and ResNet-110. The neural
network prediction finished at earlier stages costs less computational
effort. Simple images, making up most of the dataset, are predicted by
the first few stages, which reduces the computational complexity while
ensuring accuracy.
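The weighted average can be reproduced with a small helper (Python; the numbers below are the ResNet-20 entries from Table 2):

```python
def average_flops(exit_percentages, exit_flops):
    # Each stage's FLOPs weighted by the fraction of images exiting there.
    return sum(p * f for p, f in zip(exit_percentages, exit_flops))

# 58.71% x 1.93E7 + 41.29% x 4.53E7 ~= 3.00E7 FLOPs
print(average_flops([0.5871, 0.4129], [1.93e7, 4.53e7]))
```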
Besides, we show the configurations of the optimized models
searched by the GA-based DSE, as shown in Table 3. The pruning rate,
the number of stages, and the position and the threshold for each stage
are reported. For the optimized ResNet-20, the pruning rate is 0, i.e., no
pruning is performed, the number of stages is two, the position of the
first stage is the end of the fourth residual block, the corresponding
threshold is 0.3, and the second stage refers to the backbone neural
network with no confidence threshold, since images must exit from the last stage.
Table 2
Analysis of optimized models on CIFAR-10.
Model (Acc.Drop)    Stage   Percentage   #FLOPs   Avg. #FLOPs   Baseline #FLOPs   FLOPs ↓ (%)
ResNet-20 (0.67%)   1       58.71%       1.93E7   3.00E7        4.06E7            25.94
ResNet-20 (0.67%)   2       41.29%       4.53E7
ResNet-32 (0.84%)   1       41.72%       2.88E7   4.89E7        6.89E7            28.93
ResNet-32 (0.84%)   2       38.78%       5.59E7
ResNet-32 (0.84%)   3       19.50%       7.83E7
ResNet-56 (0.74%)   1       44.81%       4.76E7   8.06E7        1.25E8            35.76
ResNet-56 (0.74%)   2       36.79%       9.36E7
ResNet-56 (0.74%)   3       18.40%       1.35E8
ResNet-110 (0.63%)  1       42.66%       9.01E7   1.63E8        2.53E8            35.57
ResNet-110 (0.63%)  2       30.93%       1.79E8
ResNet-110 (0.63%)  3       26.41%       2.62E8

Table 3
Configurations optimized by GA-based DSE for CIFAR-10.
Model (Acc.Drop)    Rate   Stage 1 Position / Threshold   Stage 2 Position / Threshold   Stage 3 Position / Threshold
ResNet-20 (0.67%)   0      4 / 0.3                         – / –                           – / –
ResNet-32 (0.84%)   0      6 / 0.09                        11 / 0.1                        – / –
ResNet-56 (0.74%)   0      10 / 0.07                       19 / 0.08                       – / –
ResNet-110 (0.63%)  0      19 / 0.07                       37 / 0.015                      – / –
Although ResNet-32, ResNet-56, and ResNet-110 are all
optimized into three stages with a pruning rate of 0, the position and
threshold of each stage are different. For the optimized ResNet-32, the thresholds of the first two stages are 0.09 and 0.1, and their positions are the end of the 6th and 11th residual blocks of the backbone network, respectively. For the optimized ResNet-56, the positions of the first two stages are the end of the 10th and 19th residual blocks, with thresholds of 0.07 and 0.08. The optimized ResNet-110 uses three stages with thresholds of 0.07 and 0.017, where the positions are the end of the 19th and 37th residual blocks.
4.3.2. ResNets on CIFAR-100
We evaluate CoAxNN on the CIFAR-100 dataset by ResNet-56 and
ResNet-110, as shown in Table 4. Similarly, CoAxNN outperforms other
state-of-the-art methods. For example, the computational complexity of
optimized ResNet-110 by ASFP is 1.82E8 FLOPs, reduced by 28.20%
compared to the original neural network model, leading to a 1.48%
drop in top-1 accuracy. CoAxNN consumes 1.69E8 FLOPs, achieving
a higher computation reduction of 33.34% and a lower accuracy loss
of 1.30%. Although GHFP achieves a lower accuracy drop of 1.10%, it
uses a higher computational complexity of 1.82E8 FLOPs. These results
demonstrate the effectiveness of CoAxNN.
In addition, Table 6 shows the configurations of the optimized mod-
els with the accuracy loss of 0.98% and 1.30%, searched by CoAxNN,
on the CIFAR-100 dataset. Although the optimized ResNet-56 employs three stages and deactivates the pruning-based strategy, the same as on CIFAR-10, its thresholds are distinct: the optimized ResNet-56 uses three stages with thresholds of 0.7 and 0.065. Besides, the optimized ResNet-110 adopts three stages with a pruning rate of 0.1.
We also study the FLOPs and the percentage of predicted images of
the optimized model at each stage on CIFAR-100, as shown in Table 5.
CoAxNN uses three stages for the ResNet-56 and the ResNet-110, the same as on CIFAR-10. But, since CIFAR-100 is more complex, more complex models are required, leading to a smaller percentage of images predicted at stages 1 and 2 than on CIFAR-10. For example, for ResNet-56 on CIFAR-10, the percentages of images predicted at stages 1, 2, and 3 are 44.81%, 36.79%, and 18.40%, respectively. For ResNet-56 on CIFAR-100, the percentages of images predicted at stages 1, 2, and 3 are 29.67%, 32.85%, and 37.48%, respectively. For both CIFAR-10 and CIFAR-100, most of the images of the whole dataset are predicted by the first few stages with less computation. On CIFAR-100, CoAxNN reduces the
"100, the percentages of predicted images by
3 are 29.67%,
32.85%, and 37.48%, respectively. For both CIFAR-10 and CIFAR-100,
most of the images on the whole dataset are predicted by the first few
stages with less computation. On the CIFAR-100, CoAxNN reduces the
FLOPs by 23.93% and 33.34%, with an accuracy drop of 0.98% and
1.30%, for ResNet-56 and ResNet-110, respectively.
4.3.3. ResNets on CINIC-10
We utilize the CINIC-10 dataset, which consists of images from both
CIFAR and ImageNet [46], avoiding the time-consuming process of
model training on the entire ImageNet dataset, to facilitate experiments
for complicated image classification scenarios. We evaluate CoAxNN on
the CINIC-10 dataset by ResNet-18 and ResNet-50 models that are in
line with the model structures on the ImageNet dataset.
Table 7 shows the accuracy and computational cost of optimized
models. For ResNet-18, when the FLOPs are reduced from 5.49E8
(i.e., the computational cost of the original ResNet-18, refer to Table 8)
to 2.21E8, reduced by 59.80%, the top-1 accuracy is dropped by 1.01%.
If the accuracy requirement is higher, CoAxNN can achieve 0.50%
accuracy loss while reducing the computational complexity by 43.71%
for the ResNet-18. ResNet-50 with a large number of computations is
improved by 0.10% in top-1 accuracy, and the corresponding FLOPs
is reduced from 1.18E9 (i.e., the computational cost of the original
ResNet-50, refer to Table 8) to 4.63E8, reduced by 60.75% in compu-
tational complexity. We compare CoAxNN with state-of-the-art model
optimization methods, FPC [47] and CCPrune [48]. FPC reduces the
computational complexity by 40.48% (7.76E8 FLOPs) while increas-
ing the top-1 accuracy by 1.14% for the ResNet-50 model. CCPrune
increases the top-1 accuracy of the ResNet-50 model by 0.23% with
a computational complexity of 7.44E8 FLOPs. CoAxNN reduces the
computational complexity by 49.73% (5.93E8 FLOPs) with a 0.38%
improvement in top-1 accuracy. By effectively combining staging-based
with pruning-based approximate strategies, CoAxNN achieves better
performance than existing methods.
Moreover, we analyze the FLOPs and predicted images at each stage
for the optimized ResNet-18 and ResNet-50 with a 1.01% and −0.10%
accuracy drop respectively, as shown in Table 8. For the CINIC-10
dataset, both the ResNet-18 and the ResNet-50 use four stages. More
than 80% of the images are finished in the first two stages, and less than 10% of the images are predicted in the last stage.
Table 9 shows the configurations of the ResNet-18 and the ResNet-50. ResNet-18 uses four stages with thresholds of 0.23, 0.2, and 0.4, whose positions are the end of the 3rd, 5th, and 7th residual blocks, and the pruning rate is 0.3. When a sample does not exit from the first few stages, it must exit from the last stage. Therefore, the last stage has no threshold value.
Table 4
Performance of optimized neural network models on CIFAR-100 (see [44,45]).
Model       Method           Top-1 Acc. Baseline (%)   Top-1 Acc. Accelerated (%)   Top-1 Acc. Drop (%)   #FLOPs   FLOPs ↓ (%)
ResNet-56   MIL [39]         71.33   68.37           2.96   7.63E7   39.30
ResNet-56   CoAxNN (0.98%)   72.75   71.77 (±0.28)   0.98   9.55E7   23.93
ResNet-56   CoAxNN (2.36%)   72.75   70.39 (±0.11)   2.36   7.46E7   40.53
ResNet-110  MIL [39]         72.79   70.78           2.01   1.73E8   31.30
ResNet-110  SFP [2]          74.14   71.28           2.86   1.21E8   52.30
ResNet-110  ASFP [8]         74.39   72.91           1.48   1.82E8   28.20
ResNet-110  ASRFP [38]       74.39   73.02           1.37   1.82E8   28.20
ResNet-110  GHFP [44]        74.39   73.29           1.10   1.82E8   28.20
ResNet-110  AHSG-HT [45]     74.46   72.74           1.72   –        29.30
ResNet-110  CoAxNN (1.30%)   74.17   72.87 (±0.19)   1.30   1.69E8   33.34
ResNet-110  CoAxNN (3.42%)   74.17   70.75 (±0.38)   3.42   1.15E8   54.47

Table 5
Analysis of optimized models on CIFAR-100.
Model (Acc.Drop)    Stage   Percentage   #FLOPs   Avg. #FLOPs   Baseline #FLOPs   FLOPs ↓ (%)
ResNet-56 (0.98%)   1       29.67%       4.76E7   9.55E7        1.25E8            23.93
ResNet-56 (0.98%)   2       32.85%       9.36E7
ResNet-56 (0.98%)   3       37.48%       1.35E8
ResNet-110 (1.30%)  1       27.60%       8.23E7   1.69E8        2.53E8            33.34
ResNet-110 (1.30%)  2       30.18%       1.59E8
ResNet-110 (1.30%)  3       42.22%       2.32E8

Table 6
Configurations optimized by GA-based DSE for CIFAR-100.
Model (Acc.Drop)    Rate   Stage 1 Position / Threshold   Stage 2 Position / Threshold   Stage 3 Position / Threshold
ResNet-56 (0.98%)   0      10 / 0.7                        19 / 0.65                       – / –
ResNet-110 (1.30%)  0.1    19 / 0.73                       37 / 0.62                       – / –
The ResNet-50 uses four stages with thresholds of 0.08, 0.09, and 0.09, whose positions are the end of the 4th, 8th, and 14th residual blocks, and the pruning rate is 0.2.
Summary. As shown in Tables 1, 4, and 7, CoAxNN, which auto-
matically finds (near)-optimal configurations for effectively combining
staging-based and pruning-based approximate strategies, is comparable
to the state-of-the-art methods. The staging-based approximate strate-
gies perform adaptive inference for inputs according to conditions at
run-time. The inference of a simple input can be terminated with good prediction confidence at an earlier stage, thereby avoiding the remaining
layerwise computations, so that the overall computation cost can be
significantly reduced. However, the number of model parameters is
still too large to be deployed on mobile devices. The pruning-based
approximate strategies remove the unimportant weights or filters to
gain a thinner model. However, the pruning method lacks the ability
to configure the neural network dynamically, which will miss the
opportunities to optimize the model inference. Based on these previ-
ously mentioned optimization principles, CoAxNN automatically finds
(near-)optimal configurations by GA-based DSE, making full use of the
advantages of both, thus achieving efficient model optimization.
4.4. Realistic performance of on-device inference
To demonstrate the realistic speedup and energy savings of our
approximate compressed multi-stage models, we evaluate the perfor-
mance of models on a representative intelligent edge device, Jetson
AGX Orin.
For the measurement of inference latency, on the one hand, we pre-
execute each neural network model 10 times to warm up the machine,
and then repeat the single-batch inference 100 times to record the
average execution time to reduce the interference, such as system
initialization. On the other hand, after executing all the operators on
the device, we insert synchronous instructions to obtain timestamps,
thus avoiding inaccurate measurements for inference time. Table 10
depicts the inference latency for the optimized ResNet-20, ResNet-
32, ResNet-56, and ResNet-110 by CoAxNN, respectively dropped by
0.67%, 0.84%, 0.74%, and 0.63% in top-1 accuracy on the CIFAR-
10 dataset. The results show that CoAxNN can accelerate ResNet-20,
ResNet-32, ResNet-56, and ResNet-110 models by 1.33×, 1.34×, 1.53×,
and 1.51×, respectively. In general, the larger models can obtain a more
significant speedup.
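A minimal Python sketch of this warm-up-then-average measurement protocol is shown below (the stand-in workload and the names are ours; on a real device, the synchronization has to happen inside the measured call, as described above):

```python
import time

def measure_latency_ms(run_inference, warmup=10, repeats=100):
    # Warm up first, then average repeated single-batch runs. run_inference
    # must only return after all device work has completed (synchronized),
    # so that the recorded timestamps reflect finished operators.
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(repeats):
        run_inference()
    return (time.perf_counter() - start) / repeats * 1000.0

# Stand-in workload; a real setup would call the deployed model instead.
print(measure_latency_ms(lambda: time.sleep(0.002)))
```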
To analyze the energy consumption of optimized models, we use
jetson-stats3 to monitor the power of the system. We perform single-batch inference 10 000 times for ResNet-20, ResNet-32,
ResNet-56, and ResNet-110 on Jetson AGX Orin, and the instantaneous
powers are obtained to multiply the average inference time per image
to compute the energy consumption of models. Table 11 shows the
energy consumption for ResNet-20, ResNet-32, ResNet-56, and ResNet-
110 with the accuracy loss of 0.67%, 0.84%, 0.74%, and 0.63% on
the CIFAR-10 dataset. CoAxNN reduces the energy consumption of
ResNet-20, ResNet-32, ResNet-56, and ResNet-110 by 25.17%, 25.68%,
34.61%, and 33.81%, respectively. The experimental results show that
the optimized models by CoAxNN can be improved in terms of energy
consumption, and the more complex neural network models can save
more energy.
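The energy computation described above amounts to multiplying the mean sampled power by the per-image inference time, as in the following sketch (Python; the numbers are illustrative, not measured values):

```python
def energy_per_image_mj(power_samples_w, latency_ms):
    # Average the sampled instantaneous power (in watts) and multiply by the
    # per-image inference time; 1 W x 1 ms = 1 mJ.
    mean_power_w = sum(power_samples_w) / len(power_samples_w)
    return mean_power_w * latency_ms

print(energy_per_image_mj([4.3, 4.5, 4.4], latency_ms=4.69))
```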
We also evaluate the realistic speedup and energy reduction of mod-
els optimized by existing filter pruning approaches [2,8,11]. Tables 12
and 13 show the execute latency and energy consumption of single-
batch inference of optimized ResNet-20, ResNet-32, ResNet-56, and
ResNet-110 by filter pruning with the accuracy loss of 2.32%, 1.12%,
0.23%, and 0.10%, on the CIFAR-10 dataset, respectively. Compared
with the baseline models, the optimized models have higher execution
latency and more energy consumption. Although filter pruning can
reduce theoretical computation costs and memory footprint, the opti-
mized models cannot obtain actual acceleration and energy reduction on Jetson AGX Orin.
3 https://pypi.org/project/jetson-stats/.
Table 7
Performance of optimized neural network models on CINIC-10.
Model      Method            Top-1 Acc. Baseline (%)   Top-1 Acc. Accelerated (%)   Top-1 Acc. Drop (%)   #FLOPs   FLOPs ↓ (%)
ResNet-18  CoAxNN (0.50%)    87.57   87.07 (±0.29)   0.50    3.09E8   43.71
ResNet-18  CoAxNN (1.01%)    87.57   86.56 (±0.43)   1.01    2.21E8   59.80
ResNet-50  FPC [47]          86.63   87.77           −1.14   7.76E8   40.48
ResNet-50  CCPrune [48]      88.30   88.53           −0.23   7.44E8   –
ResNet-50  CoAxNN (−0.38%)   88.52   88.14 (±0.15)   −0.38   5.93E8   49.73
ResNet-50  CoAxNN (−0.10%)   88.52   88.62 (±0.34)   −0.10   4.63E8   60.75

Table 8
Analysis of optimized models on CINIC-10.
Model (Acc.Drop)    Stage   Percentage   #FLOPs   Avg. #FLOPs   Baseline #FLOPs   FLOPs ↓ (%)
ResNet-18 (1.01%)   1       50.86%       1.35E8   2.21E8        5.49E8            59.80
ResNet-18 (1.01%)   2       30.96%       2.57E8
ResNet-18 (1.01%)   3       13.13%       3.79E8
ResNet-18 (1.01%)   4       5.05%        4.56E8
ResNet-50 (−0.10%)  1       39.90%       2.11E8   4.63E8        1.18E9            60.75
ResNet-50 (−0.10%)  2       41.58%       4.91E8
ResNet-50 (−0.10%)  3       9.27%        8.66E8
ResNet-50 (−0.10%)  4       9.26%        1.02E9

Table 9
Configurations optimized by GA-based DSE for CINIC-10.
Model (Acc.Drop)    Rate   Stage 1 Position / Threshold   Stage 2 Position / Threshold   Stage 3 Position / Threshold   Stage 4 Position / Threshold
ResNet-18 (1.01%)   0.3    3 / 0.23                        5 / 0.2                         7 / 0.4                         – / –
ResNet-50 (−0.10%)  0.2    4 / 0.08                        8 / 0.09                        14 / 0.09                       – / –

Table 10
Speedups of optimized models by CoAxNN on Jetson AGX Orin.
Model (Acc.Drop)     Baseline latency (ms)   CoAxNN latency (ms)   Speedup
ResNet-20 (0.67%)    6.26                    4.69                  1.33
ResNet-32 (0.84%)    9.55                    7.11                  1.34
ResNet-56 (0.74%)    16.89                   11.05                 1.53
ResNet-110 (0.63%)   32.33                   21.4                  1.51

Table 11
Energy reductions of optimized models by CoAxNN on Jetson AGX Orin.
Model (Acc.Drop)     Baseline energy (mJ)   CoAxNN energy (mJ)   Reduction
ResNet-20 (0.67%)    27.89                  20.87                25.17%
ResNet-32 (0.84%)    42.79                  31.8                 25.68%
ResNet-56 (0.74%)    76.57                  50.07                34.61%
ResNet-110 (0.63%)   146.69                 97.10                33.81%

Table 12
Speedups of optimized models by existing pruning approaches [2,8,11] on Jetson AGX Orin.
Model (Acc.Drop)     Baseline latency (ms)   Filter-pruning latency (ms)   Speedup
ResNet-20 (2.32%)    6.26                    8.70                          0.72
ResNet-32 (1.12%)    9.55                    13.73                         0.70
ResNet-56 (0.23%)    16.89                   22.51                         0.75
ResNet-110 (0.10%)   32.33                   42.59                         0.76

Table 13
Energy reductions of optimized models by existing pruning approaches [2,8,11] on Jetson AGX Orin.
Model (Acc.Drop)     Baseline energy (mJ)   Filter-pruning energy (mJ)   Reduction
ResNet-20 (2.32%)    27.89                  45.74                        −63.99%
ResNet-32 (1.12%)    42.79                  72.47                        −69.36%
ResNet-56 (0.23%)    76.57                  119.24                       −55.73%
ResNet-110 (0.10%)   146.69                 225.34                       −53.61%
Fig. 5. Accuracy of the optimization model at different stages. ‘‘CoAxNN-ALL’’ and
‘‘CoAxNN-ACT’’ denote the accuracy of the model at each stage on the whole dataset
and on the images that satisfy the activation condition of the corresponding stage,
respectively.
Fig. 6. Example images predicted correctly at different stages.
Table 14
Overheads of GA-based DSE.
Model        GA time (s)   Training time (s)
ResNet-20    1.15          5472
ResNet-32    1.60          1813
ResNet-56    1.69          2720
ResNet-110   1.46          7712
Therefore, the critical motivation of CoAxNN is to
find a satisfying optimization configuration for practical scenarios.
4.5. Ablation study
Accuracy of CoAxNN models at different stages. We study the accu-
racy of ResNet-56 optimized by CoAxNN at different stages, as shown
in Fig. 5. In ‘‘CoAxNN-ALL’’, the accuracy of the model in the first few
stages is lower than that of the baseline model. As the computational
complexity of the model increases, the accuracy in the later stages
gradually converges to that of the baseline model. CoAxNN separates
the prediction of simple and complex images by conditional activation,
allowing simple images to exit from the first few stages and complex
images to exit from the latter stages. In ‘‘CoAxNN-ACT’’, the accuracy
of the first few stages becomes higher and even exceeds that of the
baseline model, which indicates that the first few stages have sufficient
ability to classify simple images. Besides, since complex images are
predicted by the later stages, the accuracy of the last stage of the
optimization model is lower than that of the baseline model.
Visualization results at different stages. Fig. 6 depicts the predicted
sample images for each stage of optimized ResNet-56 on CIFAR-10. The samples predicted at stage 1 are relatively ‘‘easy’’, having a small number of objects and a clear background, whereas the samples predicted at stages 2 and 3 are relatively ‘‘hard’’, having various objects and a complex background. CoAxNN can separate ‘‘easy’’ images consuming less effort from ‘‘hard’’ ones consuming more computation, significantly reducing computation costs for neural network models.
Overheads of GA-based DSE. We collect the latency of each operator
of the neural network model on the edge device in the profiling phase
beforehand to be used in GA-based search. We perform the model
optimization processes, including model training and GA-based search,
on a server with Intel Xeon CPUs and an Nvidia A100 GPU. The
inference of optimized models is performed on edge devices such as
Jetson AGX Orin. Table 14 shows the times for GA-based search and
the time to train the model once during the model optimization. The
GA-based DSE takes 1–2 s on the CPU platform, which is greatly less
than model training (e.g., ResNet-20 takes 5472 s for training once).
Therefore, the runtime overhead of the GA is negligible.
5. Discussion
Generality. CoAxNN is a generic framework for optimizing on-device
deep learning via model approximation, which can be generalized
to other intelligent tasks such as object detection [49]. In addition,
more approximate strategies such as knowledge distillation [50] can
be integrated into CoAxNN to further optimize neural network models.
Applicability. CoAxNN is system-independent, requiring no specific software implementation or hardware design support. The op-
timized models by CoAxNN can be directly deployed on the target
platform, especially intelligent edge accelerators. Users can choose
the (near)-optimal model according to the accuracy and performance
requirements of intelligent tasks. Moreover, the time-consuming opti-
mization process can be performed offline on high-performance servers,
achieving efficient fine-tuning.
Limitations. Although CoAxNN shows the advantages of combining
staging-based with pruning-based approximate strategies for model
optimization, there is still room for further improvement. On one hand,
the NSGA-III used in GA-based DSE cannot always find the optimal
solutions for the goals of increasing accuracy and decreasing latency.
We will explore other genetic algorithms such as NPGA [51] for multi-
objective optimization. On the other hand, the fixed-rate filter pruning
strategy is used in CoAxNN. Prior works [11] demonstrated that different layers have different sensitivities to model accuracy. Setting different
pruning ratios for different layers can potentially further improve the
performance, which will be explored in future studies.
6. Conclusion
In this paper, we proposed an efficient optimization framework,
CoAxNN, which effectively combines staging-based with pruning-based
approximate strategies for efficient model
inference on resource-
constrained edge devices. Evaluation with state-of-the-art CNN models
demonstrates the effectiveness of CoAxNN, which can significantly im-
prove the performance with trivial accuracy loss. We plan to integrate
more model approximate strategies into CoAxNN in future work.
Declaration of competing interest
The authors declare that they have no known competing finan-
cial interests or personal relationships that could have appeared to
influence the work reported in this paper.
Data availability
Data will be made available on request.
Acknowledgments
This work is supported by the National Key R&D Program of China
(2021ZD0110101), the National Natural Science Foundation of China
(62232015, 62302479), the China Postdoctoral Science Foundation
(2023M733566), and the CCF-Baidu Open Fund, China.
"Reference [1]: K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
2016, pp. 770–778."
"Reference [2]: Y. He, G. Kang, X. Dong, Y. Fu, Y. Yang, Soft filter pruning for accelerating
in: Proceedings of the Twenty-Seventh
Intelligence (IJCAI), 2018, pp.
deep convolutional neural networks,
International Joint Conference on Artificial
2234–2240."
"Reference [3]: S.K. Esser, J.L. McKinstry, D. Bablani, R. Appuswamy, D.S. Modha, Learned
step size quantization, in: International Conference on Learning Representations,
2020."
"Reference [4]: Y. Guo, A. Yao, Y. Chen, Dynamic network surgery for efficient dnns,
in:
Advances in Neural Information Processing Systems, Vol. 29, 2016."
"Reference [5]: S. Han, J. Pool, J. Tran, W. Dally, Learning both weights and connections for
efficient neural network, in: Advances in Neural Information Processing Systems,
Vol. 28, 2015."
"Reference [6]: B. Hassibi, D. Stork, Second order derivatives for network pruning: Optimal brain
surgeon, in: Advances in Neural Information Processing Systems, Vol. 5, 1992."
"Reference [7]: B. Hassibi, D.G. Stork, G.J. Wolff, Optimal brain surgeon and general network
pruning, in: IEEE International Conference on Neural Networks, IEEE, 1993, pp.
293–299."
"Reference [8]: Y. He, X. Dong, G. Kang, Y. Fu, C. Yan, Y. Yang, Asymptotic soft filter pruning
for deep convolutional neural networks, IEEE Trans. Cybern. 50 (8) (2019)
3594–3604."
"Reference [9]: G. Li, X. Ma, X. Wang, L. Liu, J. Xue, X. Feng, Fusion-catalyzed pruning for
optimizing deep learning on intelligent edge devices, IEEE Trans. Comput.-Aided
Des. Integr. Circuits Syst. 39 (11) (2020) 3614–3626."
"Reference [10]: J.-H. Luo, J. Wu, W. Lin, Thinet: A filter level pruning method for deep neural
network compression, in: Proceedings of the IEEE International Conference on
Computer Vision, 2017, pp. 5058–5066."
"Reference [11]: G. Li, X. Ma, X. Wang, H. Yue, J. Li, L. Liu, X. Feng, J. Xue, Optimizing deep
neural networks on intelligent edge accelerators via flexible-rate filter pruning,
J. Syst. Archit. (2022) 102431."
"Reference [12]: J. Plochaet, T. Goedemé, Hardware-aware pruning for FPGA deep learning
accelerators, in: Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, 2023, pp. 4481–4489."
"Reference [13]: X. Zhuang, Y. Ge, B. Zheng, Q. Wang, Adversarial network pruning by filter
robustness estimation, in: ICASSP 2023-2023 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2023, pp. 1–5."
"Reference [14]: Z. Liu, M. Sun, T. Zhou, G. Huang, T. Darrell, Rethinking the value of network
pruning, in: International Conference on Learning Representations (ICLR), 2019."
"Reference [15]: Y. Li, K. Adamczewski, W. Li, S. Gu, R. Timofte, L. Van Gool, Revisiting
random channel pruning for neural network compression, in: Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp.
191–201."
"Reference [16]: Y. Li, P. Zhao, G. Yuan, X. Lin, Y. Wang, X. Chen, Pruning-as-search: Efficient
neural architecture search via channel pruning and structural reparameterization,
in: Proceedings of the Thirty-First International Joint Conference on Artificial
Intelligence, 2022, pp. 3236–3242."
"Reference [17]: Y. Ding, Y. Wu, C. Huang, S. Tang, F. Wu, Y. Yang, W. Zhu, Y. Zhuang, NAP:
Neural architecture search with pruning, Neurocomputing 477 (2022) 85–95."
"Reference [18]: S. Teerapittayanon, B. McDanel, H.-T. Kung, Branchynet: Fast inference via early
exiting from deep neural networks, in: 2016 23rd International Conference on
Pattern Recognition (ICPR), IEEE, 2016, pp. 2464–2469."
"Reference [19]: P. Panda, A. Sengupta, K. Roy, Conditional deep learning for energy-efficient
and enhanced pattern recognition, in: 2016 Design, Automation & Test in Europe
Conference & Exhibition (DATE), IEEE, 2016, pp. 475–480."
"Reference [20]: Y. Yang, D. Liu, H. Fang, Y.-X. Huang, Y. Sun, Z.-Y. Zhang, Once for all skip:
efficient adaptive deep neural networks, in: 2022 Design, Automation & Test in
Europe Conference & Exhibition (DATE), IEEE, 2022, pp. 568–571."
"Reference [21]: B. Fang, X. Zeng, F. Zhang, H. Xu, M. Zhang, FlexDNN: Input-adaptive on-device
deep learning for efficient mobile vision, in: 2020 IEEE/ACM Symposium on Edge
Computing (SEC), IEEE, 2020, pp. 84–95."
"Reference [22]: Y. Wang, J. Shen, T.-K. Hu, P. Xu, T. Nguyen, R. Baraniuk, Z. Wang, Y. Lin,
Dual dynamic inference: Enabling more efficient, adaptive, and controllable deep
inference, IEEE J. Sel. Top. Sign. Proces. 14 (4) (2020) 623–633."
"Reference [23]: M. Figurnov, M.D. Collins, Y. Zhu, L. Zhang, J. Huang, D. Vetrov, R. Salakhutdi-
nov, Spatially adaptive computation time for residual networks, in: Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp.
1039–1048."
"Reference [24]: N.K. Jayakodi, A. Chatterjee, W. Choi, J.R. Doppa, P.P. Pande, Trading-off
accuracy and energy of deep inference on embedded systems: A co-design
approach, IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 37 (11) (2018)
2881–2893."
"Reference [25]: Z. Liang, Y. Zhou, Dispense mode for inference to accelerate branchynet, in:
2022 IEEE International Conference on Image Processing (ICIP), IEEE, 2022, pp.
1246–1250."
"Reference [26]: J. Jo, G. Kim, S. Kim, J. Park, LoCoExNet: Low-cost early exit network for energy
efficient CNN accelerator design, IEEE Trans. Comput.-Aided Des. Integr. Circuits
Syst. (2023)."
"Reference [27]: K. Park, C. Oh, Y. Yi, Bpnet: branch-pruned conditional neural network for
systematic time-accuracy tradeoff, in: 2020 57th ACM/IEEE Design Automation
Conference (DAC), IEEE, 2020, pp. 1–6."
"Reference [28]: G. Park, Y. Yi, Condnas: neural architecture search for conditional CNNs,
Electronics 11 (7) (2022) 1101."
"Reference [29]: Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, S. Han, Amc: Automl for model
compression and acceleration on mobile devices, in: Proceedings of the European
Conference on Computer Vision (ECCV), 2018, pp. 784–800."
"Reference [30]: Y. Qian, Z. He, Y. Wang, B. Wang, X. Ling, Z. Gu, H. Wang, S. Zeng, W. Swaileh,
Hierarchical threshold pruning based on uniform response criterion, IEEE Trans.
Neural Netw. Learn. Syst. (2023)."
"Reference [31]: K. Wang, D. Zhang, Y. Li, R. Zhang, L. Lin, Cost-effective active learning for deep
image classification, IEEE Trans. Circuits Syst. Video Technol. 27 (12) (2016)
2591–2600."
"Reference [32]: S. Anwar, K. Hwang, W. Sung, Structured pruning of deep convolutional neural
networks, ACM J. Emerg. Technol. Comput. Syst. (JETC) 13 (3) (2017) 1–18."
"Reference [33]: J.H. Holland, Adaptation in Natural and Artificial Systems: An Introductory
Analysis with Applications To Biology, Control, and Artificial Intelligence, MIT
Press, 1992."
"Reference [34]: A. Mohammadi, H. Asadi, S. Mohamed, K. Nelson, S. Nahavandi, OpenGA, a C++
genetic algorithm library, in: 2017 IEEE International Conference on Systems,
Man, and Cybernetics (SMC), IEEE, 2017, pp. 2051–2056."
"Reference [35]: K. Deb, H. Jain, An evolutionary many-objective optimization algorithm using
reference-point-based nondominated sorting approach, part I: solving problems
with box constraints, IEEE Trans. Evol. Comput. 18 (4) (2013) 577–601."
"Reference [36]: A. Krizhevsky, G. Hinton, et al., Learning multiple layers of features from tiny
images, 2009."
"Reference [37]: L.N. Darlow, E.J. Crowley, A. Antoniou, A.J. Storkey, Cinic-10 is not imagenet
or cifar-10, 2018, arXiv preprint arXiv:1810.03505."
"Reference [38]: L. Cai, Z. An, C. Yang, Y. Xu, Softer pruning, incremental regularization, in:
2020 25th International Conference on Pattern Recognition (ICPR), IEEE, 2021,
pp. 224–230."
"Reference [39]: X. Dong, J. Huang, Y. Yang, S. Yan, More is less: A more complicated network
in: Proceedings of the IEEE Conference on
with less inference complexity,
Computer Vision and Pattern Recognition, 2017, pp. 5840–5848."
"Reference [40]: Y. He, P. Liu, Z. Wang, Z. Hu, Y. Yang, Filter pruning via geometric median
for deep convolutional neural networks acceleration,
in: Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp.
4340–4349."
"Reference [41]: X. Dong, Y. Yang, Network pruning via transformable architecture search, Adv.
Neural Inf. Process. Syst. 32 (2019)."
"Reference [42]: Y. He, X. Zhang, J. Sun, Channel pruning for accelerating very deep neural
networks, in: Proceedings of the IEEE International Conference on Computer
Vision, 2017, pp. 1389–1397."
"Reference [43]: S. Lin, R. Ji, C. Yan, B. Zhang, L. Cao, Q. Ye, F. Huang, D. Doermann,
Towards optimal structured cnn pruning via generative adversarial
learning,
in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, 2019, pp. 2790–2799."
"Reference [44]: L. Cai, Z. An, C. Yang, Y. Xu, Soft and hard filter pruning via dimension
reduction, in: 2021 International Joint Conference on Neural Networks (IJCNN),
IEEE, 2021, pp. 1–8."
"Reference [45]: X. Yang, H. Lu, H. Shuai, X.-T. Yuan, Pruning convolutional neural networks
via stochastic gradient hard thresholding, in: Pattern Recognition and Computer
Vision: Second Chinese Conference, PRCV 2019, Xi’an, China, November 8–11,
2019, Proceedings, Part I, Springer, 2019, pp. 373–385."
"Reference [46]: O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A.
Karpathy, A. Khosla, M. Bernstein, et al., Imagenet large scale visual recognition
challenge, Int. J. Comput. Vis. 115 (3) (2015) 211–252."
"Reference [47]: Y. Chen, X. Wen, Y. Zhang, Q. He, FPC: Filter pruning via the contribution
of output feature map for deep convolutional neural networks acceleration,
Knowl.-Based Syst. 238 (2022) 107876."
"Reference [48]: Y. Chen, X. Wen, Y. Zhang, W. Shi, CCPrune: Collaborative channel pruning for
learning compact convolutional networks, Neurocomputing 451 (2021) 35–45."
"Reference [49]: X. Chen, H. Ma, J. Wan, B. Li, T. Xia, Multi-view 3d object detection network
for autonomous driving, in: Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 2017, pp. 1907–1915."
"Reference [50]: G. Hinton, O. Vinyals, J. Dean, et al., Distilling the knowledge in a neural
network, arXiv preprint arXiv:1503.02531 2 (7) (2015)."
"Reference [51]: J. Horn, N. Nafpliotis, D.E. Goldberg, Multiobjective optimization using the
niched Pareto genetic algorithm, 1993"