期刊论文 Journal Papers
 Luping Shi, Jing Pei, Rong Zhao, Brain-Inspired Computing Towards Artificial General Intelligence, AI: Brain-Inspired Computing and Brain Science, 2020(1), pp. 6-15. (The journal AI (《人工智能》) is supervised by the Ministry of Industry and Information Technology and sponsored by the China Center for Information Industry Development and CCID Industry and Information Technology Research Institute (Group) Co., Ltd.; CN10-1530/TP, ISSN 2096-5036.)
 Z. Chen, L. Deng, B. Wang, G. Li* and Y. Xie, A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI), Accepted, in press, 2020.
 L. Deng, G. Li*, H. Song, Y. Xie and L. Shi, Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey, Proceedings of the IEEE (PIEEE), 108 (4), 485-532, 2020.
 L. Tian, Z. Z. Wu, S. Wu and L. P. Shi*, Hybrid Neural State Machine for Neural Network, Science China Information Sciences, 2020.
 Z. Chen, L. Deng, G. Li*, J. Sun, X. Hu, L. Liang and Y. Xie, Effective and Efficient Batch Normalization Using Few Uncorrelated Data for Statistics’ Estimation, IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), Accepted, in press, 2020.
 L. Deng, G. Wang, G. Li, S. Li, L. Liang, M. Zhu, Y. Wu, Z. Yang, Z. Zou, Z. Wu, X. Hu, Y. Ding, W. He, Y. Xie and L. Shi*, Tianjic: A Unified and Scalable Chip Bridging Spike-Based and Continuous Neural Computation, IEEE Journal of Solid-State Circuits (IEEE JSSC), vol. 55, pp. 2228-2246, 2020.
 L. Deng, L. Liang, G. Wang, L. Chang, X. Hu, L. Liu, J. Pei, G. Li* and Y. Xie, SemiMap: A Semi-folded Convolution Mapping for Speed-Overhead Balance on Crossbars, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IEEE TCAD), vol. 39, pp. 117-130, 2020.
 D. Wang, G. Zhao, G. Li*, L. Deng and Y. Wu, Compressing 3DCNNs Based on Tensor Train Decomposition, Neural Networks, Accepted, in press, 2020.
 L. Deng, Y. Wu, X. Hu, L. Liang, Y. Ding, G. Li*, G. Zhao, P. Li and Y. Xie, Rethinking the Performance Comparison Between SNNs and ANNs, Neural Networks, vol. 121, pp. 294-307, 2020.
 B. Wu, D. Wang, G. Zhao, L. Deng and G. Li*, Hybrid Tensor Decomposition in Neural Network Compression, Neural Networks, Accepted, in press, 2020.
 J. Lyu, J. Pei, Y. Guo, J. Gong and H. Li*, A New Opportunity for 2D van der Waals Heterostructures: Making Steep-Slope Transistors, Advanced Materials, 32(2), 2020.
 J. Pei, L. Deng, S. Song, M. Zhao, Y. Zhang, S. Wu, G. Wang, Z. Zou, Z. Wu, W. He, F. Chen, N. Deng, S. Wu, Y. Wang, Y. Wu, Z. Yang, C. Ma, G. Li, W. Han, H. Li, H. Wu, R. Zhao, Y. Xie and L. P. Shi*, Towards Artificial General Intelligence with Hybrid Tianjic Chip Architecture, Nature, vol. 572, pp. 106-111, 2019.
 Y. Yang, L. Deng, S. Wu, T. Yan, Y. Xie and G. Li*, Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers, Neural Networks, vol. 125, pp. 70-82, 2020.
 K. Song, X. Chen, P. Tang, G. Li*, L. Deng and J. Pei, Target Controllability of Two-Layer Multiplex Networks Based on Network Flow Theory, IEEE Transactions on Cybernetics, Accepted, in press, 2019.
 Z. Y. Zhang, T. R. Li, Y. J. Wu, Y. J. Jia, C. W. Tan, X. T. Xu, G. R. Wang, J. Lv, W. Zhang, Y. H. He, J. Pei, C. Ma, G. Q. Li, H. Z. Xu, L. P. Shi*, H. L. Peng and H. L. Li, Truly Concomitant and Independently Expressed Short- and Long-Term Plasticity in a Bi2O2Se-Based Three-Terminal Memristor, Advanced Materials, 31(3), 1805769, 2019.
 S. Wu, G. Q. Li, L. Deng, L. Liu, Y. Xie and L. P. Shi*, L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, 30(7), pp. 2043-2051, 2019.
 Y. Y. Wang, Z. Y. Zhang, M. K. Xu, Y. F. Yang, M. Y. Ma, H. L. Li, J. Pei and L. P. Shi*, Self-Doping Memristors with Equivalently Synaptic Ion Dynamics for Neuromorphic Computing, ACS Applied Materials & Interfaces, 11(27), pp. 24230-24240, 2019.
 L. Deng, P. Jiao, J. Pei, Z. Wu and G. Li*, GXNOR-Net: Training Deep Neural Networks with Ternary Weights and Activations Without Full-Precision Memory Under a Unified Discretization Framework, Neural Networks, vol. 100, pp. 49-58, 2018.
 Y. Zhang, W. He, Y. Wu, K. Huang, Y. Shen, J. Su, Y. Wang, Z. Y. Zhang, X. L. Ji, G. Q. Li, H. T. Zhang, S. Song, H. L. Li, L. T. Sun, R. Zhao and L. P. Shi*, Highly Compact Artificial Memristive Neuron with Low Energy Consumption, Small, 14(51), 1802188, 2018.
 G. Q. Li, L. Deng, L. Tian, H. Cui, W. Han, J. Pei and L. P. Shi*, Training Deep Neural Networks with Discrete State Transition, Neurocomputing, vol. 272, pp. 154-162, 2018.
 Y. Wu, L. Deng, G. Li, J. Zhu and L. Shi*, Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks, Frontiers in Neuroscience, 12, 331, 2018.
会议论文 Conference Papers
 Y. Wu, L. Deng, G. Li, J. Zhu and L. Shi*, Direct Training for Spiking Neural Networks: Faster, Larger, Better, Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019).
 L. Liu, L. Deng, X. Hu, M. Zhu, G. Li, Y. Ding and Y. Xie*, Dynamic Sparse Graph for Efficient Deep Learning, International Conference on Learning Representations (ICLR 2019).
 S. Wu, G. R. Wang, P. Tang, F. Chen and L. P. Shi*, Convolution with Even-Sized Kernels and Symmetric Padding, Conference on Neural Information Processing Systems (NeurIPS 2019).
 P. Wang, X. Xie, L. Deng, G. Li, D. Wang and Y. Xie*, HitNet: Hybrid Ternary Recurrent Neural Network, Thirty-Second Conference on Neural Information Processing Systems (NeurIPS 2018).
 S. Wu, G. Li, C. Feng and L. Shi*, Training and Inference with Integers in Deep Neural Networks, International Conference on Learning Representations (ICLR 2018).