Electronic Design Automation for Emerging Technologies

The continued scaling of the horizontal and vertical physical features of silicon-based complementary metal-oxide-semiconductor (CMOS) transistors, termed “More Moore”, has a limited runway and will eventually give way to “Beyond CMOS” technologies. A tremendous effort has gone into following Moore’s law, but scaling is now approaching atomistic and quantum-mechanical limits. This has spurred active research into non-CMOS technologies such as memristive devices, carbon nanotube field-effect transistors, and quantum computing. Several of these technologies have been realized in practical devices with promising gains in yield, integration density, runtime performance, and energy efficiency. Their eventual adoption depends largely on continued research into Electronic Design Automation (EDA) tools tailored to these specific technologies. Indeed, some of these technologies pose new challenges to the EDA research community, which are being addressed through a series of innovative tools and techniques. In this tutorial, we will cover two phases of the EDA flow, logic synthesis and technology mapping, for two emerging technologies, namely in-memory computing and quantum computing.
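To give a concrete flavour of the technology-mapping problem for such technologies: memristive in-memory logic families natively realize the three-input majority function, so synthesis tools re-express Boolean networks over majority gates and inverters (majority-inverter graphs). The short sketch below is purely illustrative, not taken from any particular tool; it verifies the well-known majority-gate decomposition of a full adder.

```python
def maj(a, b, c):
    """Three-input majority: 1 if at least two inputs are 1."""
    return 1 if (a + b + c) >= 2 else 0

def full_adder_maj(a, b, c):
    """Full adder expressed only with majority gates and inverters,
    the primitive set of majority-inverter graphs (MIGs)."""
    cout = maj(a, b, c)
    s = maj(1 - cout, c, maj(a, b, 1 - c))
    return s, cout

# Exhaustively verify against ordinary binary addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder_maj(a, b, c)
            assert 2 * cout + s == a + b + c
print("majority-gate full adder verified on all 8 input patterns")
```

Mapping standard Boolean networks onto such a restricted primitive set, while minimizing gate count and depth, is exactly the kind of problem the tutorial's logic-synthesis and technology-mapping phases address.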

Anupam Chattopadhyay received his B.E. from Jadavpur University, India, his M.Sc. from ALaRI, Switzerland, and his Ph.D. from RWTH Aachen in 2000, 2002, and 2008, respectively. From 2008 to 2009, he worked as a Member of Consulting Staff at CoWare R&D, Noida, India. From 2010 to 2014, he led the MPSoC Architectures Research Group at RWTH Aachen, Germany, as a Junior Professor. In September 2014, Anupam was appointed Assistant Professor in SCSE, NTU, where he was promoted to Associate Professor with tenure in August 2019. His research interests are in application-specific architectures, Electronic Design Automation, and security. Anupam is an Associate Editor of IEEE Embedded Systems Letters and a series editor of the Springer Book Series on Computer Architecture and Design Methodologies. He received the Borchers Plaque from RWTH Aachen, Germany, for an outstanding doctoral dissertation in 2008, a nomination for the best IP award at the ACM/IEEE DATE Conference 2016, and nominations for the best paper award at the International Conference on VLSI Design in 2018 and 2020. He is a fellow of the Intercontinental Academia and a senior member of the IEEE and ACM.

Important: Participation is free of charge, but registration is required

Registration on “Electronic Design Automation for Emerging Technologies”

For more details and updates on the series of “ACRC Semiconductor Webinars”, please follow our newsletters and our website.

“Bringing ML to the extreme edge: a story of co-optimizing processor architectures, scheduling and models” by Prof. Marian Verhelst

Deep neural network inference comes with significant computational complexity, which until recently made its execution feasible only on power-hungry server or GPU platforms. The recent trend towards real-time embedded neural network processing on edge and extreme-edge devices requires thorough cross-layer optimization. The talk will analyze what impacts NN execution energy and latency. Subsequently, we will present several research lines from Prof. Verhelst’s lab that exploit and jointly optimize NPU/TPU processor architectures, dataflow schedulers, and conditional, quantized neural network models for minimum latency and maximum energy efficiency. This includes precision-scalable fully digital designs as well as compute-in-memory processors. Finally, the talk will make a case for more methodological design-space exploration in the vast optimization space of embedded NN processors, using the ZigZag framework.
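A first-order, back-of-the-envelope model makes the cross-layer point concrete: for a convolutional layer, compute (MAC) energy and memory-access energy can be tallied separately, and the memory side often dominates unless the scheduler keeps data on-chip. The layer shape and per-operation energies below are illustrative placeholders only (rough textbook orders of magnitude), not measurements from any particular chip or from the ZigZag framework.

```python
# Illustrative layer: 3x3 conv, 64 in / 64 out channels, 56x56 output map.
K, C_in, C_out, H, W = 3, 64, 64, 56, 56

macs = K * K * C_in * C_out * H * W      # multiply-accumulate count
weights = K * K * C_in * C_out           # parameter count
activations = C_in * (H + 2) * (W + 2) + C_out * H * W  # padded in + out

# Assumed per-operation energies in picojoules (illustrative values;
# real numbers depend on technology node, precision, and memory sizes).
E_MAC_PJ = 0.2       # one low-precision MAC
E_SRAM_PJ = 5.0      # one on-chip SRAM word access
E_DRAM_PJ = 640.0    # one off-chip DRAM word access

compute_pj = macs * E_MAC_PJ
# Worst case: every operand of every MAC fetched from DRAM, versus a
# scheduler that loads each word once and reuses it from on-chip SRAM.
naive_mem_pj = 3 * macs * E_DRAM_PJ
tiled_mem_pj = (weights + activations) * E_DRAM_PJ + 3 * macs * E_SRAM_PJ

print(f"compute energy:      {compute_pj / 1e6:.1f} uJ")
print(f"naive memory energy: {naive_mem_pj / 1e6:.1f} uJ")
print(f"tiled memory energy: {tiled_mem_pj / 1e6:.1f} uJ")
```

Even in this toy model, memory energy exceeds compute energy by orders of magnitude in the naive schedule, which is why the talk treats architectures, schedulers, and models as one joint optimization problem.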

Marian Verhelst is a full professor at the MICAS laboratories of the EE Department of KU Leuven. Her research focuses on embedded machine learning, hardware accelerators, hardware-algorithm co-design, and low-power edge processing. She received her PhD from KU Leuven in 2008, was a visiting scholar at the BWRC of UC Berkeley in the summer of 2005, and worked as a research scientist at Intel Labs, Hillsboro, OR, from 2008 to 2011. Marian is a topic chair on the DATE and ISSCC executive committees, a TPC member of VLSI and ESSCIRC, and was the chair of tinyML2021 and TPC co-chair of AICAS2020. She is an IEEE SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, an associate editor for TVLSI, TCAS-II, and JSSC, and a member of the STEM advisory committee to the Flemish Government. Marian currently holds a prestigious ERC Starting Grant from the European Union, was the laureate of the Royal Academy of Belgium in 2016, and received the André Mischke YAE Prize for Science and Policy in 2021.

Important: Participation is free of charge, but registration is required

/registration-marian-verhelst/

“Mixed-Signal Computing for Deep Neural Network Inference” – webinar by Prof. Boris Murmann from Stanford University, USA

Modern deep neural networks (DNNs) require billions of multiply-accumulate operations per inference. Given that these computations require relatively low precision, it is feasible to consider analog arithmetic, which can be more efficient than digital in the low-SNR regime. However, the scale of DNNs favors circuits that leverage dense digital memory, leading to mixed-signal processing schemes for scalable solutions. This presentation will investigate the potential of mixed-signal approaches in the context of modern DNN processor architectures, which are typically limited by data movement and memory access. We will show that dense mixed-signal fabrics offer new degrees of freedom that can help alleviate these bottlenecks. In addition, we will derive asymptotic efficiency limits and highlight the challenges associated with data conversion interfaces (D/A and A/D) as well as programmability. Finally, these findings are extended to in-memory computing approaches (SRAM and RRAM-based) that are bound by similar constraints.
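To see why “relatively low precision” opens the door to analog arithmetic, it helps to check how little accuracy an 8-bit dot product gives up against full precision; analog MAC arrays operate in a similarly noise-limited (low-SNR) regime. The NumPy sketch below is an illustrative digital stand-in with symmetric per-tensor quantization of my own choosing, not a model of any specific mixed-signal circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # stand-in activations
w = rng.standard_normal(1024)   # stand-in weights

def quantize(v, bits=8):
    """Symmetric uniform quantization to signed integers."""
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    return np.round(v / scale).astype(np.int32), scale

xq, sx = quantize(x)
wq, sw = quantize(w)

exact = float(np.dot(x, w))              # full-precision reference
approx = int(np.dot(xq, wq)) * sx * sw   # integer MACs, rescaled once

# Error normalized by the operand magnitudes, so it is well-defined
# even when the dot product itself is near zero.
norm_err = abs(approx - exact) / (np.linalg.norm(x) * np.linalg.norm(w))
print(f"exact {exact:.4f}, int8 {approx:.4f}, normalized error {norm_err:.2e}")
```

The normalized error sits far below the tolerance of typical DNN workloads, which is the headroom that analog and mixed-signal MAC implementations trade for energy efficiency.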

Boris Murmann is a Professor of Electrical Engineering at Stanford University. He joined Stanford in 2004 after completing his Ph.D. degree in electrical engineering at the University of California, Berkeley in 2003. From 1994 to 1997, he was with Neutron Microelectronics, Germany, where he developed low-power and smart-power ASICs in automotive CMOS technology. Since 2004, he has worked as a consultant with numerous Silicon Valley companies. Dr. Murmann’s research interests are in mixed-signal integrated circuit design, with special emphasis on sensor interfaces, data converters and custom circuits for embedded machine learning. In 2008, he was a co-recipient of the Best Student Paper Award at the VLSI Circuits Symposium and a recipient of the Best Invited Paper Award at the IEEE Custom Integrated Circuits Conference (CICC). He received the Agilent Early Career Professor Award in 2009 and the Friedrich Wilhelm Bessel Research Award in 2012. He has served as an Associate Editor of the IEEE Journal of Solid-State Circuits, an AdCom member and Distinguished Lecturer of the IEEE Solid-State Circuits Society, as well as the Data Converter Subcommittee Chair and the Technical Program Chair of the IEEE International Solid-State Circuits Conference (ISSCC). He is the founding faculty co-director of the Stanford SystemX Alliance and the faculty director of Stanford’s System Prototyping Facility (SPF). He is a Fellow of the IEEE.

Please sign up and join us on Monday, August 17, 2020, at 17:00 (Israel Daylight Time). A link to the Zoom session will be provided after registration.

Important: Participation is free of charge, but registration is required: /registration-boris-murmann/