Harvard University

COMPSCI 144R - Networks Design Projects

112630 – Section 001   

School
Faculty of Arts and Sciences

Department
Computer Science

Faculty
H. Kung

Term
Spring 2018-2019

Day and Time
MW 3:00 p.m. - 4:15 p.m.

Location
Maxwell Dworkin 119 (SEAS)
Credits
4
Credit Level
Graduate and Undergraduate

Description
Deep neural networks (DNNs) are becoming a popular tool in data-driven applications. One of the next frontiers is distributed DNNs over computer networks for improved scaling (e.g., scaling training as in federated learning) and parallel DNNs over processor arrays for low-latency inference in real-time applications. To this end, there is a need to understand issues such as communication, computation, and accuracy trade-offs. This research-oriented course will address this relatively new yet rapidly advancing topic. We will survey the main approaches, with a particular focus on the interplay between deep learning models, parallel and distributed computing architectures, and the hardware structures of end devices. The class will be organized into the following eight modules:

1. Motivations for parallel and distributed deep learning
2. Parallelism available in deep neural networks
3. Review of background concepts in deep learning, computer networks, computer architectures, and FPGA/ASIC hardware accelerators
4. Deep-dive case studies in parallel and distributed training and inference (e.g., distributed federated learning and quantized low-latency, energy-efficient inference)
5. Full-stack design optimization for inference, in which deep learning models, computing architectures, and hardware circuits are simultaneously optimized
6. Collaborative deep learning inference between the cloud, edge, and client machines
7. Privacy and security protocols, and the novel use of blockchains in support of parallel and distributed deep learning
8. Emerging technologies in deep learning, such as automated neural architecture search and neuromorphic computing

Students working in 2- or 3-person teams will do a substantial project in these and other related areas.

Prerequisite(s)
Recommended: Basic courses in networking (e.g., CS 143) and AI (e.g., CS 181 or 182), programming experience (e.g., CS 51), some research experience, and a strong interest in the subject matter. Lab sessions will be provided to give extra support.

Notes
Preference given to upper-class undergraduates or graduate students in computer science or in business.

Exam Group
FAS08_F

Cross Registration
Eligible for cross-registration, with permission of the instructor and subject to availability.

MIT students: please cross-register via MIT's Add/Drop application.
