In recent years, breakthroughs from the field of deep learning have transformed how sensor data (e.g., images, audio, and even accelerometer and GPS readings) can be interpreted to extract the high-level information needed by bleeding-edge sensor-driven systems like smartphone apps, wearable devices, and driverless cars. Today, the state-of-the-art computational models that, for example, recognize a face, track user emotions, or monitor physical activities are increasingly based on deep learning principles and algorithms. Unfortunately, deep models typically place severe demands on local device resources, which conventionally limits their adoption within mobile and embedded platforms. As a result, in far too many cases existing systems process sensor data with machine learning methods that deep learning superseded years ago.
Because the robustness and quality of sensory perception and reasoning are so critical to mobile computing, it is vital for this community to begin the careful study of two core technical questions. First, how should deep learning principles and algorithms be applied to the sensor inference problems that are central to this class of computing? This includes a combination of learning applications, some familiar from other domains (such as image and audio processing), alongside those more uniquely tied to wearable and mobile systems (e.g., activity recognition and distributed federated learning). Second, what is required for current and future deep learning innovations to be either simplified or efficiently integrated into a variety of resource-constrained mobile systems? This spans from efficiency-boosting techniques for existing models, up to the design of resource-efficient deep architectures, and down to novel hardware designs for mobile processors that optimize the deployment of deep learning workloads. At heart, this MobiSys 2021 co-located workshop aims to consider these two broad themes; this year we place special focus on the emerging areas of i) resource allocation and scheduling for applying Federated Learning over embedded and mobile devices and ii) Edge-centric Learning that leverages the radical progress in Mobile Edge Computing (MEC) technologies. As such, we particularly encourage submissions on these two topics. More specific topics of interest include, but are not limited to:
- Resource-efficient Federated and Edge-centric Learning
- Compression of Deep Model Architectures
- Neural-based Approaches for Modeling User Activities and Behavior
- Quantized and Low-precision Neural Networks (including Binary Neural Networks)
- Mobile Vision supported by Convolutional and Deep Networks
- Optimizing Commodity Processors (GPUs, DSPs, NPUs) for Deep Models
- Audio Analysis and Understanding through Recurrent and Deep Architectures
- Hardware Accelerators for Deep Neural Networks
- Distributed Deep Model Training Approaches
- Applications of Deep Neural Networks with Real-time Requirements
- Deep Models of Speech and Dialog Interaction on Mobile Devices
- Partitioned Networks for Improved Cloud and Edge Offloading
- OS Support for Resource Management at Inference Time
Keynote Speakers
Kun Wang (UCLA)

Important Dates
- Paper Submission Deadline: April 9th, 11:59PM AOE; May 7th, 11:59PM AOE (Final)
- Author Notification: May 24th
- WiP and Demo Deadline: May 7th, 11:59PM AOE
- Workshop Event: June 25th, 2021