iEDGE Symposium General Chairs
Feras Awaysheh, University of Tartu
Kaoutar El Maghraoui, IBM Research, Yorktown
Yingjie Wang, Yantai University
Technical Program Committee
Victor R. Kebande, Blekinge Institute of Technology
Fahed Alkhabbas, Malmö University
Addi Ait-Mlouk, University of Skövde
Florian Schmidt, Technische Universität Berlin
William Lindskog, Technical University of Munich
Sadi Alawadi, Blekinge Institute of Technology
Anna Kobusińska, Poznań University of Technology
Flavia Delicato, Fluminense Federal University
Important Dates
Invited paper submissions open: February 8, 2024
UPDATED:
Invited paper submission deadline: April 27, 2024
Acceptance notification: May 20, 2024
CONTACT
For any questions regarding submission of invited papers, please email:
Feras M. Awaysheh, University of Tartu, Estonia [email protected]
Kaoutar El Maghraoui, IBM Research, Yorktown [email protected]
iEDGE – FOUNDATION MODELS AT THE EDGE (iEDGE 2024)
The IEEE Symposium on Intelligent Edge Computing and Communications (iEDGE 2024) will take place in Shenzhen, China, July 7–13, 2024, as part of the IEEE World Congress on Services (SERVICES). This year, iEDGE 2024 focuses on the pivotal role of foundation models in the era of intelligent edge computing. The symposium aims to demonstrate how these cutting-edge AI models can raise the efficiency and intelligence of edge computing, enabling more sophisticated, context-aware edge-enabled services. By spotlighting foundation models, iEDGE 2024 seeks to explore how they can enhance edge security, resilience, and the seamless edge-to-cloud continuum, catalyzing the next evolution of service computing globally.
iEDGE 2024 seeks to harness recent advances in the generalizability and capabilities of foundation models while addressing the stringent edge requirements of performance, latency, and decentralization. The symposium also serves as a platform for researchers, practitioners, and advocates of edge intelligence to converge and exchange insights on effectively managing edge/fog deployment architectures. Leveraging the edge is, in turn, indispensable for advancing foundation model operations: the edge brings inference closer to the data source, reducing latency and bandwidth demands by processing data locally on edge devices. This is particularly significant for applications that require real-time or low-latency responses, such as autonomous vehicles, IoT devices, and industrial automation.
Moreover, Edge-AI facilitates distributed training and fine-tuning of models, enabling the creation of personalized and adaptive models tailored to specific edge environments or user preferences without transferring substantial amounts of data to centralized servers. Additionally, edge computing platforms streamline foundation models’ deployment, management, and scaling across distributed edge infrastructure. The deployment process is streamlined through orchestration frameworks and tools, ensuring resource efficiency and automating tasks such as load balancing, fault tolerance, and version control of models deployed at the edge.
The iEDGE 2024 vision entails advancing and practically applying foundation models in Edge-AI services. The symposium places particular emphasis on training and inference optimization techniques that substantially improve the speed and efficiency of these models for edge deployment, while also enabling sophisticated on-device learning.
This ambitious objective will be realized by assembling leading researchers actively engaged in the following areas:
- The exploration of quantization strategies aimed at effectively reducing the size of AI models to fit the unique constraints of edge devices. The symposium will also address the critical need for parameter-efficient fine-tuning, enabling large-scale models to adapt to specific tasks while minimizing the requirement for extensive computational resources.
- The study and application of Model Distillation, which streamlines complex models to fit the limitations of edge computing without compromising their effectiveness. The symposium will also delve into the growing importance of Hardware Acceleration, examining how the integration of specialized hardware can dramatically boost AI performance at the edge.
- A focus on Energy-Efficient Inferencing Architectures, highlighting the necessity for sustainable, power-aware solutions in edge computing environments.
- An in-depth exploration of Federated Learning (FL) and its pivotal role in harmonizing the expansive capabilities of large-scale models with the imperative of safeguarding data privacy, achieved through the distributed training of models across multiple nodes.
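As a concrete anchor for the quantization theme above, the following framework-free sketch illustrates symmetric post-training int8 quantization of a weight vector; the function names and values are illustrative assumptions, not part of any symposium artifact or specific toolkit.

```python
# Minimal sketch: symmetric post-training int8 quantization.
# Real edge toolchains add per-channel scales, calibration, and
# quantization-aware training; this shows only the core idea.

def quantize_int8(weights):
    """Map float weights to int8 values sharing one scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.99]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage needs 4x less memory than float32; the round trip
# introduces an error of at most scale / 2 per weight.
```

The 4x memory reduction (and faster integer arithmetic on many edge accelerators) is what makes large models viable on constrained devices, at the cost of a bounded per-weight approximation error.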
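Similarly, the federated learning bullet can be grounded in a toy sketch of Federated Averaging (FedAvg), the baseline scheme behind most FL work; the clients, gradients, and dataset sizes below are invented for illustration, and no real network or model is involved.

```python
# Minimal sketch of Federated Averaging (FedAvg): clients train
# locally on private data and share only model weights, which the
# server averages weighted by local dataset size.

def local_update(weights, gradient, lr=0.1):
    """One local SGD step on a client's private data (simulated)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def fedavg(client_weights, client_sizes):
    """Server step: dataset-size-weighted average of client weights."""
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dims)
    ]

global_w = [0.0, 0.0]
# Each client computes its own gradient estimate from private data.
clients = [local_update(global_w, g) for g in ([1.0, -2.0], [3.0, 0.0])]
global_w = fedavg(clients, client_sizes=[100, 300])
# global_w ≈ [-0.25, 0.05]; raw data never left the clients.
```

Keeping raw data on-device while aggregating only parameters is precisely the privacy/scale trade-off the FL topics below address; production systems layer secure aggregation and differential privacy on top of this basic loop.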
TOPICS OF INTEREST (INCLUDING BUT NOT LIMITED TO):
- Multilingual Foundation Models at the Edge
- Foundation Models for Generative Edge-AI
- Edge for Foundation Models
- Foundation Models for FL Optimization
- Foundation Models for Horizontal/Vertical FL
- Efficient Feature Selection for FL
- Resilience and Reliability in Edge-AI
- Security and Responsibility in Edge-AI
- Trust and Privacy in Edge-AI
- Foundation Models in the Edge-to-Cloud Continuum
- AI/ML for Edge and Edge for AI/ML
- Decentralized Foundation Models at the Edge
- Zero-shot Learning at the Edge
- Prompt Engineering for Edge Foundation Models
- Data-centric Edge-AI
- Quantum Edge Intelligence
- 5G/6G and Wireless Communication for Edge-AI
- Green AI: Sustainable Practices in Edge Computing
- Edge-aware Neural Architecture Search
- Model Compression