FLNLP workshop

International Workshop on Federated Learning for Natural Language Processing and Text Analysis (FLNLP)

To be held at EAI TRIDENTCOM 2021
Melbourne, Australia.

Workshop Chair

Dr. Yuan Jin, Monash University, Australia

Scope

Nowadays, the availability of large amounts of natural language processing (NLP) and text data, coupled with the development of sophisticated language and text mining models, has led to great research progress in machine understanding and generation of natural language content. Meanwhile, practical concerns regarding the privacy and security of NLP data have often been overlooked. These concerns include how to safely collect training data from distributed sources without disclosing sensitive information, and how to maintain and secure a central repository of the collected data. Traditional approaches tend to be expensive and susceptible to problems such as data leakage, misuse and abuse. Federated learning (FL) has emerged as a promising alternative: it trains a global model directly over the distributed data, so that raw data never leaves its owners, thereby avoiding the above problems.
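To make the idea concrete, the core FL training loop can be sketched as federated averaging (FedAvg): clients update a shared model on their private data, and a server averages the resulting models. The following is a minimal illustrative sketch only; the client data, the toy local-update rule, and all names are hypothetical, not part of any specific FL system.

```python
# Minimal FedAvg sketch. Weights are plain lists of floats for clarity;
# real systems would use model parameters and gradient-based training.

def local_update(weights, data, lr=0.1):
    # Stand-in for one round of local training: nudge the global
    # weights toward this client's (private) data.
    return [w - lr * (w - x) for w, x in zip(weights, data)]

def fed_avg_round(global_weights, client_datasets):
    # One communication round: each client trains locally, then the
    # server averages the client models. Raw data stays on the clients.
    client_models = [local_update(global_weights, d) for d in client_datasets]
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

# Two hypothetical clients, each holding data the server never sees.
clients = [[1.0, 2.0], [3.0, 4.0]]
w = [0.0, 0.0]
for _ in range(50):
    w = fed_avg_round(w, clients)
print(w)  # approaches the mean of the client data, [2.0, 3.0]
```

Only model parameters cross the network in this loop, which is what lets FL sidestep the central-repository risks described above; privacy attacks on those shared parameters are exactly the kind of open problem this workshop targets.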

So far, the potential of federated learning has been demonstrated in several industrial domains, including healthcare, IoT, and computer vision, and in a broad range of research directions, such as personalized, vertical, and decentralized FL, as well as improved model training and privacy preservation for FL. That said, FL is still in its infancy in both industrial application and research, and its implications and significance for these areas still need to be identified, verified and updated.

This workshop focuses on identifying new FL challenges in NLP and on developing novel FL techniques, frameworks and systems for NLP applications. Its goal is to foster early interactions among FL and NLP researchers as well as industry professionals, in order to understand the implications and significance of FL for NLP and its impact on both communities, at a time when FL has become an increasingly popular topic at NLP conferences.

Topics

FL system designs for NLP applications 

Adversarial attacks on FL and their defenses for NLP applications

Personalized FL for NLP applications

Federated representation learning for NLP applications

Language model pre-training and fine-tuning in FL contexts

Communication compression of language models in FL contexts 

Knowledge distillation and transfer learning of language models in FL contexts

Neural architecture search of language models in FL contexts

Federated deep language models

Privacy issues for FL in NLP applications

Privacy-preserving methods for FL in NLP applications

Evaluation frameworks and methodologies for FL in NLP applications

Submit paper to this workshop

Important dates

Paper submission: 6 September 2021 (extended!)

Notification deadline: 27 September 2021

Camera Ready: 14 October 2021

Publication
Accepted and presented technical papers will be submitted for publication by Springer and made available through the SpringerLink Digital Library. Workshop papers will be published as part of the EAI TRIDENTCOM 2021 Conference Proceedings. The proceedings will be submitted for inclusion in leading indexing services, including Ei Compendex, ISI Web of Science, Scopus, CrossRef, Google Scholar, DBLP, as well as EAI's own EU Digital Library (EUDL).

Submission
We invite workshop participation through contributions that respond to one or more of the topics above, in 12-15+ pages. Papers should be submitted through the EAI Confy+ system and must comply with the Springer format (see the Author's kit section).

Technical Program Committee of the Workshop

Dr. Longxiang Gao, Deakin University, Australia
Prof. Yong Xiang, Deakin University, Australia