Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/29969
Title: Privacy-Preserving Online Content Moderation: A Federated Learning Use Case
Authors: Leonidou, Pantelitsa 
Kourtellis, Nicolas 
Salamanos, Nikos 
Sirivianos, Michael 
Major Field of Science: Engineering and Technology
Field Category: Electrical Engineering - Electronic Engineering - Information Engineering
Keywords: Content moderation;Federated learning;Privacy
Issue Date: 30-Apr-2023
Source: ACM Web Conference 2023 - Companion of the World Wide Web Conference, 30 April - 4 May 2023, pp. 280 - 289
Start page: 280
End page: 289
Conference: ACM Web Conference 2023 - Companion of the World Wide Web Conference 
Abstract: Users are exposed daily to a large volume of harmful content on various social network platforms. One approach to protecting users is to develop online moderation tools that use Machine Learning (ML) techniques for automatic detection or filtering of content. On the other hand, processing user data requires compliance with privacy policies. In this paper, we propose a framework for developing content moderation tools in a privacy-preserving manner, where sensitive information stays on the users' devices. For this purpose, we apply Differentially Private Federated Learning (DP-FL), in which ML models are trained locally on the users' devices and only the model updates are shared with a central entity. To demonstrate the utility of our approach, we simulate harmful text classification on Twitter data in a distributed FL fashion, but the overall concept can be generalized to other types of misbehavior, data, and platforms. We show that the performance of the proposed FL framework can be close to that of the centralized approach, for both DP-FL and non-DP FL. Moreover, it maintains high performance even when only a small number of clients (each with a small number of tweets) are available for FL training. When the number of clients is reduced (from fifty to ten) or the tweets per client are reduced (from 1K to 100), the classifier can still achieve a high AUC. Furthermore, we extend the evaluation to four other Twitter datasets that capture different types of user misbehavior and still obtain promising performance (61% - 80% AUC). Finally, we measure the overhead on the users' devices during the FL training phase and show that local training does not introduce excessive CPU utilization or memory consumption.
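The DP-FL scheme described in the abstract (local training on each client, clipping and noising of model updates, server-side averaging) can be sketched in a few lines. The code below is a minimal illustration of that general pattern, not the paper's actual implementation: the logistic-regression model, clipping norm, noise multiplier, and client data are all illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    # One gradient-descent step of logistic regression on a client's local
    # data; the raw data never leaves the device, only the update does.
    preds = 1.0 / (1.0 + np.exp(-data @ weights))
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def privatize(update, rng, clip=1.0, noise_mult=0.5):
    # Clip the update's L2 norm and add Gaussian noise before sharing:
    # the standard mechanism for differentially private FL (values here
    # are placeholders, not the paper's privacy parameters).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip, size=update.shape)

def federated_round(global_w, clients, rng):
    # Each client trains locally; the central entity only ever sees
    # the noised updates, which it averages into the global model.
    updates = []
    for data, labels in clients:
        new_w = local_update(global_w.copy(), data, labels)
        updates.append(privatize(new_w - global_w, rng))
    return global_w + np.mean(updates, axis=0)

# Toy simulation: 10 clients with 100 synthetic samples each, mirroring
# the paper's small-client setting in spirit only.
rng = np.random.default_rng(42)
dim = 8
clients = [(rng.normal(size=(100, dim)),
            rng.integers(0, 2, 100).astype(float)) for _ in range(10)]
w = np.zeros(dim)
for _ in range(5):
    w = federated_round(w, clients, rng)
```

In a real deployment the averaging step would run on the server and each `local_update` on a separate device; frameworks such as TensorFlow Federated or Flower provide this orchestration.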
URI: https://hdl.handle.net/20.500.14279/29969
ISBN: 9781450394161
DOI: 10.1145/3543873.3587604
Rights: © Copyright held by the owner/author(s)
Type: Article
Affiliation: Cyprus University of Technology 
Telefonica Research 
Appears in Collections: Άρθρα/Articles

This item is licensed under a Creative Commons License.