
Speaker Spotlight with AI Lead Architect

Written by Andrey Iaremenko | May 2, 2021 10:00:00 PM

This week we’re talking about federated learning, including how it is developed and its security risks. In honor of our event Federated Learning, AI & Data Security, we sat down with Mr. Abdul Rahman Sattar, Lead Architect of Cybersecurity Analytics at TELUS, to get some insights on the state of security when it comes to AI/ML federated learning.

https://youtu.be/PL39scRsxvE

To get started, can you briefly explain how federated AI works?

In a traditional machine learning pipeline, the end devices upload their logs to a central server, which most likely lives in the cloud. The server then does data cleansing and trains a machine learning model.

At the inference stage, the end devices then reach out to the central server to get prediction results. This classical setup has quite a few issues, including data privacy and scalability concerns.

With federated learning we flip the problem on its head: instead of taking the “data to the code”, we bring the “code to the data”. The end device participates in the model learning process with the help of a central parameter server.

Each end device in the federation trains the model locally and uploads its local model to a central parameter server, where all the local models are aggregated. The updated global model is then pushed back to the end devices for further training, and this repeats in a loop until convergence. At the end of the training stage, each device has its own copy of the global model residing locally.
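For readers who want to see that loop in code, here is a minimal sketch of the federated averaging idea in Python/NumPy. The linear model, the `local_sgd` helper and the toy three-device federation are illustrative assumptions, not part of any particular framework.

```python
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """Hypothetical local training step: a few epochs of SGD on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round: each device trains locally, the server averages the local models."""
    local_models = [local_sgd(global_w, X, y) for X, y in clients]
    return np.mean(local_models, axis=0)     # often weighted by data size in practice

# Toy federation: three devices, each holding its own private data slice.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):                          # loop until convergence
    w = federated_round(w, clients)
print(w)                                     # approaches true_w without sharing raw data
```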

When would you use Federated AI vs Central AI? 

In a B2C scenario, federated learning is applicable when data is highly siloed and owned by consumer devices, e.g. user mobile devices or IoT devices. In most of these cases, the data could contain Personally Identifiable Information (PII), so there are privacy risks and concerns around uploading it to a third-party cloud server.

In this scenario, federated learning gives you some privacy protection, since the data never leaves the device during the model training and inference stages; only model parameters are uploaded to the parameter server. However, information leakage is still possible even in this setting, so for greater privacy one would use confidential computing and privacy-preserving techniques like secure multiparty computation, differential privacy, homomorphic encryption, or cryptographic hardware techniques like Secure Enclaves to ensure confidentiality and privacy.
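As a rough illustration of the differential-privacy flavour of those techniques, a client might clip and noise its model update before upload. The clip norm and noise scale below are placeholder values, not a calibrated privacy budget.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.5, rng=np.random.default_rng()):
    """Clip the update's L2 norm, then add Gaussian noise before it leaves the device.
    Real deployments derive noise_std from a target (epsilon, delta) privacy budget."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)

# e.g. noisy_update = privatize_update(local_weights - global_weights)
```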

In a B2B setting, federated learning lets a business keep the competitive advantage of retaining its data, which is its core asset, while still benefiting from the information held by its peers: its ML models are effectively trained on both its own data and its peers’ data.

Is the effort to develop Federated AI models similar to central AI?

Implementing Federated AI is more challenging than central AI. The learning has to scale to potentially millions of end devices. Additionally, one has to make sure that the model learning process does not interfere with the normal usage of the end device, which means end devices might drop out of the “federation” if they become busy.

In these cases, the learning process should still be able to continue even if some of these devices drop out. Luckily for data scientists, the federated learning runtime can take care of most of these challenges and do the heavy lifting, so they can just focus on defining the models and the business logic.
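A minimal sketch of how a runtime might tolerate dropouts, assuming the server simply aggregates whichever client updates actually arrive in a round (the function name is illustrative):

```python
import numpy as np

def aggregate_available(global_w, client_updates):
    """Aggregate the subset of clients that reported back this round;
    if every device dropped out, keep the previous global model."""
    if not client_updates:
        return global_w
    return np.mean(client_updates, axis=0)
```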

Also, federated learning does take longer to converge than central AI approaches, but the positives it affords in terms of better privacy and streamlined network usage outweigh the negatives.

What information is passed between federated AI components?

At a high level, in a vanilla federated learning setup, during the learning process the end devices will exchange model weights or gradients with a central parameter server. During the inference stage, since the model lives on the end device, no information is passed between the federated AI components.

What kinds of threats exist with Federated AI?

Even though the vanilla federated learning setup does offer some privacy, since the data never leaves the end device during the training process, information leakage is still possible by observing the model parameters exchanged between the end device and the parameter server. 

For instance, there could be various inference attacks, including attribute inference and membership inference, through which an attacker with knowledge of the local machine learning model can reconstruct the data on the device.
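To give a feel for how simple the core of a membership-inference attack can be in principle, here is a toy sketch that guesses a record was in the training set when the model’s loss on it is suspiciously low; the threshold and model interface are assumptions for illustration, not a real attack tool.

```python
def membership_guess(model_predict, x, y_true, loss_threshold=0.1):
    """Toy membership inference: well-fit (low-loss) records are guessed to be
    training-set members. model_predict is any callable returning a prediction for x."""
    loss = float((model_predict(x) - y_true) ** 2)
    return loss < loss_threshold
```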

There are also various poisoning attacks possible during the model training process, including data poisoning and model poisoning, in which the attacker flips labels or changes the features of the data (a backdoor attack) and sends poisoned updates to the server.
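As a toy illustration of the label-flipping variant (the function and class labels below are made up): a malicious participant simply trains on corrupted labels, so its otherwise honest-looking update drags the global model in the wrong direction.

```python
import numpy as np

def flip_labels(y, source=0, target=1):
    """Toy label-flipping poisoning: a malicious client relabels one class as another
    before running its local training, so the update it sends is poisoned."""
    y_poisoned = y.copy()
    y_poisoned[y == source] = target
    return y_poisoned
```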

How can an organization protect itself against such threats?

To protect against various inference attacks, organizations can use techniques from confidential computing and privacy-preserving machine learning, including Secure Enclaves, Differential Privacy, Secure Multiparty Computation and Homomorphic Encryption.

To safeguard against data and model poisoning, one can use behavioral analysis techniques on the model updates sent by the end devices. If the parameter server has labelled sample data, it can also apply the local model updates to that sample data to detect poisoning attacks.
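A minimal sketch of that behavioural idea, assuming the server screens updates by their distance from the coordinate-wise median update; production defences (Krum, trimmed mean, norm bounding, and so on) are more involved.

```python
import numpy as np

def filter_suspicious_updates(updates, z_threshold=2.5):
    """Flag client updates whose distance from the median update is unusually large;
    only the remaining updates are aggregated."""
    updates = np.asarray(updates)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    cutoff = dists.mean() + z_threshold * dists.std()
    keep = dists <= cutoff
    return updates[keep].mean(axis=0), np.where(~keep)[0]  # aggregate + flagged client ids
```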

To protect against the central parameter server being compromised, there has been work on federated model training in a peer-to-peer setup. There has also been quite a bit of work on federated model training using blockchain to make the learning process more decentralized, increase transparency, and provide an incentive mechanism and reputation-scoring system for the trainers.

Can you give an example or two of a federated AI use case?

In cybersecurity, federated learning can be used for network intrusion and malware detection, for example by training classifiers and deep autoencoders for anomaly detection.

Google has used federated learning to improve its Gboard end-user experience, training models across millions of Android devices to better predict the next word given the context.

A great B2B use case for federated learning would be training classifiers to predict the likelihood of a disease across multiple hospitals, without the hospitals ever having to share the private data of their patients.

Abdul Rahman Sattar is the Lead Cybersecurity Analytics Architect at TELUS. He is also leading research with academic leaders on pushing the state of the art in cybersecurity analytics, leveraging Federated Machine Learning, Distributed AI and Edge Computing for Connected and Autonomous Vehicle Security. Abdul is a Steering Committee member at multiple Cybersecurity and AI/ML communities, including the Toronto Machine Learning Society (TMLS), Automotive Security Research Group (ASRG) and Aggregate Intellect.