
The Role of AI in Confidential Computing

More companies than ever are providing global employees with remote access, but with the steep rise in ransomware attacks on cloud environments driven by COVID and outdated trust policies, many organizations’ data remains at risk. While most cloud providers offer encryption services to protect data at rest (in storage) and in transit (moving across networks), cloud environments remain highly vulnerable while data is in use.

Before data can be processed by an application, it is decrypted in memory, leaving its contents exposed immediately before, during, and after runtime. Threats include memory dumps, root-user compromise, and malicious insiders. When confidential computing is combined with storage encryption, network encryption, and a proper Hardware Security Module (HSM) for key management, it can provide robust end-to-end data security in the cloud.

What is Confidential Computing?

Confidential computing is a cloud computing technology designed to keep cloud data safe, confidential, and easily accessible while it is in use. It addresses the vulnerabilities left open by loosely defined security protocols and weak encryption, providing organizations with greater protection. By isolating sensitive data as it is being processed, confidential computing solves a host of security issues.

Vulnerabilities in Confidential Computing

Confidential computing works by relying on a hardware-based trusted execution environment (TEE), or secure enclave, within a CPU. Today, most confidential computing environments rely on standard TLS 1.3 encryption to protect data in transit, effectively creating an encrypted tunnel between endpoints. Generally speaking, there are two approaches to creating a confidential computing environment, and each has its benefits and drawbacks.
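As a side note on the transport layer, enforcing TLS 1.3 for data in transit can be done with standard tooling. The minimal Python sketch below (the host example.com is just a placeholder) refuses any older protocol version; it assumes Python 3.7+ built against OpenSSL 1.1.1 or newer.

```python
# Minimal sketch: enforce TLS 1.3 for data in transit using Python's
# standard ssl module. "example.com" is a placeholder host.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and older

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # prints "TLSv1.3" if the server supports it
```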

The first approach leverages Fully Homomorphic Encryption (FHE), which allows computations to run directly on encrypted data. Since most data theft occurs while data is temporarily decrypted or stored as plain text, homomorphic encryption lets operations be performed on data without ever decrypting it. But while FHE protects data from exploitation while in use in the cloud, its heavy computational overhead makes it poorly suited to workloads that require high-performance or accelerated processing, such as AI-based applications.
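For intuition on computing over ciphertexts, here is a minimal sketch using the open-source python-paillier library (`phe`). Paillier is only additively homomorphic, not fully homomorphic, but it illustrates the core idea: an untrusted party can aggregate encrypted values without ever seeing the plaintext. The salary figures are made up for the example.

```python
# Minimal sketch of computing on encrypted data with python-paillier ("phe").
# Paillier supports addition on ciphertexts; the server never sees plaintext.
from phe import paillier

# Data owner generates a keypair and encrypts its values
public_key, private_key = paillier.generate_paillier_keypair()
salaries = [52000, 61000, 58500]
encrypted = [public_key.encrypt(s) for s in salaries]

# An untrusted party can sum the ciphertexts without decrypting them
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the data owner holds the private key and can decrypt the result
print(private_key.decrypt(encrypted_total))  # 171500
```

Full FHE schemes extend this idea to arbitrary computations, but at a far higher performance cost, which is exactly the limitation noted above.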

The second approach is to use hardware-based memory encryption to isolate specific application code and data in memory. This can be done with technology such as Intel’s Software Guard Extensions (SGX), a set of security-related instruction codes built into some modern Intel central processing units (CPUs). But this method still has major problems: SGX runs on standard CPUs, which remain vulnerable to side-channel attacks and have significant limitations when it comes to running AI-based applications.

The Risks of Confidential Computing & Artificial Intelligence

As noted above, the standard processors that power AI today are designed for performance, not security, and are highly vulnerable to attack. A cyberattack targeting an application’s proprietary algorithms can be devastating to a company, especially if those algorithms and data are a critical part of its IP. Unlike security vulnerabilities in traditional systems, security weaknesses in Machine Learning (ML) systems stem largely from the lack of explainability of AI models.

This lack of explainability leaves openings that adversarial machine learning methods can exploit, including evasion, poisoning, and backdoor attacks. Such attacks can be designed and executed in a number of ways: stealing the algorithm itself, manipulating an algorithm’s output, or injecting malicious data during training to skew the model’s inferences. Attackers may also implant backdoors in models or extract training data from query results.
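To make the evasion case concrete, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression classifier; the model weights, input, and epsilon budget are illustrative placeholders rather than values from any real system.

```python
# Minimal sketch of an evasion attack (Fast Gradient Sign Method, FGSM)
# against a toy logistic-regression classifier. All values are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "victim" model: p(class 1 | x) = sigmoid(w . x + b)
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

# A benign input the model confidently assigns to class 1
x = np.array([1.0, -0.5, 0.3])
print("clean score:", predict(x))            # ~0.95 -> class 1

# FGSM: perturb the input in the direction that increases the loss for the
# true label (1), within an epsilon budget (deliberately large in this toy).
eps = 1.0
grad_x = (predict(x) - 1.0) * w              # d(cross-entropy)/dx for label 1
x_adv = x + eps * np.sign(grad_x)
print("adversarial score:", predict(x_adv))  # ~0.27 -> misclassified as class 0
```

Poisoning and backdoor attacks work at the other end of the pipeline, corrupting the training data or the model itself rather than the inputs presented at inference time.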

Another kind of attack targeting AI applications is the Advanced Persistent Threat (APT), in which the bad actor’s intention is to remain undetected inside a protected environment. This kind of exploit can be an attack vector of choice for a few critical reasons: it allows attackers to learn a system’s internal architecture, sniff out targeted data, or manipulate administrative roles. Additionally, while an AI model runs inside a ‘black box’ or protected environment, security teams cannot run antivirus software against it.

Regulatory Data Compliance

An added component to all of this is the challenge organizations face when protecting internal company data. As mentioned, a data breach of this kind can be detrimental to a company for both financial and reputational reasons. Today, many countries have data regulations that affect the use of AI and the management of user data, such as the GDPR in the EU and HIPAA in the US.

These laws clearly outline the consequences for companies that fail to meet their data protection standards, and if data is compromised, organizations often face lawsuits and exorbitant legal fees. Considering that ransomware became the malware of choice for many bad actors in 2020, many companies may also have to pay a hefty ransom to recover their stolen data.

Data breaches are more common and damaging than you may think. Research shows that while many companies understand the importance of data regulation, most fail to comply with it properly. For example, in January 2020 a customer support database holding over 280 million Microsoft customer records was left exposed on the web. Another attack on T-Mobile in March 2020 allowed hackers to access sensitive customer information through a T-Mobile employee email account. Both companies now face lawsuits over their mishandling of the incidents, as well as the cost of mitigating the reputational fallout from these kinds of attacks.

Conclusion

So what can companies do to protect their application data? It is more critical than ever for organizations to adopt security solutions that provide a secure environment and enable them to comply with regulation without compromising the performance of their AI and other technology. As mentioned earlier, when confidential computing is combined with the right tools, such as a resilient Hardware Security Module (HSM) for key management, it can provide robust end-to-end data security in the cloud for both AI and non-AI applications.

Learn More About HUB Security’s Confidential Computing Solution