We had the opportunity to sit down for an interview with Malini Bhandaru, Ph.D., Open Source Lead - ML, IoT & Edge at VMware, and Ramya Ravichandar, Ph.D., Vice President of Product Management, Sustainability at JLL, ahead of our 3rd Edge and AI Summit next week.
Ramya Ravichandar is responsible for building and scaling disruptive clean-tech products and solutions. She has over a decade of experience launching new products using AI, Edge, and IoT, and is currently fueled by a sense of urgency to deliver technology products that help meet our global sustainability goals. Ramya has a Ph.D. in Computer Science from Virginia Tech.
Malini Bhandaru leads open-source machine learning efforts at VMware and is currently involved with ONNX and Kubeflow. In the past she has worked on IoT/Edge and cloud open-source projects such as EdgeX Foundry, OpenStack, and OpenDaylight, as a developer and lead. She has had the opportunity to architect Intel Xeon power and performance features and to develop faster cryptography implementations, speech-to-text systems, remote monitoring and management solutions, and early eCommerce solutions. She has over 20 patents granted or pending and a Ph.D. in Machine Learning from the University of Massachusetts. She is also a STEM coach, child advocacy volunteer, career mentor, and avid gardener.
Hi, thanks for joining us today. Could you briefly explain what you mean by the edge?
Malini: For me the edge is any processing resource closer to the data source and/or consumption point, placed there to meet low response-latency or data-privacy requirements. It could be a tiny edge or something much larger, and it could be running workloads that control your home, monitor a person’s health, a factory floor, or a retail store, or run a collection of diverse applications.
Ramya: We view the edge as the compute closest to the source of data generation. The intent is to use that compute to parse high-frequency data streams by applying intelligent algorithms.
What applications/use cases are you seeing for AI at the edge? Any industries more dominant than others in this area?
Ramya: We are seeing increasingly powerful edge devices being deployed to avoid communication latency, especially for mission-critical applications. Any application that demands real-time responses and actions is an ideal candidate for AI at the edge, ranging from autonomous vehicles to real-time quality inspection in process manufacturing.
Malini: IoT/Edge is already here, and several applications have made their debut, from surveillance to health care to connected cars and more. With Covid-19 we saw edge applications that used AI/ML to detect whether people were wearing masks and maintaining social distancing. As 5G rolls out we will see more real-time, low-latency applications, such as monitoring and managing smart grids with renewable energy sources.
What are the benefits of bringing some cloud functions to the edge as part of the application? How would it help AI-related applications?
Malini: For any supervised learning, data collected at the edge would still need to be brought to human labelers. But unsupervised and reinforcement learning (RL) could be pushed out to the edge where the data is gathered, saving network bandwidth and meeting any mandated data-privacy regulations. For instance, assume you have a retail edge with an application that provides targeted advertisements to the customer based on camera input: cart contents, gaze location, and/or gauged age and gender. The RL reward function might be the number of recommended items actually purchased. Over time the models built would be relevant to the clientele at that store location and season.
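To make that idea concrete, here is a minimal sketch of an on-edge learner, using an epsilon-greedy bandit as a stand-in for the RL policy Malini describes. The class name, catalog items, and reward bookkeeping are illustrative assumptions, not anything from the interview; the point is simply that the reward (recommended items actually purchased) and the learned statistics never have to leave the store's edge node.

```python
import random
from collections import defaultdict

class EdgeRecommender:
    """Epsilon-greedy bandit kept entirely on the store's edge node (illustrative).

    Reward = number of recommended items the shopper actually purchased, so the
    policy drifts toward items that work for this location and season.
    """

    def __init__(self, catalog, epsilon=0.1):
        self.catalog = list(catalog)
        self.epsilon = epsilon
        self.counts = defaultdict(int)      # times each item was recommended
        self.rewards = defaultdict(float)   # purchases following a recommendation

    def recommend(self, k=3):
        """Pick k items: usually the best performers, occasionally explore."""
        if random.random() < self.epsilon:
            return random.sample(self.catalog, k)
        ranked = sorted(
            self.catalog,
            key=lambda item: self.rewards[item] / max(self.counts[item], 1),
            reverse=True,
        )
        return ranked[:k]

    def update(self, recommended, purchased):
        """Feed back the reward: +1 for each recommended item that was bought."""
        for item in recommended:
            self.counts[item] += 1
            if item in purchased:
                self.rewards[item] += 1.0

# All learning stays local; raw camera frames never leave the edge.
rec = EdgeRecommender(catalog=["umbrella", "sunscreen", "coffee", "ice_cream"])
shown = rec.recommend()
rec.update(shown, purchased={"coffee"})
```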
Ramya: Beyond the obvious benefits of reduced latency and decreased bandwidth and storage needs, edge applications increase the reliability of AI applications. They provide the confidence of having analyzed every single data point before converging on a decision. When the same AI application is delegated to the cloud, there is an increased likelihood of downsampling, especially for bandwidth-intensive streams. Consider the simple use case of detecting a rogue agent: one can weigh the efficacy of running a face recognition algorithm on every camera against sending snippets of video streams to the cloud from the hundreds of cameras in the city.
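A rough back-of-envelope comparison illustrates the bandwidth side of that trade-off. The camera count, stream bitrate, and event sizes below are assumed round numbers chosen for illustration, not figures from the interview.

```python
# Illustrative comparison: backhaul full video to the cloud vs. run detection
# on each camera and send only small detection events upstream.

CAMERAS = 300                    # hypothetical city-wide deployment
STREAM_MBPS = 4.0                # roughly a 1080p H.264 stream per camera
EVENT_KB = 50                    # cropped face plus metadata per detection
EVENTS_PER_CAMERA_PER_HOUR = 20

cloud_gb_per_hour = CAMERAS * STREAM_MBPS * 3600 / 8 / 1000              # Mbit -> GB
edge_gb_per_hour = CAMERAS * EVENTS_PER_CAMERA_PER_HOUR * EVENT_KB / 1e6  # KB -> GB

print(f"Backhaul to cloud : {cloud_gb_per_hour:,.0f} GB/hour")   # ~540 GB/hour
print(f"Edge inference    : {edge_gb_per_hour:,.2f} GB/hour")    # ~0.30 GB/hour
```

Under these assumptions, edge inference cuts upstream traffic by roughly three orders of magnitude while still letting every frame be analyzed locally.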
And finally, are you seeing security concerns with AI, and if so, how do you see it being resolved?
Ramya: The use of AI opens up new avenues of security issues. AI models can be manipulated through their inputs, which malicious actors could craft to reflect a different reality. Additionally, the devices are more connected, which immediately opens up new opportunities for infiltration. As with any new technology, security issues must be anticipated and addressed as an inherent part of the product design.
Malini: ML models are vulnerable just like software: perhaps the original model is replaced by a compromised model, or the environment is altered slightly to confuse the models in use. Models themselves, like software, may contain bugs, or may be incorrect because of biases or poor-quality training data. Criteria for assessing model quality, model transparency and explainability, and, in high-risk environments, multiple models guarding against known weaknesses will all help improve trust and security.
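Two of the safeguards Malini mentions, guarding against a swapped-in compromised model and using multiple models in high-risk environments, can be sketched in a few lines. The function names, digest handling, and voting rule below are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
from collections import Counter

def verify_model_file(path, expected_sha256):
    """Refuse to load a model whose on-disk bytes don't match a published digest,
    guarding against the replaced/compromised-model scenario."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"Model at {path} failed integrity check")
    return path

def ensemble_predict(models, sample):
    """Majority vote across independently trained models; a single model fooled
    by a slightly altered input can be outvoted by the others."""
    votes = Counter(model(sample) for model in models)
    label, count = votes.most_common(1)[0]
    if count <= len(models) // 2:
        raise RuntimeError("No majority agreement; flag input for human review")
    return label
```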
Join the 3rd edition of our Edge and AI Summit next week, June 24th.