The problem with AI is that, given how complex these models can get, they usually require heavy-duty hardware like dedicated servers that do all of the number crunching. This also means that, from the end-user's perspective, we need an internet connection in order to access AI services like ChatGPT, Bard, and so on.
That could change in the future, and Qualcomm has demonstrated that it is entirely possible for large and complex AI models to run locally, even on a mid-range Android device. During the annual IEEE/CVF Conference on Computer Vision and Pattern Recognition, Qualcomm showcased a demo in which it ran ControlNet, a 1.5-billion-parameter image-to-image language-vision model, locally on an Android device.
What’s surprising and impressive about this demo is that no internet connection is required for it to work. The company showed how it fed the AI an input image along with a text prompt and, within 12 seconds, the model generated a new image based on them, which is remarkably fast compared to competing AI systems.
Keep in mind that this is just a demo, so real-world usage might be a different story. But assuming it actually works as advertised, it could change the way AI is implemented, especially on our phones or even computers. By processing all the data locally, it preserves privacy because none of it goes to the cloud, where it could be hacked or intercepted.
It also means it could cut down on the need for dedicated server farms running 24/7 to process all the AI requests. At the moment, Qualcomm's on-device implementation isn't available to the public, but based on the demo, it is rather exciting, and we can't wait to see how it could be integrated into our phones and turn them into truly "smart" phones.
The post Qualcomm’s latest tech demo could change how AI is implemented on our phones – Phandroid first appeared on phandroid.com