PassiveLogic sets AI energy efficiency record with Differentiable Swift

PassiveLogic has announced that its Differentiable Swift compiler toolchain has established the industry record for AI energy efficiency. This achievement unlocks potential for AI applications across sectors, enabling edge-based robotics and addressing AI’s growing climate impact.

PassiveLogic’s work to advance Differentiable Swift has set a new energy efficiency precedent, surpassing Google’s TensorFlow and Meta’s PyTorch: it is 992x more efficient than TensorFlow and 4,948x more efficient than PyTorch.

AI is the defining technology of this decade. As a result, the energy required to power its computational infrastructure has grown exponentially. The energy intensiveness of AI models produced by today’s compilers not only impacts the climate but also impedes technological advancements that require battery power or small edge processors in mobile, robotic, and autonomous applications. Energy-efficient AI models help solve both the energy consumption and climate impact problems while simultaneously enabling next-generation edge applications.

AI efficiency is measured by the amount of energy consumed per compute operation, here denoted in Joules per giga-operation (J/GOps). PassiveLogic’s optimisations to Differentiable Swift resulted in Swift consuming a mere 34 J/GOps, while TensorFlow consumed 33,713 J/GOps and PyTorch 168,245 J/GOps — as benchmarked on NVIDIA’s Jetson Orin processor. Details about the benchmark are available in PassiveLogic’s article and open-source documentation on PassiveLogic’s GitHub.
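The headline multiples follow directly from the reported J/GOps figures. A quick check (a Python sketch; the variable names are illustrative, not from PassiveLogic’s benchmark code):

```python
# Reproduce the efficiency ratios from the reported Joules-per-giga-operation figures.
swift_j_per_gops = 34          # Differentiable Swift
tensorflow_j_per_gops = 33_713 # TensorFlow
pytorch_j_per_gops = 168_245   # PyTorch

# Lower J/GOps means more work per Joule, so the ratio gives the efficiency multiple.
tf_ratio = tensorflow_j_per_gops / swift_j_per_gops
pt_ratio = pytorch_j_per_gops / swift_j_per_gops

print(round(tf_ratio), round(pt_ratio))  # → 992 4948
```

Rounding the ratios recovers the quoted “992x” and “4,948x” figures.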

PassiveLogic has enabled the first general-purpose AI compiler with world-class support for automatic differentiation — the technology that powers deep learning. By applying Swift’s static analysis and performing efficient optimisation ahead of time, the compiler generates highly compact AI models that consume dramatically less energy without sacrificing quality. Because Swift is a general-purpose systems language, PassiveLogic has enabled the merging of AI and application code into a single paradigm. This greatly accelerates the development process, allowing researchers to build new AI technologies unbound from the narrow lens of existing AI toolchains.
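Automatic differentiation, the technique named above, computes exact gradients of ordinary program code. The following is a minimal reverse-mode sketch in Python for illustration only — Differentiable Swift performs this transformation at compile time via static analysis rather than at runtime, and the `Var`/`backward` names here are hypothetical:

```python
class Var:
    """A value that records how it was computed, so gradients can flow back."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Propagate d(out)/d(node) to every node via the chain rule."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local_deriv in node.parents:
            parent.grad += local_deriv * node.grad
            stack.append(parent)

x = Var(3.0)
y = x * x + x   # f(x) = x^2 + x, so f(3) = 12
backward(y)
print(y.value, x.grad)  # → 12.0 7.0  (df/dx = 2x + 1 = 7 at x = 3)
```

A compiler that performs this bookkeeping ahead of time, rather than building such a graph at runtime, is one way the energy savings described above become possible.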

“Our work on Differentiable Swift opens the door for new AI frontiers. The energy demands of AI training have artificially bifurcated the AI world into runtime inferencing and backroom training – blocking customers’ applications from getting smarter at the edge,” said Troy Harvey, the CEO of PassiveLogic. “By slashing compiler energy consumption by over 99% for novel AI models that don’t conform to the current deep learning orthodoxy, we’re paving the way for countless new AI use-cases that were previously impractical — be it physics, ecology, or economics.” He continued, “Our innovation on these technical challenges was borne from a clear customer need for AI that enables more kinds of compute for new applications. This is more than just a technological advancement; it catalyses innovation and sustainability.”

PassiveLogic’s advancements in Differentiable Swift are the result of collaboration with the Swift Core Team and ongoing work with the open-source Swift community. As a collaborator on the Swift language, the PassiveLogic team has submitted thousands of commits and contributed 33 patches and feature merges since August 2023.

Using more efficient AI compute promotes continued development and exploration while also addressing growing concerns about AI’s energy consumption. Though PassiveLogic’s compiler advancements are general purpose, the company is first applying them to logistics, simulations and autonomous infrastructural robots such as buildings and factories.

Comment on this article via X: @IoTNow_ and visit our homepage IoT Now