
The world is shifting from cloud-based AI servers to AI-ready computers. The aim is to make AI systems faster, more energy-efficient, and more secure, reducing the need to send sensitive information to cloud servers.

In our new article series, we focus on the functionalities of Neural Processing Units (NPUs) and Central Processing Units (CPUs) and explore their role in the future of AI processing. 

Beyond CPUs and NPUs

We're at the gates of a new AI era, where significant shifts in processing are underway. The traditional boundaries of Central Processing Units (CPUs) and Neural Processing Units (NPUs) are being pushed beyond their limits as we speak. The pace of innovation in AI demands an equally dynamic evolution in the processors that power it, heralding the potential emergence of specialised processors designed to meet the unique challenges and opportunities of AI's next frontier.


The Onset of Specialised Processors

The journey beyond CPUs and NPUs is marked by exploring specialised processors that can offer even greater efficiency and performance in AI applications. These future processors may be tailored to specific AI domains, such as quantum computing for complex problem-solving or bio-inspired computing for learning and adaptation. The ongoing research and development in AI hardware optimisation are driven by the quest to break through the limitations of current technologies, enabling faster, more efficient, and more intelligent AI systems.


Research and Development in AI Hardware Optimisation

The field of AI hardware optimisation is abuzz with activity as researchers and engineers aim to innovate beyond the usual CPUs and NPUs. This includes efforts to reduce energy consumption, increase processing speed, and minimise latency while maintaining or enhancing the accuracy and reliability of AI systems. At the forefront of this research is the development of new materials, circuit designs, and computing paradigms such as neuromorphic computing, which mimics the neural structures of the human brain. These advancements promise to usher in a new generation of AI processors that are not only more powerful but also more aligned with the principles of sustainable and responsible technology.


An Optimistic Outlook on the Future of AI Processing

The future of AI processing is bright with possibilities. As we move past CPUs and NPUs, more specialised processors come into play, offering new ways to innovate and use AI. This evolution is not just about achieving higher performance metrics; it's about redefining what is possible in AI, from enhancing everyday technologies to solving some of the world's most pressing challenges.

The ongoing advancements in AI hardware optimisation showcase the human spirit of inquiry and invention. With a continued focus on innovation, collaboration, and ethical considerations, the future of AI processing holds the promise of unlocking unprecedented levels of intelligence and capability in machines. As we look forward to this future, it's clear that the journey of AI is only just beginning, and the processors that power it will continue to play a pivotal role in shaping its trajectory.


Microsoft’s Vision Takes Shape

AI isn't just talk anymore—it's driving innovation. Microsoft leads the charge, reshaping PCs with its vision of AI-powered computers. The latest iteration of Windows introduces a dedicated Copilot key, signalling a leap towards integrating AI capabilities directly into the fabric of personal computing. Central to this vision is the requirement for NPUs capable of performing at least 40 trillion operations per second, a specification that promises to bring Microsoft Copilot’s functionalities closer to the user, locally, albeit with some reliance on cloud processing.

During Intel’s AI Summit in Taipei, a significant announcement was made, officially outlining the hardware requirements for running Microsoft’s AI model on Windows. Intel, a stalwart advocate for the AI PC category, underscored the importance of these specifications. The move towards local processing of large language models (LLMs) like Microsoft Copilot offers several intrinsic benefits, including reduced latency for quicker response times and a theoretical improvement in privacy. Moreover, this shift aims to alleviate the load on Microsoft’s servers, allowing for more efficient use of resources in training new AI models or providing cloud-based AI services.


The Hardware Challenge and the Promise of Local Processing

The ambition to run AI models locally, however, is not without its challenges. The sheer size of models such as GPT-4, which underpins Microsoft Copilot, necessitates a significant amount of computational power and memory. Current estimates place GPT-4 at around 1.7 trillion parameters, a scale that requires innovative solutions to run effectively on personal computing devices.
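To see why a model of this scale cannot simply be loaded onto a laptop, a rough back-of-envelope calculation helps. The sketch below uses the 1.7-trillion-parameter estimate cited above; the per-parameter byte counts correspond to common numeric precisions, and the figures cover only the memory needed to hold the weights:

```python
# Back-of-envelope memory estimate for holding a large model's weights locally.
# 1.7e12 parameters is the GPT-4 estimate cited above; actual requirements
# also include activations and runtime overhead, so these are lower bounds.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to store the weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

PARAMS = 1.7e12  # estimated parameter count

for label, nbytes in [("float32", 4), ("float16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label:>8}: {model_memory_gb(PARAMS, nbytes):,.0f} GB")
```

Even aggressively quantised to 4 bits per parameter, the weights alone would occupy hundreds of gigabytes, far beyond the memory of any personal computer.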

Intel’s discussions around running “elements” of Copilot locally suggest a strategic move towards utilising smaller, specialised models that can operate within the constraints of existing hardware. This approach is further evidenced by Microsoft’s investment in Mistral AI, a firm specialising in creating compact AI models. Such developments hint at a future where AI PCs can offer a blend of local and cloud processing, leveraging the strengths of each to deliver enhanced privacy, responsiveness, and reliability in AI applications.
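As an illustration of how such a hybrid split might work, the sketch below routes a request to a local model when it fits within an assumed on-device memory budget and falls back to the cloud otherwise. The model names, sizes, and the 8 GB budget are all hypothetical, not actual figures from Microsoft or Intel:

```python
# A minimal sketch of hybrid local/cloud routing for AI workloads.
# All model names, sizes, and the memory budget are hypothetical.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    memory_gb: float  # memory needed to hold the model's weights


LOCAL_MEMORY_BUDGET_GB = 8.0  # assumed headroom on a typical AI PC


def choose_backend(model: Model) -> str:
    """Run on the local NPU/GPU when the model fits, otherwise use the cloud."""
    return "local" if model.memory_gb <= LOCAL_MEMORY_BUDGET_GB else "cloud"


print(choose_backend(Model("compact-assistant", 3.5)))  # small model: on-device
print(choose_backend(Model("full-llm", 3400.0)))        # large model: cloud
```

In practice the routing decision would also weigh latency, battery state, and privacy policy, but the memory check above captures the core constraint driving the shift to compact models.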


Navigating the Present and Future of AI PCs

Despite the positive outlook, AI PCs have limitations, especially in the processing power of NPUs. Microsoft's specifications for AI PCs have yet to be met by most devices on the market, with existing NPUs falling short of the required 40 trillion operations per second. This gap highlights the reliance on GPUs to supplement processing power and meet the demands of running complex AI models.

Looking ahead, advancements in processor technology, such as Qualcomm’s Snapdragon X Elite mobile processors, promise to bridge this gap. These future processors, capable of 45 TOPS, alongside powerful GPUs, pave the way for running substantial AI models entirely on-device. As hardware continues to evolve, the vision of AI PCs capable of offloading more functionality to local devices becomes increasingly feasible, marking a significant milestone in the journey towards truly intelligent personal computing.
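The gap can be expressed as a simple throughput check against Microsoft's stated 40 TOPS bar. In the sketch below, the Snapdragon X Elite's 45 TOPS is the figure cited above, while the "typical current NPU" value is purely illustrative:

```python
# Checking NPU throughput against the 40 TOPS requirement stated for AI PCs.
# The Snapdragon X Elite figure is cited in the text; the other is illustrative.

REQUIRED_TOPS = 40  # trillion operations per second


def meets_ai_pc_bar(npu_tops: float, required: float = REQUIRED_TOPS) -> bool:
    """True if the NPU alone satisfies the stated throughput requirement."""
    return npu_tops >= required


devices = {
    "typical current NPU": 16,   # illustrative figure below the bar
    "Snapdragon X Elite": 45,    # figure cited above
}

for name, tops in devices.items():
    verdict = "meets" if meets_ai_pc_bar(tops) else "falls short of"
    print(f"{name}: {tops} TOPS {verdict} the {REQUIRED_TOPS} TOPS requirement")
```

Devices below the bar must lean on the GPU to make up the shortfall, which is exactly the supplementary role described above.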

Incorporating AI capabilities directly into PCs represents a bold step forward, blending the power of cloud computing with the advantages of local processing. As Microsoft and its partners navigate the challenges and opportunities of this transition, the promise of AI PCs that enhance privacy, efficiency, and user experience shines brightly on the horizon, heralding a new era of personal computing.

Contact us for further information on our AI-ready computer offerings.

Have you read the previous article in our series? Click here to read it.
