Intel VP talks AI strategy as company takes on Nvidia


Intel is on an artificial intelligence (AI) mission that it considers very, very possible.  

The company is the world’s largest semiconductor chip manufacturer by revenue, and is best known for its CPU market dominance, with its familiar “Intel inside” campaign — reminding us all what resided inside our personal computers. However, in an age when AI chips are all the rage, the company finds itself chasing competitors, most notably Nvidia, which has a massive head start in AI processing with its GPUs. 

There are significant benefits to catching up in this space. According to a report, the AI chip market was worth around $8 billion in 2020, but is expected to grow to nearly $200 billion by 2030. 

At Intel’s Vision event in May, the company’s new CEO, Pat Gelsinger, highlighted AI as central to the company’s future products, while predicting that AI’s need for higher performance levels of compute makes it a key driver for Intel’s overall strategy. 



Gelsinger said he envisioned four superpowers that spur innovation at Intel: pervasive connectivity, ubiquitous compute, AI and cloud-to-edge infrastructure. 

That requires high-performance hardware-plus-software systems, including in tools and frameworks used to implement end-to-end AI and data pipelines. As a result, Intel’s strategy is “to build a stable of chips and open-source software that covers a broad range of computing needs as AI becomes more prevalent,” a recent Wall Street Journal article noted. 

“Each of these superpowers is impressive on its own, but when they come together, that’s magic,” Gelsinger said at the Vision event. “If you’re not applying AI to every one of your business processes, you’re falling behind. We’re seeing this across every industry.” 

It is in that context that VentureBeat spoke recently with Wei Li, vice president and general manager of AI and analytics at Intel. He is responsible for AI and analytics software and hardware acceleration for deep learning, machine learning and big data analytics on Intel CPUs, GPUs, AI accelerators and XPUs with heterogeneous and distributed computing. 

Intel’s software and hardware connection

According to Li, it is Intel’s strong connection between software and hardware that makes the company stand out and ready to compete in the AI space. 

“The biggest problem we’re trying to solve is creating a bridge between data and insights,” he said. “The bridge needs to be wide enough to handle a lot of traffic, and the traffic needs to have speed and not get stuck.” 

That means AI needs software to perform efficiently and fast, with an entire ecosystem that enables data scientists to take large amounts of data and devise solutions, as well as hardware acceleration that provides the capacity to process the data efficiently. 

“On the hardware side, when we add specific acceleration inside hardware, we need to know what we accelerate,” Li said. “So we are doing a lot of co-design, where the software team works with the hardware team very closely.”

The two groups operate almost like a single team, he added, to understand the models, discover performance bottlenecks and to add hardware capacity. 

“It’s an outside-in approach, a tightly integrated co-design, to make sure the hardware is designed the right way,” he said, adding that the original GPU was not designed for AI but happened to have the right amount of compute and bandwidth. Since then, GPUs have evolved. 

“When we design GPUs nowadays, we look at AI as an important workload to drive the GPU design,” he said. “There are specific features inside the GPU that are only for AI. That is the advantage of being in a company where we have both software and hardware teams.” 

Intel’s goal is to scale its AI efforts, said Li, a push he maintained is about developing an ecosystem rather than separate solutions. 

“It’s going to be how we lead and nurture an open AI software ecosystem,” he explained. “Intel has always been an open ecosystem that enables competition, which allows Intel’s technologies to get to market more quickly at scale.” 

Intel’s trained AI reference kits increase speed

Historically, Intel has done a lot of work on the software capacity side to get better performance – basically increasing the width of the bridge between data and insights. 

Last month, Intel released trained AI reference kits to the open-source community, which Li said is one of the steps the company is taking to increase the speed of crossing the bridge. 

“Traditionally, AI software was designed for specialists, for the most part,” he said. “But we want to target a much broader set of developers.” 

The AI models in the reference kits were selected from among thousands of models, then designed, trained and tested for specific use cases, and data scientists can customize and fine-tune them with their own data. 

“You get a combination of ease of use because you’re starting from something almost pre-cooked, plus you get all the optimized software as part of the package so you can get your solution quickly,” Li explained. 
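The workflow Li describes — start from a model already trained on generic data, then fine-tune it on your own — can be illustrated with a toy, library-free sketch. The model, data and function names here are hypothetical stand-ins for illustration, not the reference kits' actual API:

```python
# Toy sketch of the "fine-tune a pre-trained model" workflow: rather than
# training from scratch, start from parameters learned elsewhere and run a
# few gradient steps on your own data. (Illustrative only; the real kits
# ship full deep-learning models and Intel-optimized frameworks.)

def sgd_step(w, b, xs, ys, lr):
    """One gradient-descent step for 1-D linear regression under MSE loss."""
    n = len(xs)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * dw, b - lr * db

def fine_tune(w, b, xs, ys, steps=500, lr=0.1):
    """Adapt pre-trained parameters (w, b) to new data."""
    for _ in range(steps):
        w, b = sgd_step(w, b, xs, ys, lr)
    return w, b

# "Pre-trained" model: y ≈ 2x, learned on someone else's generic data.
w0, b0 = 2.0, 0.0

# A customer's own data actually follows y ≈ 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = fine_tune(w0, b0, xs, ys)
```

Because the starting parameters are already close, only the offset needs to be learned — the "almost pre-cooked" head start Li refers to.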

Priorities over the next year 

In the coming year, one of Intel’s biggest AI priorities is on the software side.

“We will be spending more effort focusing on ease of use,” Li said. 

On the hardware side, he added, new products will focus heavily on performance, including the Sapphire Rapids Xeon server processor that will be released in 2023. 

“It’s like a CPU with a GPU embedded inside because of the amount of compute capabilities you have,” said Li. “It’s a game changer to have all the acceleration inside the CPU.” 

In addition, Intel is focusing on the performance of its data center GPU, working with its customer Argonne National Laboratory, which serves its own customers and developers. 

Biggest Intel AI challenges

Li said the biggest challenge his team faces is executing on Intel’s AI vision. 

“We really want to make sure we execute well so we can deliver on the right schedule and make sure we run fast,” he said. “We want to have a torrid pace, which is not easy as a big company.” 

However, Li does not blame external factors, such as the economy or inflation, for the challenges Intel faces. 

“Everybody has headwinds, but I want to make sure we do the best we can with the things we have control over as a team,” he said. “So I’m pretty optimistic, particularly in the AI domain. It’s like it’s back to my graduate student days – you can really think big. Anything is possible.” 
