Hardly a week goes by without news of semiconductor giants like Nvidia and Intel pushing the frontier of AI chips and what they are capable of. IBM, despite helping to define modern AI with its Watson system, is not often mentioned in the same breath.

This week, however, IBM stepped up to the plate with a new processor, IBM Telum, equipped with on-chip acceleration for AI inferencing aimed at anti-fraud applications in banking and other sectors. Telum, announced at this week’s Hot Chips conference, is “designed to bring deep learning inference to enterprise workloads to help address fraud in real-time” while a transaction is taking place, according to an IBM press release.
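
To make the idea of in-transaction inference concrete, here is a minimal sketch of a fraud check that runs synchronously inside the transaction path, rather than as an after-the-fact batch job. All names in it (Transaction, score_fraud, process_transaction) and the toy scoring model are hypothetical illustrations of the concept, not IBM APIs or Telum-specific code.

```python
# Hypothetical sketch: fraud scoring performed inline, while the payment is
# still in flight, so a high-risk transaction can be declined before it
# completes. The model below is a toy logistic scorer, not a real classifier.
import math
from dataclasses import dataclass


@dataclass
class Transaction:
    account_id: str
    amount: float          # transaction amount in dollars
    merchant_risk: float   # assumed precomputed risk signal, 0.0 (low) .. 1.0 (high)


def score_fraud(tx: Transaction) -> float:
    """Stand-in for a deep learning inference call; returns a fraud probability."""
    # Toy logistic model over two features, purely for illustration.
    z = 0.002 * tx.amount + 3.0 * tx.merchant_risk - 4.0
    return 1.0 / (1.0 + math.exp(-z))


def process_transaction(tx: Transaction, threshold: float = 0.8) -> str:
    # The inference call sits directly in the approval path: the decision is
    # made before the transaction is finalized, which is the "real-time"
    # behavior the announcement describes.
    risk = score_fraud(tx)
    return "DECLINE" if risk >= threshold else "APPROVE"


if __name__ == "__main__":
    tx = Transaction(account_id="acct-123", amount=2500.0, merchant_risk=0.95)
    print(process_transaction(tx))  # prints "DECLINE" for this high-risk example
```

The point of putting inference on the processor itself is to keep this kind of synchronous check fast enough that it does not add noticeable latency to the transaction.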

The company said Telum, which spent three years in development, is the first IBM chip with technology created by the IBM Research AI Hardware Center. The first Telum-based system is expected to emerge in the first half of next year.


“The chip contains 8 processor cores with a deep super-scalar out-of-order instruction pipeline, running with more than 5 GHz clock frequency, optimized for the demands of heterogeneous enterprise-class workloads,” IBM stated. “The completely redesigned cache and chip-interconnection infrastructure provides 32MB cache per core, and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire on 17 metal layers.”

IBM said it partnered with Samsung to develop the Telum processor in Samsung’s 7nm EUV technology node. The new processor is likely to be of most interest to IBM’s own mainframe customers in sectors such as banking, finance, trading and insurance that are looking for an AI inferencing boost; it is not necessarily a threat to other companies producing AI chips.

“IBM has a major effort underway to promote AI as a workload accelerator in many of their prime verticals (e.g., banking, finance, insurance),” said Jack Gold, principal analyst at J. Gold Associates, via email. “They are promoting Watson heavily in many of these areas. But many of the IBM customers have workloads that they want to run on their own systems (not just as a service with Watson or in an IBM cloud). IBM still sells a significant number of servers to its customer base, and by building an accelerator for AI, they make their mainframes more AI capable.”

The inference market is where AI and machine learning are being deployed at scale to serve enterprise workloads, Gold said, but while Nvidia plays in the inference market to some degree, that company is more focused on the “high-end, heavy-duty [AI] training space. The inference is actually where Intel has a major presence with their Xeon and FPGA accelerator products.”

Ultimately, all the major chip firms, and even the cloud providers such as AWS, Google and Microsoft Azure, have their own AI chip accelerator efforts underway. “IBM is just building one that fits better with their own model of what AI inference should be than buying one off the shelf,” Gold said.

Sourced from Fierce Electronics - written by Dan O'Shea
