As anyone who’s ever tried to work out a restaurant bill, including drinks, taxes, and tip, already knows, some math is difficult. Expand that by several orders of magnitude, and suddenly you’re simulating the effects of a nuclear bomb, or protein folding, or calculating how many oil rigs to send up the Bering Strait before winter, and your needs go beyond mere computers. You need a supercomputer.
First built in the 1960s, supercomputers initially relied on vector processors before evolving into the massively parallel machines we see today, such as Japan's Fugaku (7,630,848 ARM processor cores producing 442 petaflops) and IBM's Summit (202,752 POWER9 CPU cores, plus 27,648 Nvidia Tesla V100 GPUs, producing 200 petaflops).
But how did we get to these monsters? And what are we using them for?