IB12 High-Performance Commoditization 28 September 2017
This episode’s guest is Professor DK Panda from The Ohio State University. He is a frequent speaker all around the globe and well known for his work on MPI and for speeding up non-HPC workloads by applying HPC magic like InfiniBand ibverbs.
We’ll talk about how HPC has changed over the course of the last decades and what DK’s take on it is. Furthermore, we’ll discuss the inclusion of Big Data topics into the HPC ecosystem and how this changed the dynamics of HPC clusters and vendors.
This is the original agenda I came up with. We touched on most of the points, though the order may vary a bit. :)
- First podcast without Hagen
- Including HPC topics in the podcast
- Who is DK and how does he describe himself?
- What did HPC mean in the mid-80s?
- Beowulf clusters come up in the 90s
- Virtualization and hyper-converged infrastructure in the 2000s
- HPC today, going back to co-processors
- Expect and embrace that software and hardware are brittle
- Hadoop expects 1 GbE and commodity, heterogeneous hardware that is bound to fail.
- As with everything, it was hardware first; software was for free
- HPC was the spearhead of IT, leading the tech with a centristic view?
- Non-HPC hyperscalers like Google have highly parallel, non-coupled workloads
- As their operations and clusters grow, together with features like mining their pile of information, they pivot towards the HPC community, demanding more throughput.
- Financial trading is almost HPC today
- Meanwhile, HPC vendors and suppliers have acknowledged the existence of HP-BigData, targeting it as a new pie, as the HPC pie is only growing slowly
MPI / HPC Interconnects
(no internet on the plane, so I might get the origins wrong :)
- invented to push messages around (?, hence message passing interface)
- by ditching TCP it reduces latency, and by being reliable (read: pricey) it is well suited for distributed HPC workloads, not commodity non-HPC workloads
- over the years it came to be used to create a PGAS (Partitioned Global Address Space) using messages
- the interconnects can also deliver TCP (IPoIB), so addressing TCP services is low-hanging fruit
- but the real power is unleashed when incorporating (kernel-bypassing) ibverbs
- High-frequency trading needs to deliver messages as fast as possible (HPI anyone)
- the premise of a lot of software is to assume the network is unreliable; NetEng folks tell me that is just not true anymore.
- 3-stage Clos networks, forming a ‘virtual switch’, are already here. Delivery is almost guaranteed
- Are we seeing the HPC ecosystem pivoting to become mainstream?
- I argue that Linux containers will kick off the same DIY mentality in HPC as in software development
- Or is the mainstream becoming more HPC, as the underlying network is reliable nowadays?
- Is event-driven compute (Square Kilometre Array) the new normal (Kafka)?
- where is tight coupling still necessary?
- can we model HPC/HPBD using hosted functions (serverless, FaaS, however it’s called)?
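The message-passing model these bullets revolve around can be sketched without an MPI runtime. Here's a minimal illustration using Python's standard multiprocessing module as a stand-in for real MPI (which would need e.g. mpi4py and an MPI launcher); the `scatter_gather` helper and its worker "ranks" are made up for this sketch, not something DK described.

```python
# Sketch of the scatter/compute/gather pattern behind MPI-style
# message passing, using multiprocessing.Pipe instead of MPI_Send/MPI_Recv.
from multiprocessing import Process, Pipe

def worker(conn, rank):
    # Each "rank" receives its chunk as a message, computes a partial
    # result, and sends it back to rank 0 (the coordinator).
    chunk = conn.recv()
    conn.send((rank, sum(chunk)))
    conn.close()

def scatter_gather(data, nworkers=4):
    # Scatter: split the data and message one chunk to each worker.
    chunks = [data[i::nworkers] for i in range(nworkers)]
    parents, procs = [], []
    for rank, chunk in enumerate(chunks):
        parent, child = Pipe()
        p = Process(target=worker, args=(child, rank))
        p.start()
        parent.send(chunk)
        parents.append(parent)
        procs.append(p)
    # Gather: collect the partial sums and reduce them locally.
    results = dict(parent.recv() for parent in parents)
    for p in procs:
        p.join()
    return sum(results.values())

if __name__ == "__main__":
    print(scatter_gather(list(range(100))))  # 4950
```

The `send`/`recv` pair mirrors MPI's point-to-point semantics; real MPI adds collectives (e.g. `MPI_Scatter`, `MPI_Reduce`) that do the scatter and reduction in one call, and over InfiniBand with ibverbs those messages bypass the kernel's TCP stack entirely.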