Since I was asked (thanks Dmitry) via mail how to set up QNIBTerminal to run MPI jobs, I created a README within the qnib/compute repository - but why not put it in a blog post as well (README.md is Markdown, my blog is Markdown...)?
Last month I was in Lugano presenting a small study I recently conducted. The aim of this study was to check whether the results of an HPC workload depend on the underlying system.
The foundation of QNIBTerminal is an image that holds consul and glues everything together. I used the Easter break to refine my qnib/slurm images - this blog post gives a quick intro.
This post first appeared at the Locafox tech Blog, which is not reachable anymore.
At Locafox we are aiming to rule the world, at least the local-commerce part of the internet.
For that we need a solid foundation that enables our developers and operational staff (some say DevOps) to do awesome stuff.
I had consul on my list for some time, but only recently did I give it a spin. And I must admit, I am
hooked. It provides a nice set of features that I need for bootstrapping...
Let's take it for a quick ride by starting two containers: a server and a client.
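As a minimal sketch of such a two-container setup - using the official consul Docker image; the image name, flags, and container names here are assumptions for illustration, not the exact commands from the original post:

```shell
# Sketch only: based on the official consul Docker image, not the
# original post's exact commands.

# Start a single-node consul server
# (-bootstrap-expect=1 means a one-server cluster is fine)
docker run -d --name consul-server consul \
    agent -server -bootstrap-expect=1 -client=0.0.0.0

# Look up the server's container IP so the client can join it
SERVER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' consul-server)

# Start a client agent that joins the server
docker run -d --name consul-client consul \
    agent -retry-join "$SERVER_IP"

# Check cluster membership from inside the server container
docker exec consul-server consul members
```

If both agents show up in the `consul members` output, the gossip pool is formed and services registered on either node become visible cluster-wide.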
Last week I was invited to introduce Docker at the Hamburg Ansible-Meetup and kick off some thoughts about the intersection with Configuration Management.
The presentation can be found below; the introduction part should be known by now. I would like to dive a little deeper into how this might change Configuration Management.
Apart from the fact that it's always a pleasure to talk to HPC enthusiasts like Rich, it was a perfect opportunity to record the slides,
since I failed to operate the GoPro and my MacBook Pro properly. IMHO the recording was even better than the original.
For starters I added an MPI microbenchmark, which provides a nice, bare MPI flavor.
Oh boy, I really need to get out of the Moscow airport - 15 hours is enough. But...
...at least it gave me some time to spare. After adding some MPI benchmark results to my recent write-up
about my talk at the HPC Advisory Council, I assume we all agree that the ugliness of my bash solution is endless.
On my way back from the 'HPC Advisory Council (HPCAC) China Workshop 2014' it is about time to wrap up my (rather short) trip.
I was presenting my follow-up on Docker in HPC. At ISC14 this summer I talked about the HPC cluster stack side, i.e.
how to encapsulate the different parts of the cluster stack to shift to a more commoditized one.
As I was interviewed by Rich about this, he was continuously asking how this will impact compute virtualization.
My mockup spawned some compute nodes, but they were not distributed - they sat on top of one (pretty)
oversubscribed node. Running real workloads was not my intention...
Long story short: 'Challenge accepted' was what I was thinking.