WrapUp NeIC2015 - seeds planted
Yesterday the Nordic e-Infrastructure Collaboration Conference (NeIC2015) came to an end.
I talked about QNIBTerminal and what I am working on: connecting the dots between metrics (Graphite ecosystem), logs (Logstash & friends), inventory (QNIBInventory, based on a graph database) and SLURM (the cluster resource scheduler). I put the recording up on YouTube:
Containerization eats Configuration Management?
Last week I was invited to introduce Docker at the Hamburg Ansible Meetup and kick off some thoughts about its intersection with Configuration Management.
The presentation can be found below; the introduction part should be familiar by now. I would like to dive a little deeper into how this might change Configuration Management.
insideHPC Interview about 'Containerized MPI Workloads'
In the aftermath of the 'HPC Advisory Council China Workshop', Rich invited me to an interview via Skype about the very same topic.
Apart from the fact that it's always a pleasure to talk to HPC enthusiasts like Rich, it was a perfect opportunity to record the slides, since I had failed to operate the GoPro and my MacBook Pro properly. IMHO the recording was even better than the original. For starters, I added an MPI microbenchmark, which provides a nice bare-MPI flavor.
HPCAC China 2014: 'Containerized MPI workloads'
On my way back from the 'HPC Advisory Council (HPCAC) China Workshop 2014', it is about time to wrap up my (rather short) trip.
I presented my follow-up on Docker in HPC. At ISC14 this summer I talked about the HPC cluster stack side; that is, how to encapsulate the different parts of the cluster stack in order to shift to a more commoditized one.
When Rich interviewed me about this, he kept asking how this would impact compute virtualization. My mockup spawned some compute nodes, but they were not distributed; they sat on top of a single (pretty) oversubscribed node. Running real workloads was not my intention...
Long story short: 'Challenge accepted' was what I was thinking.
ISC14 - Interview: Overlay HPC cluster stack information
At ISC14 Christian had an interview with Rich Brueckner from insideHPC about his QNIBTerminal BoF session. Slides of the talk can be found in this post.
ISC14 - BoF: Overlay HPC cluster stack information
At ISC14 I gave a Birds-of-a-Feather talk about the benefits of overlaying multiple information layers within the HPC cluster stack. The topic debuted at OSDC14 (post with video here). Furthermore, I had a video-taped interview with Rich Brueckner from insideHPC, which is available here.
OSDC2014 my way to QNIBTerminal - Virtual HPC
On my way home (at least to an intermediate stop at my mother's) from OSDC2014, I guess it's time to recap the last couple of weeks.
I gave a talk titled 'Understand your data-center by overlaying multiple information layers'. The pain point I had in mind when I submitted the talk came from my SysOps days: debugging an InfiniBand problem that was connected to other layers of the stack we were dealing with. Frustrated by that experience, I chose to tackle the problem in my BSc thesis. The outcome was an OpenSM plug-in to monitor InfiniBand that did not scale. :) But the basics were not that bad, so I revisited the topic with some state-of-the-art log management (Logstash) and performance measurement (Graphite) experience I had gained over the last couple of months. Et voilà, it scales better...
ISC13 - BoF: Does Supercomputing #MonitoringSucks?
In 2013 Christian was at the ISC13 conference in Leipzig to talk about the current state of monitoring with regard to HPC systems (event details).