[cs615asa] HW#N: Attend a relevant Meetup/Talk/community event

Robert Sokolov rsokolov at stevens.edu
Sun Apr 17 22:00:32 EDT 2016

Hi All,

I attended Red Hat's Monthly User Group Meeting, at the Red Hat NYC Office
on April 13th. Postings about their meetups can be found here:

I chose this event because Red Hat is a major provider of enterprise Linux
- and thus belongs in any system administrator's repertoire. Fedora is
their widely known open-source, community-built project.

At this event we talked about container technology. Andrea Arcangeli, from
one of Red Hat's European offices, started off the meetup with a
presentation on virtual memory and virtualization in the Linux kernel. He
discussed the latest innovations in virtualization technology and what
other organizations such as Cisco were using. His talk then focused mainly
on NUMA (non-uniform memory access) and how it is used within RHEL. We
discussed the basic principles of the architecture: a system with multiple
nodes, each node having its own CPUs, memory, and attached PCI devices. An
issue arises, though, because nodes must communicate over interconnects,
and those interconnects are slow relative to local memory access.
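On a Linux machine you can see that node layout directly in sysfs. Here is a
minimal sketch (Linux-only paths; even a single-node desktop will show a
node0):

```python
import os

SYSFS_NODES = "/sys/devices/system/node"  # Linux sysfs; one entry per NUMA node

def numa_topology():
    """Return a mapping of NUMA node name -> CPU list string (e.g. '0-3')."""
    topo = {}
    for entry in sorted(os.listdir(SYSFS_NODES)):
        if entry.startswith("node") and entry[4:].isdigit():
            with open(os.path.join(SYSFS_NODES, entry, "cpulist")) as f:
                topo[entry] = f.read().strip()
    return topo

if __name__ == "__main__":
    for node, cpus in numa_topology().items():
        print(f"{node}: CPUs {cpus}")
```

On a true NUMA box this prints several nodes, each owning a subset of the
CPUs; the cost of touching memory on a *different* node is what the talk was
about.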

Once the overview was finished, we dove into NUMA use specifically
within RHEL (Red Hat Enterprise Linux), and how Red Hat is trying to speed
up processing in the kernel. The kernel currently allocates memory from the
node local to the CPU the process is running on. A memory policy can then
take over if the scheduler later migrates the process to a CPU in a
different NUMA node. When the local node's memory fills up, allocation
spills over to other nodes. This works well for small, short-lived tasks
such as gcc. We went through various configurations and man pages, such as
numactl and numastat, along with the kernel API: the set_mempolicy, mbind,
sched_setaffinity, and move_pages system calls. Because these affect CPU
and RAM scheduling directly, editing the configuration takes effect
immediately. It was interesting to see how this allows you to turn a
setting on or off, then monitor and test it, without incurring any outages.
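The CPU-affinity part of that API is easy to poke at from Python on Linux:
os.sched_setaffinity wraps the sched_setaffinity call mentioned above. A
minimal sketch (the NUMA memory policies themselves need libnuma or raw
syscalls, which this does not cover):

```python
import os

def pin_to_cpus(pid, cpus):
    """Restrict a process (pid 0 = the caller) to the given CPU set,
    then return the mask actually in effect."""
    os.sched_setaffinity(pid, cpus)   # wraps sched_setaffinity(2)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    print("before:", sorted(os.sched_getaffinity(0)))
    print("after: ", sorted(pin_to_cpus(0, {0})))  # pin ourselves to CPU 0
```

The change takes effect immediately and only for that process, which is
exactly the "tune it live, no outage" property discussed in the talk.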

This entire presentation was new information to me, as I have not ventured
too far into the Linux kernel - especially virtualization. I've dealt with
numerous front-end applications, but it was interesting to see their base
structures and inner workings. I learned that to optimize things you can
pin the memory of a certain process to certain virtual machines, CPUs, or
nodes. Then you're effectively using all the physical memory channels, and
thus getting more bandwidth throughout the system. This works perfectly for
small tasks that can be put on a specific node - essentially partitioning
the system. The problem, though, is that pinning processes removes
flexibility from the system.

Afterwards Patrick Ladd, from Red Hat's NYC office, gave the main
presentation on containers and how they can be applied within Red Hat. He
gave us a demo of RHEL and Atomic - a version of RHEL optimized for
container workloads and management. Linux containers are a Linux kernel
feature that confines a group of processes to an independent execution
environment. The kernel provides each container its own application
execution environment, including independent file systems, network
interfaces, IP addresses, and hardware limits on memory and CPU time. We
went a little into the history of container technology and its use in
various settings, along with the differences between containers and
virtualization. While virtualization emulates an entire system - OS,
kernel, and application - containers avoid most of that: one host OS, with
a separate space for each application. Thus you can run applications with
conflicting dependencies, each in its own environment, without the overhead
of many copies of the OS.

Technologies that enable this are namespaces (for processes, the network
stack, mount points, UTS - Unix Time-Sharing, i.e. the hostname - and
users), union and overlay file systems, and cgroups (control groups). All
namespaces are virtualized and can be passed to a container, including
unexpected ones such as the loopback interface, which most virtualization
software struggles to emulate. Mount namespaces allow isolation of all
mount points, not just the root directory, and allow attributes such as
read-only to differ between instances. Furthermore, user namespaces let you
have root privileges inside a container while being just an ordinary user
on the base OS. Used properly, all of these let you avoid exposing anything
about the underlying system and create a secure environment.
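On any Linux host you can list the namespaces a process belongs to under
/proc/<pid>/ns. A small sketch of that inspection (nothing
container-specific assumed; this works on a plain shell session too):

```python
import os

def namespaces(pid="self"):
    """Map namespace type -> identity for a process, read from /proc/<pid>/ns.
    Each link target looks like 'pid:[4026531836]'."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespaces().items():
        print(f"{name:12s} {ident}")  # e.g. mnt, net, pid, user, uts ...
```

Two processes in the same container report identical identifiers, while a
containerized process shows different ones from the host - that difference
is the isolation.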

This also taught me about cgroups, a way to manage operating system and
hardware resources. They collect groups of processes together and control
what those processes are allowed to do with your OS - similar to having a
virtualized environment with specific RAM, CPU, etc., but in a more dynamic
setting. Furthermore, you could use one container as a load balancer to
delegate among multiple containers of the same image, using microservices
and essentially creating your own internal content distribution network.
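You can see which cgroups the current process sits in by reading
/proc/self/cgroup. A sketch of that (the format differs between cgroup v1,
one line per controller, and v2, a single "0::" line - this handles both):

```python
def cgroup_membership(pid="self"):
    """Parse /proc/<pid>/cgroup into (hierarchy-id, controllers, path) tuples.
    Lines look like '4:memory:/user.slice' (v1) or '0::/user.slice' (v2)."""
    entries = []
    with open(f"/proc/{pid}/cgroup") as f:
        for line in f:
            hier, controllers, path = line.rstrip("\n").split(":", 2)
            entries.append((hier, controllers, path))
    return entries

if __name__ == "__main__":
    for hier, controllers, path in cgroup_membership():
        print(hier, controllers or "(v2)", path)
```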

It was interesting to see the various approaches to virtualization and each
side's positives and negatives. One interesting thing about Atomic is just
how stripped down it is, and the fact that it has no package installer - no
yum. It doesn't have Vim, for example, only vi - so if you wanted to use
Vim on the machine you would have to make a container instance with Vim in
it, and then enter that container rather than using it from the host OS.
This means the host OS needs only a few releases and major updates straight
from Red Hat, while images for containers can be updated constantly.
Containers are meant to be killed and restarted on the fly without
affecting other processes or system uptime.

Overall this event was extremely interesting and I learned a ton. I think
it was very useful to see the underlying technologies behind the software
we use.
