There is an additional challenge if you want to use Solaris VMs: there are no prepared Solaris base images (aka boxes) that you could just download from the web. I assume it is not legally allowed to redistribute Solaris.
Luckily, Solaris already has a very powerful technology for handling OS images: Unified Archives (UARs), which you likely already use for your OS deployment. In this post I will show how you can convert a Unified Archive, hands-free, into a Vagrant box in approximately 10 minutes.
Last year Alan Chalmers published how to use the open source software “Packer” to create Vagrant boxes out of Solaris GA installation ISOs. I am building on this great work and have extended it to use Unified Archives. I already covered the installation of Solaris with UARs in a previous article; the following Packer recipe does basically the same, but highly automated.
The method has several advantages:
Unified Archives
First you need a UAR of your golden/master Solaris installation, with your customizations, like packages, patches, SRU, etc.
# archiveadm create /data/gold-image-v1.uar
When you have your archive file, you first need to convert it on Solaris to a bootable ISO image:
# archiveadm create-media --format iso gold-image-v1.uar
Initiating media creation...
Preparing build environment...
Adding archive content...
Image preparation complete
Creating ISO image...
Finalizing /data/AI_Archive.iso...
Cleaning up...
When you have the iso file, check the paths and checksum in solaris-ai.json
.
1 | { |
Then you can execute packer build. Quickly summarized: Packer boots a VirtualBox VM with the ISO, starts a web server to serve the AI manifest and the SMF sysconfig profile, simulates keypresses for the special boot command, and installs Vagrant-specific pieces like the VirtualBox Guest Additions and the well-known Vagrant public key.
$ packer build -only=virtualbox-iso solaris-ai.json
...
==> Builds finished. The artifacts of successful builds are:
--> virtualbox-iso: VM files in directory: packer-solaris11-ai-virtualbox
--> virtualbox-iso: 'virtualbox' provider box: ./builds/virtualbox/solaris11_ai.box
When you execute the command you can lean back and watch how Packer creates your box
. On my notebook with a flash disk, the build was finished in 12 minutes.
Finally your Vagrant Solaris base box is ready to use:
$ vagrant box add builds/virtualbox/solaris11_ai.box
If you like, you can further customize the installation in the AI manifest http/manifest.xml and the SMF sysconfig profile http/profile.xml. The Packer recipe solaris-ai.json also contains a configuration for VMware, which should work; however, I did not have the chance to test it.
Github: solaris-packer
This post is about the UserMgr RAD module. As far as I know it is also the oldest RAD module and the backend for the “Solaris User Manager GUI”. As the name promises, the GUI allows you to manage user accounts on Solaris.
It is a Java application which can be installed and started the following way:
# pkg install pkg:/system/management/visual-panels/panel-usermgr
# vp usermgr
As Solaris 11.3 offers a REST-API, I wrote a Puppet resource type which uses the same RAD API (UserMgr
) to manage users, called solaris_user
:
# puppet resource solaris_user mzach
solaris_user { 'mzach':
  ensure   => 'present',
  comment  => 'Manuel',
  gid      => '10',
  groups   => ['other'],
  home     => '/export/home/mzach/',
  profiles => ['All'],
  shell    => '/usr/bin/bash',
  uid      => '300',
}
Now that Puppet understands the UserMgr
API, you can manage the user accounts via Puppet manifests. For example if you want to add the Authenticated Rights Profile Operator
to the user you can do it with the following manifest:
1 | solaris_user { 'mzach': |
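The manifest above is collapsed in this export. A minimal sketch of what it presumably contained, saved e.g. as manage-mzach.pp to match the apply command below; it only adds the auth_profiles attribute on top of the existing account:

solaris_user { 'mzach':
  ensure        => 'present',
  auth_profiles => ['Operator'],
}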
# puppet apply manage-mzach.pp
Notice: /Stage[main]/Main/Solaris_user[mzach]/auth_profiles: defined 'auth_profiles' as 'Operator'

# puppet resource solaris_user mzach
solaris_user { 'mzach':
  ensure        => 'present',
  auth_profiles => ['Operator'],
  comment       => 'Manuel',
  gid           => '10',
  groups        => ['other'],
  home          => '/export/home/mzach/',
  profiles      => ['All'],
  shell         => '/usr/bin/bash',
  uid           => '300',
}
Similar to the previous RAD providers you can activate the debug mode with --debug
, to observe the REST calls:
# puppet resource --debug solaris_user mzach
...
Debug: REST API Calling GET: https://127.0.0.1:12303/api/com.oracle.solaris.rad.usermgr/1.0/UserMgr/users/
Debug: REST API response: {
  "status": "success",
  "payload": [
    {
      "username": "root",
      "userID": 0,
      "groupID": 0,
      "description": "Super-User",
      "homeDirectory": "/root",
      "defaultShell": "/usr/bin/bash",
      "inactive": -1,
      "min": -1,
...
The current UserMgr
-API in Solaris 11.3 GA is easy to use, and supports all features of useradd
and usermod
, like the new Authenticated Rights Profiles.
But the current version also has several restrictions and issues. For example, the addUser method requires setting a non-hashed password. As a workaround, this provider sets a dummy password which should be changed immediately. Check the README for details.
All the issues have been reported to Oracle, so they may be fixed soon.
The solaris_user type uses the UserMgr RAD-API; further details about the API can be found in the man page, see man -s 3rad usermgr.
But I did not really like the distributed storage configuration: e.g. a database server needs the correct ZFS properties set on the ZFS storage appliance via the web interface or the custom CLI, and also the corresponding NFS mount options in /etc/vfstab on the database server. Maybe this sounds like no big issue to you, for example if you are also the admin responsible for the storage appliance, or if you have a perfect collaboration with the storage team. But especially if you want to automate the storage configuration, this distribution adds significant complexity.
Of course I wanted to manage the configuration with Puppet like a local ZFS filesystem.
I don't yet have a ZFS SA at work to deal with, but the availability of the new RAD REST interface in Solaris 11.3 motivated me to experiment with my own Puppet resource type to manage remote ZFS filesystems directly from the client server.
Please note: the Puppet provider described in the following examples was developed for the Solaris RAD API and not the Oracle ZFS SA API, therefore it currently does not support the ZFS SA.
The new Puppet resource type is called remote_zfs
and based on the local_zfs
type, which I published in the last post. To start using this type you need my radproviders
Puppet module and an enabled remote RAD REST service. See the document DOC-918902 from Gary Pennington and Glynn Foster.
After you have configured the HTTPS port in the SMF manifest (e.g. 12303), you need to configure that address and the credentials in rad_config.json:
1 | { |
Now you can start using the new type:
1 | remote_zfs { "rpool/project1/video": |
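The resource above is collapsed here. A minimal sketch of how it could look, assuming remote_zfs accepts the same attributes as the stock zfs type; the property values are illustrative:

remote_zfs { 'rpool/project1/video':
  ensure      => present,
  compression => 'on',
  sharenfs    => 'on',
}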
As you see, it uses the same resource attributes as the original zfs type. In the following extended example you can see that a simple resource dependency (require => Remote_zfs[...]) is enough to tie the configuration of the networked ZFS server to the client server:
1 | remote_zfs { "rpool/project1/video": |
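This block is truncated as well. A sketch of such a combined manifest, under the assumption that the remote filesystem is shared via NFS and mounted on /mnt/video as in the apply output below; the server name and share path are illustrative:

remote_zfs { 'rpool/project1/video':
  ensure     => present,
  sharenfs   => 'on',
  mountpoint => '/export/project1/video',
}

mount { '/mnt/video':
  ensure  => 'mounted',
  device  => 'fileserver:/export/project1/video',
  fstype  => 'nfs',
  require => Remote_zfs['rpool/project1/video'],
}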
# puppet apply create_fs.pp
Notice: /Stage[main]/Main/Remote_zfs[fileserver#rpool/project1/video]/ensure: created
Notice: /Stage[main]/Main/Mount[/mnt/video]/ensure: ensure changed 'unmounted' to 'mounted'
You can use the root account in the config file, but likely you don’t like to distribute the root password of your central file server to all client servers. Luckily, you can use a non-root user by setting ZFS permissions:
# useradd zfsadmin1
# passwd zfsadmin1 (set a password)
# mkdir /mnt/project1
# chown -R zfsadmin1 /mnt/project1
# zfs create rpool/project1
# zfs allow zfsadmin1 compression,create,destroy,mount,mountpoint,share,recordsize,logbias,sharenfs rpool/project1
If you now set the zfsadmin1
user in rad_config.json
the Puppet provider uses the non-root user.
If managing the storage directly with Puppet is too scary for you (which is understandable and fine), you could use Puppet in read-only or noop mode, so you can still use Puppet reporting. In that case your user only needs read permissions:
1 | remote_zfs { "rpool/project1/video": |
# puppet apply create_fs.pp
Notice: /Stage[main]/Main/Remote_zfs[rpool/project1/video]/compression: current_value off, should be on (noop)
In case you have more than one networked ZFS server, setting a default connection in rad_config.json is not enough. But you can encode the connection identifier into the resource name with <connection identifier>#<filesystem>, for example:
1 | remote_zfs { "zfsserver2#rpool/project1/video": |
The code is still quite new and not yet pushed to Puppet Forge; you can get it from GitHub: mzachh/radproviders (resource type remote_zfs).
Do you think this could be useful? Feel free to leave a comment.
Puppet provides a nice abstraction layer for the configuration of a system. A Puppet manifest is usually easier and faster to understand than documentation with many CLI commands. But there is no magic involved: Puppet usually executes the same CLI commands in the background. With the debug option you can observe these command executions.
For example, the following manifest enables compression on a ZFS filesystem:
1 | zfs { 'rpool/export': |
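The manifest above lost its body in this export; judging from the zfs set compression=on rpool/export call in the debug output below, it presumably looked roughly like this:

zfs { 'rpool/export':
  compression => 'on',
}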
Applying with the debug option reveals which CLI commands are executed in the background:
# puppet apply --debug fs-config.pp
...
Debug: Executing '/usr/sbin/zfs list'
Debug: Executing '/usr/sbin/zfs get -H -o value compression rpool/export'
Debug: Executing '/usr/sbin/zfs set compression=on rpool/export'
Notice: /Stage[main]/Main/Zfs[rpool/export]/compression: compression changed 'off' to 'on'
...
The Puppet provider, the component responsible for this, executes the commands and parses their output. This works fine as long as the output of the commands does not change. As you can imagine, such a provider can end up executing many, many commands, like the default zfs provider does.
A full resource listing can be very slow. For example on my test system with 20 filesystems it needs almost a whole minute:
# time puppet resource zfs
...
real    0m56.906s
user    0m25.095s
sys     0m30.804s
That is because it executes so many processes in the background; on my system more than 600!
# puppet resource --debug zfs | grep -c Executing 641
With Solaris 11.3 a new RAD module for ZFS is available. I migrated the Puppet zfs
provider to the RAD REST-API, basically by replacing all /usr/sbin/zfs
executions with API calls. I used a new name for the type: local_zfs
.
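Usage stays the same as with the stock type, only the type name changes; a minimal sketch, assuming local_zfs mirrors the attributes of the built-in zfs type:

local_zfs { 'rpool/export':
  compression => 'on',
}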
By using the API, the execution is a lot faster (10x):
# time puppet resource local_zfs
...
real    0m4.796s
user    0m2.794s
sys     0m0.517s
So using RAD is clearly faster, but there are more advantages: the code of a provider can easily become “ugly” if the CLI commands are not handy for the job, and using the API allows for a higher-quality Puppet provider.
Actually, this experiment is only the groundwork for a more useful Puppet resource type (remote_zfs), which I will describe in the next blog post.
The source code and further description of local_zfs
are available on GitHub: local_zfs
A typical ZFS administration task looks like zfs set quota=800g rpool/criticalfilesystem. That's easy to automate. Nowadays automation even becomes necessary because the amount of ZFS filesystems is growing, and if you like to use more features you likely need to set more ZFS properties.
Puppet has its own resource type for ZFS which is pretty good; it makes it easy to manage all the ZFS properties, for example:
1 | zfs { 'rpool/dbfs': |
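The example above is collapsed here; a sketch of what a database filesystem resource could look like, with illustrative values borrowed from the database-oriented settings (recordsize, logbias) used later in this post:

zfs { 'rpool/dbfs':
  ensure     => present,
  mountpoint => '/dbfs',
  recordsize => '16K',
  logbias    => 'throughput',
}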
But usually managing the ZFS layer is not enough; you also want to change the permissions of the mountpoint, which is done with the file resource type. If you change the file owner you also have a dependency on a user resource, etc. To make it short, real-world Puppet manifests for managing ZFS easily become more complex than expected.
In the last 12 months I have written some internal manifests to manage the ZFS filesystems for our databases, and I am refactoring them now. The most general use cases I will move into my first public Puppet module. I call it zfsdir because it is mostly an abstraction of the zfs and file resource types.
1 | zfsdir { 'rpool/test': |
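The zfsdir example above is truncated; a sketch of how such a resource could look, assuming the zfs and file settings are passed as hashes in the same shape as the hiera data shown further below (all values are illustrative):

zfsdir { 'rpool/test':
  ensure => present,
  zfs    => {
    'mountpoint'  => '/test',
    'compression' => 'on',
  },
  file   => {
    'owner' => 'mysql',
    'group' => 'bin',
    'mode'  => '0750',
  },
}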
In my old manifests I made the Puppet configuration dynamic by marking the ZFS filesystems with custom ZFS properties. If Puppet found these custom properties, the filesystem got configured. This served us quite well, but it required that these properties were set on the target system before the Puppet manifests were applied. Additionally, in the last year I became a big fan of hiera. I also want to manage the ZFS configuration with hiera, so that I can just add ZFS filesystems to the hiera files for a server or group. It also simplifies version control.
You only need to add a hash in hiera, for example if you use YAML:

---
zfsdirs:
  rpool/mysqlbin:
    ensure: 'present'
    zfs:
      mountpoint: '/mysqlbin'
      compression: 'on'
    file:
      owner: 'mysql'
      group: 'bin'
  rpool/mysqldata:
    zfs:
      mountpoint: '/mysqldata'
      recordsize: '16K'
      logbias: 'throughput'
    file:
      owner: 'mysql'
      group: 'bin'
  rpool/dumpdir:
    zfs:
      mountpoint: '/dumpdir'
    file:
      mode: '0777'
And in the manifests you only need to add the following two lines:

$zfsdirs = hiera_hash('zfsdirs', {})
create_resources(zfsdir, $zfsdirs)
Test the module:
# puppet apply manage-zfsdir2.pp
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqldata]/Zfs[rpool/mysqldata]/ensure: created
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqldata]/File[/mysqldata]/owner: owner changed 'root' to 'mysql'
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqldata]/File[/mysqldata]/group: group changed 'root' to 'bin'
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqlbin]/Zfs[rpool/mysqlbin]/ensure: created
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqlbin]/File[/mysqlbin]/owner: owner changed 'root' to 'mysql'
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqlbin]/File[/mysqlbin]/group: group changed 'root' to 'bin'
Notice: /Stage[main]/Main/Zfsdir[rpool/dumpdir]/Zfs[rpool/dumpdir]/ensure: created
Notice: /Stage[main]/Main/Zfsdir[rpool/dumpdir]/File[/dumpdir]/mode: mode changed '0755' to '0777'
Notice: Finished catalog run in 1.74 seconds

# zfs get all rpool/mysqldata | grep local
rpool/mysqldata  logbias     throughput  local
rpool/mysqldata  mountpoint  /mysqldata  local
rpool/mysqldata  recordsize  16K         local

# ls -all /mysqldata/
total 24
drwxr-xr-x   2 mysql    bin            2 Jan  5 20:40 .
drwxr-xr-x  29 root     root          32 Jan  5 20:40 ..
The first version of this module can be found on GitHub:
Module source: Zfsdir
The blog articles were read from 107 countries, thanks!
#### Top 10 countries:
I am curious how the statistics will look in 2015.
Also this year I enjoyed visiting foreign countries for business, leisure and of course to just broaden my mind:
Happy 2015 to all of you!
Session: Best Practice Configuration Management with Puppet [CON7849]
A few days before that, I will also attend the automation event of the year: I will finally take part in my first PuppetConf in person. The line-up is amazing, with speakers like Gene Kim, the author of “The Phoenix Project”, and many other early drivers of the DevOps community.
I am in San Francisco from 23rd September to 2nd October; if you would like to meet in person, just ping me.
With 11.2, Puppet is shipped for the first time directly with Solaris. I think it is a quite good first release, but of course not yet perfect. Puppet itself is open source software and Oracle also published all patches and Puppet types under the CDDL, so you would have the option to maintain your own build.
The huge open source community is an important part of Puppet, maybe the most important. But with Solaris you have the additional advantage that you have this open source software also covered under your usual Oracle Premier Support. With no additional cost!
So if you identify a bug in the Puppet package, please raise a Service Request (SR) via Oracle Premier Support! In my experience the Solaris developers listen to their users, especially if you provide your feedback via an SR, because then you are even implicitly a paying customer. It also makes sense to communicate your interest in a known bug, because if a bug is a pain for several users, fixing it becomes more important for the Solaris developers.
Therefore, I am documenting here the bugs I faced so far and how I worked around them:
Puppet moves very fast and there are often new releases. Only time will tell how well Oracle can keep the versions of Puppet and Facter new and stable enough to satisfy enterprise customers and users who require the new features. New releases need weeks to months until they pass the Solaris QA phase. Personally, I raise an SR when I really need a feature or fix which is already available in the upstream project.
Current versions in 11.2.5.5.0:
If you use the svccfg provider, it can happen that the SMF properties get changed with every Puppet run, because there are some verification bugs in the provider.
If you are setting up the Agent or the Master for the first time with SMF properties it can happen that the puppet.conf is not written. I think it is a bug in the SMF stencils framework, not in Puppet.
svcadm enable puppet:agent ; svcadm restart -s puppet:agent
Out of the box you can't use hiera on the Puppet Master; you will get the following errors in /var/log/puppet/puppet-master.log:

2014-08-11 07:48:47 +0000 Puppet (err): Could not autoload puppet/parser/functions/hiera: cannot load such file -- hiera
2014-08-11 07:48:47 +0000 Puppet (err): Could not autoload puppet/parser/functions/hiera: cannot load such file -- hiera on node testserver1.example.com
2014-08-11 07:48:47 +0000 Puppet (err): Could not autoload puppet/parser/functions/hiera: cannot load such file -- hiera on node testserver1.example.com
As a workaround, install hiera, either from the package repository with pkg install hiera or with # gem install hiera.
Just ping me if you need help to report a new bug, which is blocking you.
At work we just push the agent to new installations with a Fabric task. This works fine, but it's still an unnecessary human interaction. Therefore, my idea was to just include a simple start script in the Golden Image which fetches the agent and the configuration from a remote server. But there is a simpler method available on Solaris 11.2.
Additional requirements:
As shown in the Getting Started with Puppet on Oracle Solaris 11 OTN article the configuration of the Puppet agent can be done completely by setting SMF properties. For example:
# svccfg -s puppet:agent setprop config/server=master.example.com
# svccfg -s puppet:agent setprop config/certname=agent1.example.com
# svccfg -s puppet:agent refresh
# svcadm enable puppet:agent
If the configuration is in SMF, you can also add it directly to a sysconfig profile, which you can apply during the deployment of a Unified Archive. The svccfg extract command is useful to get the relevant XML parts after you have set these SMF properties on a test system:
# svccfg extract svc:/application/puppet:agent
1 | <?xml version='1.0'?> |
In this example the sysconfig profile from the former blog post is extended by this XML fragment. Additionally DNS is configured, because Puppet needs to find the Puppet Master on the network.
1 | <service_bundle type="profile" name="sysconfig"> |
This configuration should be enough to enable the Puppet Agent to connect to the Puppet Master after a new server is installed with the Unified Archive. During the first connection the Agent requests an SSL certificate; depending on your security requirements, you can sign it manually or configure autosigning on the Master, for example by whitelisting your entire domain:
1 | *.example.com |
Once the certificate of the Agent is signed, the manifests are pulled from the Master and applied. So your new installation is completely under the control of the Puppet Master, and you can configure it with Puppet as required.
Related:
There are many, many new features, but also a lot of small incremental improvements.
Some of the new key features are:
For more details, check the Solaris 11.2 blog list
I already covered some of the new features in own blog posts:
I have more posts in the pipeline, I will keep this list updated. If you are interested in a specific topic, let me know, for example by leaving a comment.
Solaris 11.2 deprecates the zfs_arc_max
kernel parameter in favor of user_reserve_hint_pct
and that’s cool.
tl;dr
ZFS has a very smart cache, the so-called ARC (Adaptive Replacement Cache). In general, the ARC consumes as much memory as is available; it also takes care to free up memory if other applications need more.
In theory this works very well: ZFS just uses available memory to speed up slow disk I/O. But it also has some side effects once the ARC has consumed almost all unused memory. Applications which request more memory need to wait until the ARC frees up memory. For example, if you restart a big database, the startup may be significantly delayed, because the ARC could already have used the free memory from the database shutdown in the meantime. Additionally, such a database would likely request large memory pages; if the ARC holds just some free segments, the memory easily gets fragmented.
That is why many users limit the total size with the zfs_arc_max kernel parameter. With this parameter you can configure the absolute maximum size of the ARC in bytes. For a time I personally refused to use this parameter, because it feels like “breaking the legs” of ZFS, it is hard to standardize (an absolute value) and it needs a reboot to change. But for memory-intensive applications this hard limit was simply necessary, until now.
Solaris 11.2 finally addresses this pain point: the zfs_arc_max parameter is now deprecated. There is a new dynamic user_reserve_hint_pct kernel parameter, which allows the system administrator to tell ZFS which percentage of the physical memory should be reserved for user applications. Without a reboot!
So if you know your application will use 90% of your physical memory, you can just set this parameter to 90
.
Oracle provides a script called set_user_reserve.sh
and additional documentation. Both can be found on My Oracle Support: “Memory Management Between ZFS and Applications in Oracle Solaris 11.2 (Doc ID 1663862.1)”. The script gracefully adjusts this parameter, to give ARC enough time to shrink.
According to first tests it works really nicely:
# ./set_user_reserve.sh -f 50
Adjusting user_reserve_hint_pct from 0 to 50
08:43:03 AM UTC : waiting for current value : 13 to grow to target : 15
08:43:11 AM UTC : waiting for current value : 15 to grow to target : 20
...
# ./set_user_reserve.sh -f 70
...
# ./set_user_reserve.sh 0
The following chart shows the memory consumption of the ZFS ARC and available memory for user applications during my test. The line graph is the value of user_reserve_hint_pct
which is gracefully set by the script. During the test, I set it to 50%, 70% and back to 0%. At the same time I generated some I/O on the ZFS filesystem to cause caching to ARC.
As you can see, the ARC shrinks and grows according to the new parameter. The 70% reservation could never be reached, because my test system (Virtualbox) did not have enough physical memory.
For generating the chart data, I wrote the following Dtrace script:
1 | #!/usr/sbin/dtrace -s |
You can run the script during adjusting user_reserve_hint_pct
with your desired interval, for example every ten seconds:
# ./zfs_user_reserve_stat.d 10
PHYS   ARC    USR avail (MB)  USR(%)  user_reserve_hint_pct (MB)  user_reserve_hint_pct (%)
2031   1136   288             14        0                          0
2031   1051   364             17      304                         15
2031    934   489             24      507                         25
2031    863   561             27      609                         30
2031    798   627             30      710                         35
...
I definitely need to play more with this parameter, but so far it looks like a big improvement over zfs_arc_max and a very good replacement. Does it also solve your pain with the ZFS ARC? Feel free to leave a comment.
Update 10th Aug 2014:
- zfs_arc_max is clearly defined as non-dynamic in the Solaris Tuneable Parameters Reference guide; I think changing it at runtime is not officially supported and could have some side effects.
- Kernel Zones still seem to depend on zfs_arc_max: I got the Failed to create VM: Not enough space error during the start of the KZ, and shrinking the ARC with user_reserve_hint_pct did not help. In my opinion this is still a bug in the KZ implementation.

Update 25th Dec 2014:
- Starting Kernel Zones by only shrinking the ARC with user_reserve_hint_pct does not work reliably, because KZ need a contiguous large free memory segment, which likely is not available.
With the simple Unix utility time, you can measure the full execution time of facter. For example, to know whether an agent runs on virtual hardware, the is_virtual “fact” is populated:
# time facter is_virtual
false

real    0m4.008s
user    0m0.853s
sys     0m2.594s
In this example we see that facter needs four seconds to discover is_virtual=false. But what happens in these four long seconds? The option --timing shows how long each fact needs to be discovered. It also shows that facter uses several other facts to resolve is_virtual, because depending on these facts other CLI commands are used for the discovery.
# facter --timing is_virtual
kernel: 7.74ms
operatingsystem: 18.32ms
osfamily: 20.16ms
macaddress: 77.64ms
hardwareisa: 19.04ms
hardwaremodel: 29.89ms
architecture: 30.20ms
virtual: 0.69ms
prtdiag: 1062.60ms
virtual: 1094.83ms
virtual: 4.07ms
is_virtual: 1130.92ms
false
From this output we see that more than ten facts are resolved, but only the prtdiag and virtual facts need longer than one second for discovery. This information is already helpful for a Puppet facter developer. But for a regular user, who only wants to understand why facter runs slower on some hardware types, it is likely not simple enough.
Facter basically just parses the output of CLI commands, so if such a command runs slowly, the whole facter run is slow. The performance of the Ruby code which calls these commands usually does not have a big impact on the overall facter performance.
How to measure the execution time of the CLI commands which are called by facter? One option would be to improve the --timing
feature. But as I am on a platform which ships with Dtrace (Solaris), it is easier to write a simple Dtrace script. The following script measures the execution time of all binaries which are called by the facter ruby process:
1 | #!/usr/sbin/dtrace -s |
You need to call the DTrace script as follows:
# dtrace -s facter-timing.d -c '/usr/local/bin/facter --timing is_virtual'
...
= Dtrace statistics =
Ruby facter PID: 9195
PID    runtime (ms)  command
---    ------------  -------
9200    405  netstat -p -f inet
9233     58  /sbin/ifconfig -a
9196      1  /usr/bin/uname -s
9234    128  /usr/perl5/bin/perl /usr/bin/kstat cpu_info
9198     57  /sbin/ifconfig -a
9219      1  /usr/bin/uname -p
9238     47  /usr/sbin/zoneadm list -cp
9243      1  /usr/bin/uname -m
9247     12  /sbin/zonename
9248   1059  /usr/platform/SUNW,SPARC-Enterprise/sbin/prtdiag
9197      1  /usr/bin/uname -v
9231     16  sh -c /usr/sbin/prtconf 2>/dev/null
9221   1088  sh -c /usr/sbin/prtdiag 2>/dev/null
9201     58  /sbin/ifconfig -a
9207      2  /usr/sbin/virtinfo -ap
In this output a sysadmin can easily see which commands are slow to run. For example prtdiag
needs more than one second and also netstat -p -f inet
needs almost 0.5 seconds. Actually I wrote the script when we faced the bug that virtinfo -ap
needed more than twenty seconds for a simple execution. This script identified the issue quickly.
Puppet can use the following interfaces on Solaris to manage system configuration:
- CLI: e.g. the user Puppet provider just calls /sbin/useradd in order to add a new user.
- Files: e.g. /etc/resolv.conf
- SMF
- RAD

CLI and Files are the standard interfaces on most platforms. AFAIK there are no RAD-based Puppet modules yet, but SMF is already easy to use.
SMF was introduced with Solaris 10, so it is almost 10 years old. Since the initial release it has been improved successively. The primary features of SMF are:
For a further description of SMF you can read the excellent SMF tutorial of Joerg Moellenkamp.
Adapting an application to store its configuration in SMF was not that easy. With Solaris 11.2 it is, thanks to “SMF Stencils”. With Stencils you can tell SMF to create or refresh a configuration file before it starts the application. The file is assembled according to a given template file and the variables which are managed as SMF properties.
Install Puppet if not yet done:
# pkg install puppet
There is a dedicated Puppet provider to manage SMF properties, called svccfg. Managing the service status with Puppet has already been possible for a long time with the service provider. In the following example we manage the NTP client with Puppet: we activate the slew_always option and take care that the service is started.
1 | svccfg { 'ntp_slew_always' : |
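The manifest above is collapsed in this export. A sketch of how it could look; the fmri/property/type/value parameter names of the svccfg type and the config/slew_always property path are assumptions, not taken from the original post:

svccfg { 'ntp_slew_always':
  ensure   => 'present',
  fmri     => 'svc:/network/ntp',
  property => 'config/slew_always',
  type     => 'boolean',
  value    => 'true',
}

service { 'ntp':
  ensure  => 'running',
  require => Svccfg['ntp_slew_always'],
}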
The /etc/inet/ntp.conf
can be managed with stencils or a simple Puppet template for example. At least a minimal configuration like the following needs to exist:
1 | server 127.0.0.1 |
You can apply the manifest and check whether the ntpd
process is started with the slew_always
(--slew
) option.
# puppet apply ntp-conf.pp
Notice: /Stage[main]/Main/Svccfg[ntp_slew_always]/ensure: created
Notice: /Stage[main]/Main/Service[ntp]/ensure: ensure changed 'stopped' to 'running'
Notice: Finished catalog run in 1.93 seconds

# svcs -lp ntp | egrep "name|process"
name         Network Time Protocol (NTP) Version 4
process      2426 /usr/lib/inet/ntpd -p /var/run/ntp.pid -g --slew
SMF has more efficient features to manage SMF property changes, for example SMF profiles. But in my opinion it is a huge advantage that you can document the whole configuration in a Puppet manifest, because it is easier to overview and manage. Especially if you put the manifests in a source code version control system like Git, you will get a nice change history.
Getting a fully automated data center in a bigger company which does not yet have an “automation-friendly culture” is very hard. Setting up a company-wide configuration management framework like Puppet, Chef or CFEngine can take ages. Additionally, I have experienced engineers who are not involved in the setup of the configuration management framework itself and think they should wait until the framework is 100% established; only after that do they plan to start thinking about how they can automate their repetitive tasks.
I think this “waiting” is a huge waste of time. Everybody can start understanding and automating their repetitive tasks already without the availability of a central company wide automation framework. In this post I will cover how the Remote Execution framework Fabric can be used to basically automate your SSH sessions.
In an ideal world, it is preferable to use state-oriented configuration management frameworks like Puppet, Chef or CFEngine to change the configuration of the operating system or an application, and to only use Remote Execution for ad hoc tasks which are not repetitive enough to be implemented in these frameworks. But of course you can use Remote Execution for tasks which are just not yet implemented in the state-oriented frameworks. For example, if you like to change the configuration of an Apache server you can do the following remote execution with SSH:
$ scp new-apache2.conf root@server:/etc/apache2/apache2.conf
$ ssh root@server "/etc/init.d/apache2 restart"
 * Restarting web server apache2    [ OK ]
This is a simplified example, in a production environment you have to deal with many other issues, like permissions, password prompts etc. So even if you collect your remote executions in a shell script it gets complicated.
But luckily there are some awesome Remote Executions frameworks for your preferred scripting language like the Python based Fabric or the Ruby based Capistrano.
As I am a Python fan, Fabric was my first choice. Basically Fabric is an SSH client which can be scripted with Python, with additional handy features like:
That means if you have access to servers with SSH and a workstation with a Python interpreter you can start using Fabric.
Installation is simple, use the package of your OS distribution or the pip
installer:
# pip install fabric
The Remote Execution tasks need to be defined as Python functions in a file called fabfile.py. In my opinion you do not need extended Python skills to start using Fabric, because you can start with a simple subset of functions, like run(), sudo() and put(). E.g.:
1 | from fabric.api import run,put,sudo |
If you have defined the task
you can execute it for example on three servers:
$ fab --hosts webserver1,webserver2,webserver3 deploy_apache_cfg
[webserver1] Executing task 'deploy_apache_cfg'
[webserver1] run: /etc/init.d/apache2 status
[webserver1] Login password for 'mzach':
[webserver1] out:  * apache2 is running
[webserver1] put: new-apache2.conf -> /tmp/new-apache2.conf
[webserver1] sudo: cp /tmp/new-apache2.conf /etc/apache2/apache2.conf
[webserver1] out: sudo password:
[webserver1] sudo: /etc/init.d/apache2 restart
[webserver1] out: sudo password:
[webserver1] out:  * Restarting web server apache2
[webserver1] out:    ...done.
[webserver2] Executing task 'deploy_apache_cfg'
...
[webserver3] Executing task 'deploy_apache_cfg'
...
Done.
Disconnecting from webserver1... done.
Disconnecting from webserver2... done.
Disconnecting from webserver3... done.
This is a very basic example; with Fabric you can also use the full power of Python. For example, you can take the output of remote commands, search within Python with a regex for a specific string, and maybe write this string to a local text file. You can even completely avoid using the fab command line tool and just use Fabric directly in your own Python application. For more details check the Fabric documentation.
Using Remote Execution can of course decrease the workload of engineers, but I see also big benefits for environments with many engineers which are “waiting” for teams who own the central automation system, as mentioned at the beginning. If people start to implement their tasks for example with Fabric, they very likely start thinking about the following questions:
- How do I verify the result of a task? If you have such a verification task, you can for example first execute the verification task, then the modification task, and then the verification task again, so you can quickly check which servers are broken after the remote execution. Or the need for proper monitoring and central remote logging becomes obvious.
- Should the task really run on all servers at the same time in parallel? Maybe it makes more sense to group the servers into specific groups and ensure, e.g. with load balancing, that there is no service interruption for the end users (better orchestration understanding).
If you have an automation friendly culture in the company the move between different automation frameworks is easy.
Please leave a comment if you like to share your feedback, experience or horror story where you have e.g. shutdown a whole data center with Remote Execution.
A nice introduction to “Unified Archives” (“UAs”) can be found on the Oracle blog of Jesse Butler.
With AI (Automated Installer), Solaris 11 already has an install service with many great features. But if you don't use AI very often, it can easily look complex. In this post I will show how to deploy global zones with a simpler method.
UAs are very easy to use. After installing Solaris with your preferred installation procedure (ISO, AI, etc.) you can customize the installation. When you are finished with your configuration, you can create a single archive file with the following command:
# archiveadm create /data/gold-image-v1.uar
Unified Archives are fully supported by the Automated Installer (AI); you can just add the archive to AI. If you can't use AI for whatever reason, you can use the very handy AI boot media instead. This boot media is basically the boot image which is downloaded from the AI server during netboot, with additional support for fetching the manifest files and sysconfig profiles from a web server.
You only need the following simple infrastructure:
In this example the web server is a Solaris host with the IP 192.168.0.20; you just need to start the web server and copy the files to /var/apache2/2.2/htdocs:
# svcadm enable http:apache22
# tree -ug /var/apache2/2.2/htdocs
/var/apache2/2.2/htdocs
├── [root  bin ]  testserver1.xml
├── [root  bin ]  sol-11_2-beta-sparc.uar
└── [root  bin ]  archive-manifest.xml
The AI manifest used is a simple XML file:
1 | <?xml version="1.0" encoding="UTF-8"?> |
This AI manifest simply defines how to lay out the rpool and where to find the archive file.
As we don't use a traditional AI server in this example, we also need to place the sysconfig profile of the server to be installed as an XML file on the web server.
1 | <service_bundle type="profile" name="sysconfig"> |
In this example profile, basic settings like the hostname (“testserver1”), the root password (“solaris1”) and networking are defined. In the profile we use DHCP, but you can also define static IPs.
If you have all files on the web server, you can start the installation.
Now you just need to boot the server which you would like to install with the AI boot media. The boot media has the important parameters aimanifest and profile, which need to point to the files on the web server.
Especially the installation of SPARC systems is easy, because you can boot the system with the following command from OBP:
boot /virtual-devices@100/channel-devices@200/disk@1 - install aimanifest=http://192.168.0.20/archive-manifest.xml profile=http://192.168.0.20/testserver1.xml
In this example the AI boot media is presented as the LDom vdev /virtual-devices@100/channel-devices@200/disk@1. But you can also boot up, for example, a VirtualBox Intel VM and type the parameters into the GRUB command line. Just select the “Automated Install custom” GRUB menu entry, press e for edit and add the aimanifest and profile parameters as shown in the following example:
$multiboot $kern $kern -B install=true,aimanifest=http://192.168.0.20/archive-manifest.xml,profile=http://192.168.0.20/testserver1.xml
With F10 you can kick off the non-interactive installation by booting the VM. If you test on Intel, don't forget to use the x86 AI boot media and archive files.
09:30:28    Starting installation.
09:30:29    1% Preparing for Installation
09:30:32    8% target-discovery completed.
09:30:36    Selected Disk(s) : c1d0
09:30:37    9% target-selection completed.
09:30:37    10% ai-configuration completed.
09:30:37    9% var-share-dataset completed.
09:30:43    10% Beginning archive transfer
09:30:43    Commencing transfer of stream: e5128cac-a66c-657c-9332-bd0ea414d8a6-0.zfs to rpool
09:30:51    12% Transferring contents
09:31:44    Completed transfer of stream: 'e5128cac-a66c-657c-9332-bd0ea414d8a6-0.zfs' from http://192.168.0.20/sol-11_2-beta-sparc.uar
09:31:48    Archive transfer completed
09:31:51    Setting boot devices in firmware
09:31:51    91% boot-configuration completed.
09:31:52    92% setup-swap completed.
09:31:53    92% device-config completed.
09:31:59    92% apply-sysconfig completed.
09:32:00    98% boot-archive completed.
09:32:03    100% create-snapshot completed.
09:32:03    Automated Installation succeeded.
09:32:03    System will be rebooted now
Automated Installation finished successfully
Auto reboot enabled. The system will be rebooted now
As you can see in this (shortened) console output, the deployment of the 1.3 GB archive file needs less than 2 minutes. After the reboot the sysconfig profile is applied and the installation is finished.
Do you also think this method is easier to use than a full AI infrastructure? Feel free to share your comment or ping me.
Many admins are already excited about configuration management frameworks like Puppet, that’s good! With these frameworks it is possible to implement all sorts of automation for many operating systems. But does this mean you should automate every task from the beginning to the end with a single tool?
If you can do it, then of course! It has some advantages, for example you need fewer different automation skills in the company. But there is always the big risk that automating a task takes just too long.
Many times provisioning of new servers is an early target for using configuration management frameworks. But implementing the whole process with Puppet can take a long time, because there are just many tasks like OS installation, security updates, OS configuration, etc.
So how to automate server provisioning with Puppet on the fast way?
In my opinion: by not using Puppet so much in the first phase. The very old idea of “Golden Images” in combination with Puppet makes it a lot easier to build an automated server provisioning process quickly. This combination also allows you to easily improve the degree of automation in later phases with Puppet.
The idea of Golden Images is that you build a “Master” server (installing OS, updates, etc) and make a “Gold” copy of it. Now you can deploy this copy (image) on many other servers. Of course a part of the configuration, like the network address, needs to be adopted.
This process is easy because you don't need real automation tools; you can configure the Master server manually, as slowly as you like. Deploying the Gold Image is usually very fast, because many times it's just a single archive file which is extracted to the target server. Configuration management files and scripts usually walk sequentially over each file which needs to be modified, which is a lot slower.
Of course there are disadvantages. If you need to change a tiny bit in the Gold Image, you need to rebuild the image. And some operating systems and applications are easier to use with this approach than others. For example, applications which write the server name into 100 hidden files are a pain for this method. Additionally you should keep the number of images low, because OS images easily need some GB on disk.
Depending on the platform different supporting technologies are available:
Now just use Puppet to fight the disadvantages of Golden Images. If you use Golden Images for the static parts and Puppet for the fast changing configuration, you get the server provisioning on a quick way automated.
For example you can put the following into the Golden Image:
The Puppet manifests can handle:
The nice feature of this method is, that you can move every configuration which is wrong in the static Golden Image into a Puppet manifest. By and by the Golden Image will become more and more generic and the Puppet manifests will grow. And you will get the return on your invested time early.
Because of the multi-platform support of Puppet it’s natural that the static configurations of the different OS Golden Images move into a combined Puppet manifest.
Did you also automate the server provisioning? Would you like to share your experience or your pain? Just leave a comment or ping me.
Why the move away from Blogger? I used Blogger with the old theme for 5 years and I was quite satisfied with the features and site performance. All for free.
But over the years the pain was growing: I spent too much time fixing the layout, and the integration of source code examples was hell. Therefore I was looking for a new blogging system which additionally has a better layout for mobile visitors.
I had a quick look at some hosted solutions like good old Wordpress, but quickly found that static web site generators are perfect for me. After playing with the popular ruby based static blog generator Octopress, I found Hexo. Hexo is a simple node.js-based static blogging framework, which is easy to learn and extend. Currently I host the generated static files on Google Cloud Storage.
The new system offers the following features:
Of course I have some further improvements on my todo list, like:
But frankly, besides the Solaris-specific advantages, the biggest advantage for me is that you can manage Linux, Windows and Solaris with the same automation framework. That means if you already have somebody in the company with Puppet knowledge, you likely don't need an extra engineer for the Solaris automation. I think this is huge.
Puppet has supported Solaris for a long time, thanks to Puppet Labs and various open source contributors, but starting with 11.2 Oracle is contributing improvements and its own providers for Solaris technologies like boot environments, network virtualization, etc.
To start using Puppet you have to install it. For testing you can download the excellent Solaris 11.2 beta Virtualbox image and install the package.
# pkg install puppet
You can create a backup boot environment for very easy rollback, before you apply further automated changes with Puppet:
1 | boot_environment { 'solaris-backup': |
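The manifest above is collapsed; given the beadm output below, it presumably looked roughly like this minimal sketch, saved e.g. as create-boot_environment.pp to match the apply command:

boot_environment { 'solaris-backup':
  ensure => present,
}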
# puppet apply -v create-boot_environment.pp
Notice: Compiled catalog for sol112beta.zach.st in environment production in 0.03 seconds
Info: Applying configuration version '1400407333'
Notice: /Stage[main]/Main/Boot_environment[solaris-backup]/ensure: created
Notice: Finished catalog run in 4.60 seconds

# beadm list
BE             Active Mountpoint Space Policy Created
--             ------ ---------- ----- ------ -------
solaris        NR     /          5.85G static 2014-04-24 13:51
solaris-backup -      -          84.0K static 2014-05-18 10:02
The next Puppet manifest does the following:
1 | zfs { 'rpool/export': |
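This block is truncated as well. Based on the simulation and apply output below (a ZFS quota, the tmux package and a vnic0 VNIC over net0), a sketch of what basic-config.pp likely contained; the lower_link parameter name for the vnic type is an assumption:

zfs { 'rpool/export':
  quota => '1G',
}

package { 'tmux':
  ensure => installed,
}

vnic { 'vnic0':
  ensure     => present,
  lower_link => 'net0',
}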
In these examples we don't use a dedicated Puppet server (puppetmaster); we just use the Puppet agent with local files. Of course Puppet gives you the great simulation mode ('--noop') for free:
# puppet apply -v --noop basic-config.pp
Notice: Compiled catalog for sol112beta.zach.st in environment production in 0.25 seconds
Notice: /Stage[main]/Main/Zfs[rpool/export]/quota: current_value none, should be 1G (noop)
Notice: /Stage[main]/Main/Package[tmux]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Main/Vnic[vnic0]/ensure: current_value absent, should be present (noop)
Notice: Class[Main]: Would have triggered 'refresh' from 3 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 1.99 seconds
Apply manifest to the system:
# puppet apply -v basic-config.pp
Notice: Compiled catalog for sol112beta.zach.st in environment production in 0.25 seconds
Notice: /Stage[main]/Main/Zfs[rpool/export]/quota: quota changed 'none' to '1G'
Notice: /Stage[main]/Main/Package[tmux]/ensure: created
Notice: /Stage[main]/Main/Vnic[vnic0]/ensure: created
Notice: Finished catalog run in 43.38 seconds
As you can see Puppet has applied the changes as expected:
# df -h /export/home
Filesystem           Size  Used  Available  Capacity  Mounted on
rpool/export/home    1.0G  32K   1.0G       1%        /export/home

# pkg list tmux
NAME (PUBLISHER)     VERSION               IFO
terminal/tmux        1.8-0.175.2.0.0.37.1  i--

# dladm show-vnic
LINK   OVER  SPEED  MACADDRESS       MACADDRTYPE  VIDS
vnic0  net0  1000   2:8:20:71:3f:72  random       0
These are just a few examples; there are also interesting possibilities with SMF stencils, and I assume not all new providers are included in the 11.2 beta yet. If you find some bugs in this beta, please report them to get them fixed in the final release.
Related:
As mentioned in the last post, software dependencies can be a blocker for the adoption of configuration management frameworks. In complex or legacy environments where one has several different and antique operating systems versions, even the installation of this frameworks can be painful. Easily too painful for a time saving tool.
About a year ago, I was trying to install the Puppet agent on Solaris 10, Solaris 11 and RHEL 5.x. And it took longer than 5 minutes …
Basically the most install tutorials require to have a current operating system and/or a connection to the Internet from your target systems. These requirements were just not realistic for my target servers at my work place.
We just did not want to upgrade the whole data center, before we can install a small piece of ruby software, which maybe helps us in the future. We rather wanted to use the time to write Puppet manifests to automate our tasks, to get some effort savings which we can re-invest in upgrading old operating systems.
After some struggling I tried to compile everything from source and I was surprised how easy it was, also on Solaris. The following build guide works at least for Solaris 10, Solaris 11, RHEL 5/6, CentOS 5/6.
The idea is to build the Puppet stack and all libraries which could cause problems into a separate directory, in this example /opt/mypuppet. This way the Ruby installation of the Puppet agent does not interfere with your system's Ruby.
So if you like to “uninstall” the Puppet agent you can do it easily with:
# rm -r /opt/mypuppet
You will need systems with developer tools like compilers installed; it makes sense to use dedicated systems for that.
If you use Solaris 10, you very likely customize the distribution heavily. Take care that at least the following packages are installed on the build system:
# pkgadd -d // SUNWbinutils SUNWarc SUNWgcc SUNWgccruntime \ SUNWhea SUNWlibmr SUNWlibm SUNWopensslr SUNWopenssl-libraries \ SUNWopenssl-include SUNWopenssl-commands SUNWsprot SUNWxcu4t
Additionally, some build scripts are only tested with the GNU tools; you can avoid some trouble if you just make “grep” and “sed” resolve to the GNU versions (e.g. overwrite the path to grep and sed with a symlink to ggrep and gsed).
With Solaris 11 it is easier:
# pkg install group/feature/developer-gnu
# yum install openssh-clients gcc openssl-devel
# wget http://pyyaml.org/download/libyaml/yaml-0.1.4.tar.gz
# ./configure --prefix=/opt/mypuppet
# make ; make install
Always check the supported Ruby versions in the documentation. Since Puppet 3.2 also Ruby 2.0 should be supported.
# wget http://cache.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p484.tar.gz
# ./configure --prefix=/opt/mypuppet --enable-shared --with-opt-dir=/opt/mypuppet --disable-install-doc --enable-rpath
# make; make install
# wget http://downloads.puppetlabs.com/facter/facter-1.7.5.tar.gz
# /opt/mypuppet/bin/ruby install.rb
# wget http://downloads.puppetlabs.com/hiera/hiera-1.3.1.tar.gz
# /opt/mypuppet/bin/ruby install.rb --configdir=/opt/mypuppet/etc
# wget http://downloads.puppetlabs.com/puppet/puppet-3.4.2.tar.gz
# /opt/mypuppet/bin/ruby install.rb --configdir=/opt/mypuppet/etc
As long as the bug PUP-1567 is not fixed, you need to apply the following patch, to make “puppet apply” work with the alternate installation directory.
1 | --- /opt/mypuppet/lib/ruby/site_ruby/1.9.1/puppet/util/run_mode.rb 2013-12-26 18:48:18.000000000 +0100 |
To ship the build agent to the target systems, pack the files e.g. in a RPM, Solaris package or just a tar-archive:
build-server:  # tar cvf puppet-v1.tar /opt/mypuppet
target-server: # cd /opt ; tar xvf puppet-v1.tar
Building from source definitely has some disadvantages, e.g. the build effort. If the pre-built packages from your OS vendor or Puppet Labs work for you, USE them.
Although we have now been building the agent from source for almost a year, we always had the plan to migrate to pre-built packages, especially after the Solaris 11.2 release, which will ship with full Puppet integration.
Also the excellent Pro Puppet book has a good coverage of the many ways to install Puppet, which you should read to save a lot of time.
Many say: If you do system administration for more than one server and you don’t use a configuration management framework, you do it wrong!
I fully agree with this statement. But according to my observation, everybody seems to be interested in using such frameworks, yet in reality the adoption rate is quite low.
I think there are many possible blockers, but in my opinion the introduction phase of these frameworks in particular is just too hard for some complex environments and companies.
Nowadays I mostly use Puppet, but I also started quite late, mainly for silly reasons, for example the fear of additional software dependencies and the fact that the popular frameworks did not use my favorite scripting language, Python. Newer Python-based frameworks like Salt need Python 2.6, which was not available on all my target servers.
Anyway, it turned out all the blockers for the introduction of these frameworks can be removed, one by one. I try to address some of the biggest blockers of the introduction of the configuration management framework Puppet in the following posts: